deskside supercomputers


‘What I’ve observed is that most work with CAE (computer-aided engineering) applications gets done on the desktop, and only when these tools are inadequate do users turn to high-performance computing (HPC) systems.’ This statement comes from Luke Scharf, a systems engineer at the US National Center for Supercomputing Applications (NCSA), located at the University of Illinois at Urbana-Champaign. He works in both the Innovative Systems Lab (exploring new architectures) and the Private Sector Program (helping businesses utilise advanced technology). This combined background gives him a unique perspective when commenting on the use of HPC versus desktop and deskside computers. He has an interesting prediction:

‘Our Private Sector Program is acquiring some 48-core servers because we think that they approximate what engineering workstations will look like in a few years. We’ve noticed that engineers do a lot of useful work with 64-core simulations on our clusters, but we also know they do a lot of practical, real-world engineering work on the four- to eight-core engineering workstations that are commonly deployed to engineers’ desks for CAD and analysis today.’

Scharf continues: ‘The large-scale simulations that happen on our big clusters are just the tip of the iceberg when it comes to the engineering that gets done on any given day, and the sooner the engineering workstations can handle the math, the sooner that work will go under the desktop waterline. Further, “fat” desktop engineering workstations, with dozens of cores and hundreds of GB of RAM, are cheaper and easier to use and program than a full supercomputer. So, if a $12,000 workstation will do the work of a 64-core supercomputer – and we have every reason to think it will within three to five years – then it’s a win for our users and for quality engineering.’

He adds that one advantage of running jobs locally is eliminating the time it takes to transmit data. ‘I recently helped out on a simulation with a 40GB mesh and it took almost as long to move the data as it did to actually run the simulation. Depending on the situation, if a simulation fits on a many-core desktop machine, you can streamline the operations and do all the work more efficiently in one step.’
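Scharf’s example is easy to sanity-check with back-of-envelope arithmetic. The short Python sketch below makes his point; the link speed and effective throughput are illustrative assumptions, not figures from his simulation.

```python
# Back-of-envelope estimate of moving a 40GB mesh to a remote cluster.
# The link speed and efficiency below are illustrative assumptions,
# not figures from the article.
mesh_bytes = 40 * 1024**3        # 40GB mesh
link_bits_per_s = 1e9            # assumed gigabit office link
efficiency = 0.6                 # assumed effective throughput

one_way = mesh_bytes * 8 / (link_bits_per_s * efficiency)
round_trip = 2 * one_way         # ship the mesh out, pull the results back
print(f"One way: {one_way / 60:.0f} min; round trip: {round_trip / 60:.0f} min")
# Roughly ten minutes each way, before the job even starts.
```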


Scharf admits it’s extremely difficult for him to estimate the relative amount of work that gets done on desktops versus clusters: ‘But a figure of 90 per cent on the desktop wouldn’t surprise me,’ he says. He feels that most engineering work will continue to be done on the desktop. ‘When somebody is working on “hero” problems – that’s what we call the extremely large problems that can only be solved on a cluster – it’s easy to lose sight of how much work gets done on desktops.’


When there’s no server room infrastructure

Powerful desktop machines are particularly attractive for small- and medium-sized companies that don’t have the technical infrastructure for a server room, and the comments above show that they can still get plenty of work done. Some people suggest placing a small rack populated with ‘pizza box’ servers in an office, but Dr Oliver Tennert, director of Technology Management & HPC Solutions at transtec AG, points out that these are still relatively large and need cooling. His company sells HPC systems of all sizes, but he has never seen any demand for a rack in an office. ‘Besides, these things are loud and make more noise than a vacuum cleaner. They’ll also blow away all the papers on your desk with their fans. I just can’t imagine them in an office.’

Instead, he has seen a surge in sales for systems in the class of the Cray CX1. ‘A system like the CX1 is positioned as a deskside supercomputer. It plugs into a standard office power outlet, requires no special cooling, has special noise reduction and is a blade system with a range of choices to meet various needs.’

The latest version of this system is the CX1-iWS, which stands for ‘integrated workstation’. It combines a workstation and a cluster in a single deskside form factor that runs without any special power or machine-room infrastructure. The workstation node is powered by Windows 7 and features an Nvidia Quadro graphics card that supports two high-definition monitors; the Quadro card can also perform GPU acceleration by utilising the CUDA environment.

Each CX1-iWS comes with one visualisation workstation that includes 24GB of memory and Windows 7, a three-node compute cluster running Windows HPC Server 2008 R2, 4TB of storage, and one or two power supplies (110 or 200+ VAC). The workstation blade uses an Intel Xeon 2.26GHz processor and a Quadro FX 4800 board, while the compute cluster relies on Intel Xeon 2.93GHz processors and 24GB of memory. The 7U modular enclosure measures 31 x 44.5 x 35.5cm (WxHxD), and a fully-loaded chassis weighs 136 lbs.
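As a rough illustration of the kind of GPU offload that CUDA support enables, the sketch below scales an array on the graphics card. The Python/Numba toolchain is an assumption made for illustration; the article says only that the card supports CUDA, not which toolchain is used.

```python
# Minimal sketch of CUDA-style GPU offload. Python with the Numba
# package is an illustrative assumption; the article states only that
# the Quadro card supports GPU acceleration through CUDA.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, factor):
    i = cuda.grid(1)              # global thread index
    if i < x.size:                # guard threads past the end of the array
        out[i] = x[i] * factor

x = np.arange(1_000_000, dtype=np.float32)
out = np.empty_like(x)
threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](out, x, 2.0)   # arrays are copied to and from the GPU
```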


One user of the CX1 is NASA’s Climate in a Box Project, which is exploring the utility of desktop supercomputers in providing a complete, pre-packaged, ready-to-use toolkit for climate research products and on-demand access to an HPC system. According to a presentation given by Dr Tsengdar Lee, High End Computing program manager, a ¼ x ¼ degree global atmosphere model run for hurricane forecasts fits in this machine, and a five-day hurricane forecast can be completed in two hours.
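To put that resolution in perspective, a quarter-degree global grid works out to roughly a million horizontal cells, as the small calculation below shows. Vertical levels and timestep, which the presentation does not give, are left out.

```python
# Horizontal grid size of a 1/4 x 1/4 degree global atmosphere model.
# Vertical levels and timestep vary by model and are not given in the
# article, so only horizontal cells are counted here.
resolution_deg = 0.25
lon_points = int(360 / resolution_deg)    # 1440 points around each latitude circle
lat_points = int(180 / resolution_deg)    # 720 points from pole to pole
cells = lon_points * lat_points
print(f"{lon_points} x {lat_points} = {cells:,} horizontal cells")   # 1,036,800
```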


Cray’s CX1-iWS integrated workstation


Likewise, recognising the need for such systems is SGI, which recently released the Octane III. In contrast to standard dual-processor workstations with only eight cores and moderate memory, this machine holds as many as 120 cores and nearly 2TB of memory. It is available in two primary configurations: as a deskside cluster with as many as 10 nodes, or as a dual-node graphics workstation. Supported operating systems include SUSE Linux Enterprise Server and Red Hat Enterprise Linux; all configurations are available with preloaded software, including the Altair PBS Professional scheduler.
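Because PBS Professional is a batch scheduler, work on a machine like this is typically expressed as a job script handed to qsub. The Python sketch below generates and submits one; the job name, resource request and solver command are hypothetical, and only the standard qsub interface is assumed.

```python
# Sketch of submitting a batch job to PBS Professional. The script
# contents (job name, resources, solver command) are hypothetical;
# only the standard qsub interface is assumed.
import subprocess

job_script = """#!/bin/bash
#PBS -N mesh_solve
#PBS -l select=1:ncpus=8
#PBS -l walltime=02:00:00
cd $PBS_O_WORKDIR
./solver --mesh mesh40gb.dat
"""

with open("job.pbs", "w") as f:
    f.write(job_script)

# qsub prints the new job's ID on success
result = subprocess.run(["qsub", "job.pbs"],
                        capture_output=True, text=True, check=True)
print("Submitted", result.stdout.strip())
```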

