

Pleiades supercomputer housed at Nasa Advanced Supercomputing division


➤ William Thigpen, advanced computing branch chief at Nasa, explained that this wide-ranging set of requirements, combined with a need to provide more capacity and capability to its users, puts the centre in a fairly precarious position when it comes to upgrading legacy systems. The upgrade process must be managed carefully to ensure there is no disruption in service to its varied user base. Thigpen said: 'A technology refresh is very important, but we can't shut down our facility to install a new system.'

He went on to explain that Nasa is not a pathfinding HPC centre like those found in the US Department of Energy's (DOE) National Laboratories – the HECC is focused primarily on scientific discovery, rather than on testing new technology. Thigpen explained that when it is time to evaluate new systems, Nasa will bring in small test systems – usually in the region of 2,000-4,000 cores, around 128-256 nodes – so that they can be evaluated against the wide range of applications used at the centre. Thigpen said: 'The focus at Nasa is about how much work you can get done.'

In this case it is the science and engineering goals, coupled with the need to keep the current system operational, that necessitate a focus on scientific progress and discovery rather than ROI, FLOPs, or the development of a specific technology – as is the case for the DOE.




At Nasa, because the codes are so varied, any new system must be as accommodating as possible, so that the centre can derive the most performance from the largest possible number of its applications. In some ways, this locks the HECC facility into CPU-based systems, because any switch in the underlying architecture – or even a move to a GPU-based system – would mean a huge amount of effort in code modernisation, as applications must be ported from the current CPU-based supercomputer.
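To give a rough sense of what that porting effort involves, the sketch below contrasts a simple CPU loop with a hypothetical CUDA port of the same operation. It is a minimal illustration rather than Nasa code, and every function and variable name in it is invented for the example.

    // Minimal sketch: porting a CPU loop to a CUDA kernel (illustrative only).
    #include <cuda_runtime.h>

    // Original CPU version: scale one array and accumulate into another.
    void saxpy_cpu(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // GPU port: the loop body becomes a kernel executed by many threads.
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    // The caller must now also manage device memory and host-device transfers.
    void run_on_gpu(int n, float a, const float *x_host, float *y_host) {
        float *x_dev, *y_dev;
        cudaMalloc((void **)&x_dev, n * sizeof(float));
        cudaMalloc((void **)&y_dev, n * sizeof(float));
        cudaMemcpy(x_dev, x_host, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(y_dev, y_host, n * sizeof(float), cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy_gpu<<<blocks, threads>>>(n, a, x_dev, y_dev);

        cudaMemcpy(y_host, y_dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(x_dev);
        cudaFree(y_dev);
    }

Even in this toy case the GPU path needs its own kernel, explicit device allocations and data transfers; multiplied across large, long-lived engineering codes, that is where the modernisation effort lies.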


Assessing requirements

The requirements for upgrading HPC systems can range from the obvious, such as increasing performance or decreasing energy consumption, to the more specialised, such as pathfinding for future system development at the US Department of Energy's National Laboratories, or the pursuit of science and engineering goals at HPC centres such as Nasa's High-End Computing Capability (HECC) Project. Against this background, it is necessary to understand how a system is currently utilised, so that any upgrades increase productivity rather than hinder it.


One aspect that Bolding was keen to stress is that Cray can offer hybrid systems that accommodate both CPU and GPU workloads. 'If you have workloads that are more suited to the GPU architecture, you can move those workloads over quickly and get them in production,' he said.

An example of this architecture is the Cray Blue Waters system, housed at the University of Illinois for the US National Center for Supercomputing Applications (NCSA). This petascale system, capable of somewhere in the region of 13 quadrillion calculations per second, is a mix of a Cray XE and a Cray XK, and contains 1.5 petabytes of memory, 25 petabytes of disk storage and 500 petabytes of tape storage.

'It is basically combining those two systems into a single integrated framework, with a single management system, single access disk and its subsystems to the network. You just have one pool of resources that are GPU based and a pool of resources that are CPU based,' explained Bolding.
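As a loose illustration of how a single application can exploit both pools, the hypothetical sketch below checks at run time whether the node it is running on exposes a GPU and falls back to a plain CPU loop if it does not. It is a generic CUDA example, not code from Cray, Blue Waters or Nasa.

    // Illustrative only: take the GPU path on GPU nodes, the CPU path elsewhere.
    #include <cuda_runtime.h>

    __global__ void scale_gpu(int n, float a, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] *= a;
    }

    void scale_cpu(int n, float a, float *y) {
        for (int i = 0; i < n; ++i) y[i] *= a;
    }

    void scale(int n, float a, float *y_host) {
        int devices = 0;
        // cudaGetDeviceCount reports how many GPUs this node exposes.
        if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0) {
            float *y_dev;
            cudaMalloc((void **)&y_dev, n * sizeof(float));
            cudaMemcpy(y_dev, y_host, n * sizeof(float), cudaMemcpyHostToDevice);
            scale_gpu<<<(n + 255) / 256, 256>>>(n, a, y_dev);
            cudaMemcpy(y_host, y_dev, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(y_dev);
        } else {
            scale_cpu(n, a, y_host);   // CPU-only node: no device present
        }
    }

In practice, placement of jobs on the CPU or GPU pool is handled by the system's workload manager; the sketch simply shows how both kinds of node can serve the same application under one framework.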


Cray has tried to reduce the burden of upgrading legacy systems by designing daughter cards, which remove the need to replace the motherboard – the CPU or GPU fits directly into the daughter card and then into the motherboard. This allows



