➤ new energy-efficient design and has been determined to operate within a power usage effectiveness (PUE) range of 1.03 to 1.05. This is a major improvement on Pleiades, NASA’s flagship supercomputer, which operates with a PUE of approximately 1.3. Pleiades currently sits at number 13 on the Top500.
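To put those figures in perspective: PUE is the ratio of total facility power to the power that actually reaches the IT equipment, so everything above 1.0 is cooling and distribution overhead. The short Python sketch below works through that arithmetic; the 1,000 kW IT load is an assumed round number used purely for illustration, not a published Electra specification.

# Illustrative sketch: PUE = total facility power / IT equipment power,
# so the non-compute overhead is (PUE - 1) x IT load.
# The 1,000 kW IT load below is an assumed figure, not an Electra specification.
def overhead_kw(pue, it_load_kw):
    """Facility overhead (cooling, power distribution) implied by a given PUE."""
    return (pue - 1.0) * it_load_kw

it_load_kw = 1000.0  # assumed 1 MW of IT equipment
for pue in (1.3, 1.05, 1.03):
    print(f"PUE {pue}: roughly {overhead_kw(pue, it_load_kw):.0f} kW of overhead")
# Moving from a PUE of 1.3 to 1.03 cuts the overhead from about 300 kW to about 30 kW
# for the same amount of compute.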


This new system is the first generation of this new modular approach to supercomputing at NASA, but the success of the project has led Thigpen and his colleagues at NAS to seriously consider employing this design when they come to replace Pleiades. ‘We are looking to start building a large modular system that will still be in this PUE range in the 2018 timeframe,’ said Thigpen. ‘Even this year we are planning an expansion to Electra that, when fully populated, would be an eight-petaflop computer.’

Although the power savings are an important driver behind this, Thigpen stressed that the modular design affords a large degree of flexibility when upgrading the system. For example, NASA’s last supercomputer, Pleiades, has been upgraded with seven generations of Intel processor technology, and it is expected that the follow-up to Pleiades will also go through several years of upgrades during its lifetime. A modular design helps systems designers to upgrade, as new modules can easily be attached to the existing system. Thigpen said: ‘I think the modular solution really affords a way to easily expand, and you really don’t even have to know what your end system is because you can add the components that you need as you grow into
them. We do not need to know, for instance, that we might be limited to 15 megawatts. Maybe we will be at six megawatts – but, in either case, using this approach we can add what we need, when we need it.’

Electra is a self-contained, modular HPC design with energy-efficient HPC as its primary focus. The new system has been dubbed a Data Center on Demand (DCoD) because of its modular design. However, this
modular design does not just deliver large energy savings; the new system also reduces water consumption by as much as 99 per cent. As a direct comparison, cooling the 2,300 processors that the container is designed to hold would require approximately 6,000 gallons of water per year – compared to the almost two million gallons it would take to cool those same processors in the current NAS facility. This is largely because the NAS facility ‘wastes’ water: the water used to cool the HPC system subsequently evaporates. The current NAS facility uses approximately 50,000 gallons of water each day, which evaporates into the air through a cooling tower located next to the main NAS facility.
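Those gallon figures are consistent with the saving Thigpen quotes; a quick back-of-the-envelope check in Python, using only the numbers given above, puts the reduction at roughly 99.7 per cent.

# Back-of-the-envelope check of the water figures quoted above
# (gallons per year, for the same 2,300 processors).
container_gallons = 6_000         # modular container design
nas_facility_gallons = 2_000_000  # existing NAS facility (‘almost two million gallons’)
saving = 1 - container_gallons / nas_facility_gallons
print(f"Water saving: {saving:.1%}")  # ~99.7%, in line with the ‘as much as 99 per cent’ claim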


Dedicated to high-performance computing, Electra obtains the impressive PUE figure through a combination of outdoor air and evaporative (adiabatic) cooling to get rid of the heat generated by the system. The initial system will contain four racks of SGI servers dedicated to scientific computation, although an expansion has already been announced for later this year.

‘I can’t stress enough the advantages over traditional computing centres on energy efficiency,’ said Thigpen. ‘It’s not just the electricity. Our current system is between a PUE of 1.26 and 1.3 – in that range. Looking at going to 1.03 is really great from an electricity standpoint, but from a water point of view we are saving 99 per cent of the water.’ This saving allows the facility to spend more money on computing resources, increasing the amount of science and engineering generated by the facility.

However, Thigpen stressed that although the potential increase in total performance is welcome, it is important to focus on energy efficiency, purely to reduce the amount of energy we waste powering today’s supercomputers: ‘We [NAS] do not only look at engineering models that are simulating the next launch vehicles, how the universe was formed, or what happens when black holes collide – we also look at the impact of man on the environment.’

He explained that for this project the NAS facility paid for its own utilities in addition to all of the other associated costs that come with setting up a new supercomputer – and that it is this holistic approach to resource management that separates NAS from other centres. ‘We are responsible for paying for everything – there are a lot of computing centres that do not ever get a power bill or a water bill, but someone in that organisation is paying for that. To be able to save that money goes directly into being able to do more science and engineering.’

The challenge of reducing energy consumption while increasing performance is a constant struggle for any HPC centre. The message here is that efficiency savings can be made when planning a new data centre or supercomputer, but the biggest savings require that data centres are designed from the outset with energy efficiency in mind. ‘I would say that anybody that is running a supercomputer, whether they are in a federal agency in some country or whether they are in a private company – they do not want to waste money, they want results. This is a way of getting more bang for your high-end computing buck,’ concludes Thigpen.

