Energy efficiency in HPC

Software solutions to energy efficiency


One of the simplest ways to improve energy efficiency is to switch off idle nodes. Accordingly, Altair's PBS Works middleware has been providing a 'Green Provisioning' power-management facility for the past four or five years – but mainly in applications such as animation, which can be 'bursty' in their demands on the system, rather than in HPC, where managers try to run the machines at 99 per cent utilisation. Recently, however, Bill Nitzberg, CTO of PBS Works, has been seeing 'some opportunities in traditional HPC, so people are paying attention.' But the issue is one of 'How do we go from just turning machines off that are idle, to smart predictive scheduling based on power? We have built a second-generation version of our green provisioning solution and are deploying it to government customers with some success.'

The software, he said, 'plays the game of Tetris with your jobs. With any workload there are holes, and normally you will fill those holes with small jobs. But sometimes there will be holes that you can't fill, and these will be of reasonable size, and so we enhanced our system to look for backfill holes and turn the nodes off. So the system is running at peak utilisation for its workload.' He cited one customer whose simulations suggest that it might save about $1,000 a day in power costs.

But as technology develops, he said, 'we can expect to move from simply turning things off to more interesting stuff: smart scheduling to optimise the use of power – gathering profile data on jobs to assess their power consumption. We want to take that data and run in more optimal ways.' In mid-April, Altair announced an extension of its partnership with SGI to integrate its power-management scheduler on SGI systems.
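For readers who want a feel for the 'backfill holes' logic Nitzberg describes, here is a minimal sketch under assumed data structures. It is not PBS Professional code; Node, Job and nodes_to_power_off are hypothetical names invented for illustration, and a real scheduler would make this decision inside its own backfill pass.

```python
# Minimal sketch: power off idle nodes whose backfill 'hole' is too large for
# any queued job to fill. NOT PBS Professional code; all names are hypothetical.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    walltime_s: float          # requested walltime of a queued job, in seconds

@dataclass
class Node:
    name: str
    idle: bool
    next_reservation_s: float  # seconds until a reserved/backfilled job needs this node

def nodes_to_power_off(nodes, queued_jobs, min_hole_s=1800.0):
    """Return idle nodes whose gap before the next reservation cannot be
    backfilled by any queued job and is long enough to justify an off/on cycle."""
    shortest_job_s = min((j.walltime_s for j in queued_jobs), default=float("inf"))
    victims = []
    for node in nodes:
        hole_s = node.next_reservation_s
        if node.idle and hole_s >= min_hole_s and shortest_job_s > hole_s:
            victims.append(node)
    return victims

# Example: a two-hour hole that even the shortest queued job (three hours) cannot fill.
nodes = [Node("n001", idle=True, next_reservation_s=7200.0)]
queue = [Job("sim42", walltime_s=10800.0)]
print([n.name for n in nodes_to_power_off(nodes, queue)])  # ['n001']
```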


system to the Texas Advanced Computing Center in 2010 and now has installations at five of the Top 100 supercomputing sites. But despite this, Christiaan Best said: 'There are people who are not looking to save money or energy. There are some purchasers for whom energy efficiency is not the highest priority. We can provide 60 per cent cheaper facilities and halve the power costs, but there are some people who will look you straight and say "we don't care".' He is working on a timeframe of five to 10 years before such systems are widely adopted: 'Two per cent of US electricity is associated with data centres, and half of that is due to cooling. It's not if, but when.'

Chad Attlesey remembers the problems of working with the liquid-cooled Cray-2 – there was a risk that the coolant would evaporate too fast and arcing would trigger plasma fires: 'What I did in my basement was to look at single-phase solutions that are very easy to maintain.'


Nitzberg concluded: 'I see our role as to look more holistically and take advantage of the fact that you are running a lot of jobs, and shuffle things around so that we still meet all of your needs. So all your work gets done in the morning, but you use less power doing it.'

Adam Carter, of the Edinburgh Parallel Computing Centre, agrees that having good power management – powering down the bits that are not being used – is important. But he sees a wider role for software beyond middleware and scheduling jobs, so that end-user application software also has a part to play in energy efficiency. He has recently been involved in a research project to build a small-scale parallel computer for data-intensive problems – applications that are of increasing importance in scientific computing.


The idea was to balance the flop rate – the speed of calculation – with the I/O rate, rather than concentrate solely on computational speed. 'If your code is not compute-bound, you are wasting compute time and energy if you've got a system with high-power CPUs that are waiting for data from memory,' he said. 'My argument would be that, as important as getting a high flops/Watt rate, is to make sure that all the other components of the machine are in balance for the sorts of problems for which you are using the machine. This is more important than trying to maximise the flops per Watt ratio.' But his conclusion is not entirely optimistic: 'Our biggest finding was that it's much harder than you might think. You need to change the software as well as the hardware. There is work to be done in rewriting software so it sits on these architectures.'
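Carter's point about balance can be made concrete with a back-of-the-envelope, roofline-style comparison between a machine's flops-to-bandwidth ratio and a code's arithmetic intensity. The sketch below uses made-up numbers and hypothetical helper names; it is not taken from the EPCC project.

```python
# Illustrative roofline-style balance check, not results from the EPCC project.
# A code is memory-bound when its arithmetic intensity (useful flops per byte
# moved to/from memory) falls below the machine balance (peak flops per byte
# of memory bandwidth). The hardware figures below are invented.

def machine_balance(peak_gflops: float, mem_bw_gbs: float) -> float:
    """Machine balance in flops per byte of memory traffic."""
    return peak_gflops / mem_bw_gbs

def is_memory_bound(intensity_flops_per_byte: float,
                    peak_gflops: float, mem_bw_gbs: float) -> bool:
    return intensity_flops_per_byte < machine_balance(peak_gflops, mem_bw_gbs)

# A hypothetical stencil code at ~0.5 flops/byte on a node with 500 Gflop/s peak
# and 80 GB/s of memory bandwidth (balance ~6.25 flops/byte) is memory-bound:
# its high-power CPUs would mostly be waiting for data.
print(is_memory_bound(0.5, peak_gflops=500.0, mem_bw_gbs=80.0))  # True
```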




Attlesey too uses a non-volatile oil: 'The liquid is food-grade, so you can drink it – although you don't go too far from the bathroom afterwards. It biodegrades in the environment – it's a green solution.'

Now president and chief technology officer of Hardcore Computer, based in Rochester, Minnesota, Attlesey also believes it will take time for the significance of the new generation of liquid-cooling technologies to get through to everyone. 'Liquid cooling is far more effective than air cooling [but] we have something that is considered a disruptive technology. We are nibbling round the edges.'

In his view, the energy-efficiency case is compelling: 'We can take 80 per cent out of the cooling cost of the data centre.' But he stressed that the capital cost of building the data centre is also reduced – much less specialised infrastructure is required, as there is no need for raised floors, for example. In addition, it is possible to get higher densities – 'you can put equipment closer together and components closer together. These combine to allow for a much smaller, much less expensive data centre.'

Attlesey also cited improvements in compute speed: 'Latency is an issue in HPC and in banking and finance.' With more conventional cooling, 'I can overclock the processors, but I sacrifice reliability and energy efficiency'. With liquid submersion cooling, 'you get everything – we can overclock processors because we can keep them cool. We get much better mean time to failure, and you get energy efficiency and other side benefits.'

In fact, he said, 'There is such a long list of value propositions we have a hard time focusing on what is most important to a given customer. There is no concern about humidity or environmental controls when you have liquid cooling, so you can do installations in harsh environments. Fire suppression is already built into the system. We're using standard FR4 circuit boards off the shelf – we have done years of material compatibility testing.' Attlesey believes that he can even run submerged cooling of spindle drives – 'I have a patent on that' – but there has been no significant customer requirement for it. Another issue is what you do with the heat: his system pumps the liquid, but the energy consumption 'is a nit compared to the use of fans. Here in Minnesota, it gets cold in the winter time, so we can reuse the heat for in-floor office heating to get additional efficiencies out of the system.'

He is reluctant to make general claims for Hardcore's system, because he believes that assessments have to be made on an installation-by-installation basis, but he was willing to cite potential savings of 80 per cent or more in cooling costs and, because these are half or more of data centre costs, savings of up to 50 per cent in running costs. 'Hopefully soon everyone will see the light – this will truly be a green solution and they get a lot more performance in the end.'


More efficient memories

Bathing computers in liquid, whether Hardcore's or Green Revolution's baby oil or Iceotope's refrigerant, may be disruptive technology, but it is surprising to hear a major multinational vendor such as Samsung stress that it, too, needs to work hard to get the message of energy efficiency across. At the beginning of May, Samsung held its



