


and Industrial Mathematics (ITWM). The VSC-3 cluster is managed using Bright Cluster Manager from Bright Computing.

On the other side of the Atlantic, a different submerged cooling solution has emerged as a 'proof of concept' announced by 3M, in collaboration with Intel and SGI. This uses two-phase immersion cooling technology. SGI's ICE X, the fifth generation of SGI's distributed memory supercomputer, and the Intel Xeon processor E5-2600 hardware were placed directly into 3M's Novec engineered fluid. According to 3M, the two-phase immersion cooling can reduce cooling energy costs by 95 per cent, and reduce water consumption by eliminating municipal water usage for evaporative cooling. Heat can also be harvested from the system and reused for heating and other process technologies, such as desalination of sea water.

In common with the other cooling systems that use a heat-transfer fluid other than air, the 3M technique reduces the overall size of the data centre – the company estimates that the space required will be reduced tenfold compared to conventional air cooling. The partners also believe that their immersive cooling will allow for tighter component packaging, and hence greater computing power in a smaller volume. In fact, they claim that the system can cope with up to 100 kilowatts of computing power per square metre.

'Through this collaboration with Intel and 3M, we are able to demonstrate a proof-of-concept, to reduce energy use in data centres, while optimising performance,' said Jorge Titinger, president and CEO of SGI. 'Built entirely on industry-standard hardware and software components, the SGI ICE X solution enables significant decreases in energy requirements for customers, lowering total cost of ownership and impact on the environment.'

Beyond the plumbing, the next step will be energy-literate sophistication in the way the cluster runs its jobs.




Eurotech believes that it can improve the plumbing by constant engineering improvement, so that it can reduce the cost of a water-cooled supercomputer to less than that of an air-cooled machine – even when cost savings in terms of equipment to reject heat to the outside environment are taken out of the equation. But, using expertise from the other side of its business, in embedded systems, Eurotech is starting to provide high-frequency measurement of temperature and power at the nodes within the cluster. With real data on how much power different applications consume and how that power consumption is distributed, software writers will have the opportunity to build applications that can influence the power usage.

'For management of our nodes, we tend to use low-power processors; and on the indirect tasks of an HPC – for example, measuring the temperature sensors – we use very efficient compute modules that we "borrow" from our colleagues [on the embedded side],' Arts said.


'If you look at the cross-links we have in-house, then you can see HPC technologies ending up in embedded and the "nano-pc" technologies ending up in HPC'. High-performance computers, he continued, 'have many sensors on different nodes, and you have to transport that data.' Building on the cross-disciplinary expertise, Arts believes it will be possible 'to read out data across the whole machine – to make the developers aware of the energy and temperature across the machine. This feedback is the first step to energy-efficient programming'. The idea of programming for energy rather than compute efficiency is being promoted by many people and, he continued: 'What we as manufacturers want is to give people tools to work with'.

Arts concluded: 'What I also want to say is that we have a big research community in Europe that is focused on energy-efficient high-performance computing. As an integrator of the hardware, I see myself as an enabler of the community, so it can push the limits. I think this is possible within Europe and we are very strong. With European projects – such as Prace – we are able to bring new technologies into this field. Europe has a very strong team working towards energy-efficient solutions. I am very proud of that.'
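By way of illustration only (the article does not describe Eurotech's actual tooling), the kind of node-level energy and temperature readout discussed above can be sketched against the standard Linux hwmon and Intel RAPL sysfs interfaces. Both interfaces are assumptions here and may not be exposed in the same way on every cluster node.

```python
"""Illustrative sketch: sample a node's temperatures and package power
via standard Linux sysfs interfaces (hwmon and Intel RAPL).  This is not
Eurotech's tooling; it only shows the kind of per-node feedback the
article describes, assuming the node exposes these files."""

import glob
import os
import time


def read_temperatures():
    """Return {"<chip>/<sensor-file>": degrees Celsius} from the hwmon interface."""
    temps = {}
    for sensor in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        chip_dir = os.path.dirname(sensor)
        try:
            with open(os.path.join(chip_dir, "name")) as f:
                chip = f.read().strip()
            with open(sensor) as f:
                millideg = int(f.read().strip())
        except (OSError, ValueError):
            continue  # sensor not readable on this node
        temps[f"{chip}/{os.path.basename(sensor)}"] = millideg / 1000.0
    return temps


def read_energy_counters_uj():
    """Return cumulative energy counters (microjoules) from Intel RAPL domains."""
    counters = {}
    for path in glob.glob("/sys/class/powercap/intel-rapl:*/energy_uj"):
        try:
            with open(path) as f:
                counters[os.path.dirname(path)] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # RAPL not exposed, or not readable without root
    return counters


def sample_package_power(interval_s=1.0):
    """Estimate average power (watts) per RAPL domain over a short interval."""
    before = read_energy_counters_uj()
    time.sleep(interval_s)
    after = read_energy_counters_uj()
    return {
        domain: (after[domain] - before[domain]) / (interval_s * 1_000_000)
        for domain in after
        if domain in before and after[domain] >= before[domain]  # skip counter wrap
    }


if __name__ == "__main__":
    print("temperatures (C):", read_temperatures())
    print("package power (W):", sample_package_power())
```

In practice such samples would be time-stamped and streamed to a central collector so developers can correlate them with application phases; that transport layer, which Arts alludes to, is omitted here.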


Taking up the intelligent power challenge


The demand for intelligent power management, monitoring, and control is growing as data centre power consumption continues to rise. Adaptive Computing's new Moab HPC Suite – Enterprise Edition 8.0 – introduces several power-management capabilities that enable HPC administrators to achieve the greatest efficiency. Two such features are power throttling and clock-frequency control. With power throttling, Moab manages multiple power-management states, allowing administrators to place compute nodes in active, idle, hibernation, and sleep modes, which helps to reduce energy use and costs. Moab also minimises power usage through active clock-frequency control, which provides greater control over processor consumption and memory workloads, enabling administrators to create an equitable balance that saves energy and maintains high levels of performance.
www.adaptivecomputing.com
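Clock-frequency control of this kind is ultimately driven through operating-system interfaces such as Linux cpufreq. The sketch below shows only that generic mechanism; it is not Moab's own API, which is not detailed here, and the 1.2 GHz cap is an arbitrary example value.

```python
"""Illustrative sketch of node-level clock-frequency capping via the Linux
cpufreq sysfs interface.  It shows the generic OS mechanism that workload
managers' frequency control relies on; it is not Adaptive Computing's Moab
API.  Writing these files normally requires root."""

import glob
import os


def cap_max_frequency(limit_khz):
    """Apply a scaling_max_freq cap (in kHz) to every online CPU."""
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(limit_khz))


def restore_hardware_maximum():
    """Lift the cap back to each CPU's hardware maximum frequency."""
    for cpufreq_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq"):
        with open(os.path.join(cpufreq_dir, "cpuinfo_max_freq")) as f:
            hw_max_khz = f.read().strip()
        with open(os.path.join(cpufreq_dir, "scaling_max_freq"), "w") as f:
            f.write(hw_max_khz)


if __name__ == "__main__":
    cap_max_frequency(1_200_000)   # e.g. throttle a drained, idle node to 1.2 GHz
    # ... node sits idle between jobs ...
    restore_hardware_maximum()     # restore full clocks before the next job starts
```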


Asetek’s RackCDU D2C is a ‘free cooling’ solution that captures between 60 per cent and 80 per cent of server heat, reducing data-centre cooling cost by more than half, and allowing 2.5x-5x increases in data-centre server density. D2C removes heat from CPUs, GPUs, and memory modules



