high-performance computing


future. Such considerations are already driving European supercomputer owners to seek out energy efficiency. Cambridge University's Wilkes machine scored second highest in the most recent Green500, for example. The sense of urgency is perhaps not so acute in other parts of the world where electricity is cheaper, but even there data centres are being moved to sit adjacent to hydropower or other convenient and cheap sources of power.

Europe does not make its own CPUs, but it does have expertise in embedded processors, which have to draw low power, and this, according to Paul Arts, director of HPC research and development for Eurotech, has led to some fruitful cross-fertilisation. An international company based in northern Italy, Eurotech has a presence both in the embedded-systems sector and in high-performance computing, where its systems have been some of the most energy-efficient on the market. Eurotech Aurora systems took both the first and second slots in the June 2013 Green500 list.

But Eurotech's approach is not total immersion; rather, over the past six years it has developed a cooling system that delivers warm water to the processor. According to Arts, contact cooling is widely used within the Eurotech group for applications other than HPC, in embedded computing for example: 'So the competence for cooling and thermal designs is far from new to the group.' But he stressed a further advantage of the technological solution Eurotech had chosen: 'We want to build a compact machine. Through compactness we can create high speed. With immersion cooling, you have to create space between the components [for the coolant to flow between them] and you have to have something on top of them to create more heat-exchange surface. In our case, the coolant does not have to flow between the components and the cold plates are attached to the components via a thermal interface material that conducts the heat. It is efficient cooling in a smaller space.'

At present, the design point for Eurotech is to obtain 'free cooling' to the outside air, which means that the coolant water can be delivered at up to 50 degrees. 'In the end, the only temperature that's interesting is the junction temperature of the silicon itself. It depends on what components we are talking about, but if we are talking about CPU and embedded industrial silicon, we are talking about 105 degrees. For consumer silicon, it can be 100 degrees.'
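
The arithmetic behind that headroom is straightforward: the sustainable heat load scales with the gap between the maximum junction temperature and the coolant temperature, divided by the thermal resistance from junction to coolant. Below is a minimal sketch of that relationship; the thermal-resistance figure is an illustrative assumption, not a Eurotech specification.

```python
# Thermal headroom sketch: how much power a cold plate can remove
# before the silicon exceeds its junction-temperature limit.
# T_junction = T_coolant + P * R_th  =>  P_max = (T_j_max - T_coolant) / R_th

def max_power_watts(t_junction_max_c, t_coolant_c, r_th_c_per_w):
    """Maximum sustainable power for a given junction limit and coolant temperature."""
    return (t_junction_max_c - t_coolant_c) / r_th_c_per_w

R_TH = 0.25  # degC/W, junction to coolant -- illustrative assumption only

for t_j_max, label in [(105, "industrial silicon"), (100, "consumer silicon")]:
    p = max_power_watts(t_j_max, t_coolant_c=50, r_th_c_per_w=R_TH)
    print(f"{label}: up to {p:.0f} W per device with 50 degC water")
```

With 50-degree water and a 105-degree junction limit, the 55-degree margin is what leaves room for the higher-power parts discussed next.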


The headroom can be used not so much for overclocking as to take advantage of the higher-power CPUs available from Intel. He remarked laconically: 'Overclocking will bring warranty issues.' Eurotech is investigating energy reuse, inside a building for example, but there the water temperature may have to be higher – around 60 to 65 degrees – and 'you may have to buy extended temperature range components for this.'


Arts stressed the importance of considering power conversion in assessing energy efficiency, and here again the technology developed by the embedded-systems side of the company can offer advantages: 'Our goal is to make power conversion as efficient as possible – we share a lot of information with our colleagues in embedded. Power conversion in embedded, especially for portable devices, is critical. Many of the power conversion steps on a board – going from 48V to 12V, from 12V to 3.3V, and even down to 0.9V – all these steps have quite a bit of inefficiency. So the goal is not only to keep the compute elements cool but also the power elements if we are to reach energy efficiency.' More efficient power conversion also has the advantage of leading to more compact machines, he added.
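
Those conversion losses compound multiplicatively down the chain, which is why every stage matters. The sketch below makes the point using the voltage steps Arts mentions; the per-stage efficiency figures are assumptions for demonstration, not numbers from Eurotech.

```python
# Cascaded DC-DC conversion: overall efficiency is the product of the stages,
# so even individually good converters waste appreciable power end to end.

# (from_volts, to_volts, efficiency) -- efficiencies are illustrative assumptions
stages = [
    (48.0, 12.0, 0.96),
    (12.0, 3.3, 0.93),
    (3.3, 0.9, 0.90),
]

overall = 1.0
for v_in, v_out, eff in stages:
    overall *= eff
    print(f"{v_in:>5.1f}V -> {v_out:>4.1f}V at {eff:.0%}")

print(f"end-to-end efficiency: {overall:.1%}")
print(f"heat per 100 W delivered to the silicon: {100 * (1 / overall - 1):.1f} W")
```

Under these assumptions only about 80 per cent of the input power reaches the silicon; the rest appears as heat in the power elements themselves, which is exactly why they too must be cooled.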


ClusterVision's approach to making its machine compact, within the context of total immersion cooling, has been to join forces with Green Revolution Cooling to design the first ever skinless supercomputer, removing the chassis and unnecessary metal parts that would obstruct oil flow and thus also keeping down the cost of the initial investment.

According to ClusterVision, the solution almost eliminates power consumption for air cooling – cutting it to just five per cent of a conventional system's consumption. In turn, this cuts total energy consumption by about half, and also reduces the initial capital outlay by removing the need for equipment such as chillers and HVAC units. Another area of saving is reduced current leakage at the processor level in the submerged solution, resulting in less wasted server power.
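
Those two figures fit together if cooling accounts for roughly half of a conventional data centre's draw – a PUE of about 2.0, which is an assumption here rather than a number from ClusterVision. A quick sanity check:

```python
# Sanity check on the claimed savings: cutting cooling power to 5% of its
# former level halves total energy only if cooling was ~half the total draw.

it_load_kw = 100.0          # IT equipment power (arbitrary scale)
pue_conventional = 2.0      # assumed: cooling roughly equals the IT load

cooling_before = it_load_kw * (pue_conventional - 1)   # 100 kW
cooling_after = cooling_before * 0.05                  # cut to 5%

total_before = it_load_kw + cooling_before             # 200 kW
total_after = it_load_kw + cooling_after               # 105 kW

print(f"total power: {total_before:.0f} kW -> {total_after:.0f} kW "
      f"({total_after / total_before:.0%} of before)")
```

At the assumed PUE the total falls to roughly 52 per cent of its former level, consistent with the 'about half' claim.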


'Power efficiency in high-performance computing is of growing concern, due to technological challenges in the ongoing race to Exascale and, far more importantly, growing concerns on climate change. With this reference, we set the stage for a new paradigm,' said Alex Ninaber, technical director at ClusterVision.

The ClusterVision machine will be used by Austrian research organisations which are collaborating on the Vienna Scientific Cluster (VSC-3) project. The VSC-3 cluster is designed to balance compute power, memory bandwidth, and the ability to manage highly parallel workloads. It consists of 2,020 nodes based on Supermicro motherboards, each fitted with two eight-core Intel Xeon E5-2650 v2 processors running at 2.6GHz. The smaller compute nodes have 64GB of main memory per node, whilst the larger nodes have up to 128GB and 256GB of main memory. The interconnect system is based on Intel's Truescale QDR80 design.
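
For a sense of scale, the published node count and clock speed imply the aggregate figures below. The flops-per-cycle value assumes AVX double precision on the Ivy Bridge-generation Xeon E5-2650 v2 (8 flops per cycle per core); that is an assumption here, not a figure quoted in the article.

```python
# Back-of-envelope aggregate capacity for VSC-3 from the published node specs.

nodes = 2020
sockets_per_node = 2
cores_per_socket = 8
clock_ghz = 2.6
flops_per_cycle = 8  # assumed: AVX double precision on Ivy Bridge-class cores

total_cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = total_cores * clock_ghz * flops_per_cycle / 1000

print(f"total cores: {total_cores:,}")
print(f"theoretical peak: {peak_tflops:,.0f} TFLOPS double precision")
```

That works out to 32,320 cores and a theoretical peak on the order of 670 TFLOPS under the stated assumption.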


Software includes the BeeGFS (formerly known as FhGFS) parallel file system from the Fraunhofer Institute for Technological






