HIGH PERFORMANCE COMPUTING
applications
not available to the average HPC user. ‘I think we have only had one or two
instances where customers have tried to retrofit water cooling to a datacentre – and it is definitely possible with the right infrastructure partners – but it is a bit of a headache,’ said van Kesteren.
‘It depends which end of the spectrum you are looking at, but I would say that the majority of our customers don’t have a custom-built datacentre; they are people who have re-purposed a machine room for general-purpose computing and then decided they want a cluster,’ van Kesteren added. ‘They are often still using things like air conditioning in the server room and just standard air-cooled servers. But we also have a rising number of people using water-cooled systems, though that is almost always rear-door water cooling at the back of the rack. We also have a few high-end customers that are using on-chip cooling.’
While technologies such as evaporative cooling and immersion cooling can provide
large savings in total power used, reducing the power usage effectiveness (PUE) of the datacentre, they require that an organisation has the resources to design or adapt the datacentre around them. In many real-world scenarios this is simply not feasible, so a compromise must be made between the engineering cost of building the infrastructure and the return from increased efficiency.
‘Evaporative cooling is really the bleeding
edge; it is what some of the really high-end systems in the Top500 are using, and I think immersion cooling would also fall under that category. These are the kinds of technologies used by massive datacentres – but, at the lower end, there are people with three-to-five-rack HPC clusters. Ultimately those people still want to run very heterogeneous environments, where you cannot be restricted to just using water-cooled nodes,’ stressed van Kesteren. ‘In those kinds of situations, you need the flexibility that either a rear-door
cooling solution, atmospheric cooling or air conditioning offers.’
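Since a lower PUE means a more efficient facility, it helps to see how the metric is computed: total facility power divided by the power delivered to the IT equipment alone. A minimal sketch, using hypothetical figures (the article quotes no numbers):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# 1.0 is the theoretical ideal; conventional air-cooled rooms typically sit
# well above that, which is the overhead advanced cooling tries to shrink.
# All figures below are illustrative, not taken from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the power usage effectiveness for a facility."""
    return total_facility_kw / it_equipment_kw

# A hypothetical 100 kW cluster whose facility meter reads 160 kW,
# the extra 60 kW going to cooling, power distribution, and so on:
print(pue(160.0, 100.0))  # 1.6
```

The closer the facility power tracks the IT load, the closer PUE approaches 1.0 – which is why the savings from evaporative or immersion cooling show up directly in this ratio.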
Efficiency tools

One way to increase the utilisation of a cluster is to tightly control the number of processors being fed power at any one time. When a system is not running at full capacity, software can be used to manage the power drawn by powering down certain sections of the compute infrastructure.
‘If customers come to us and they want
to improve energy efficiency based on their current estate, the kind of things you want to look at would be some of the features in the scheduling software they use, which can power off compute nodes, or at least put them into a dormant state if the processor supports that technology,’ said van Kesteren. ‘We would look at whether they have those kinds of features enabled and whether they are making the most of them. However, for some older clusters
December 2019/January 2020 Scientific Computing World 5