high-performance computing
electricity bill,’ Hopton added. Andy Dean, HPC business
development manager at OCF, noted that the HPC industry is ready to move on from the traditional water-cooled doors used in many HPC systems to more innovative and, in some cases, exotic cooling technologies such as 'to-the-node' water cooling. Dean stated: 'In the last 10 years we have started to see the adoption of water cooling. At this stage, the vast majority of our multi-rack systems have got to that point. More recently we have started to see this "to-the-node" water cooling. Birmingham was OCF's first installation of this technology, or at least the first academic installation of water-cooled nodes.' CoolIT's CEO and CTO, Geoff
Lyon, commented on how the scale of manufacturing has reduced the cost of high-performance cooling technology. 'Since 2010/11, high-volume manufacturing of liquid cooling products has become a mature category, in so far that a few vendors have mastered the ability to reliably manufacture high-quality, low-cost liquid cooling assemblies,' said Lyon. 'This has transitioned liquid cooling from a novelty to a commonly accepted category of product, now being considered for large-scale datacentre deployments that increase efficiency, increase density and enable increased performance.'
How do you help users select the correct technology?
On the topic of recommendations for specific technologies, Michalski states that much of the choice comes down to a customer's preferences and requirements: 'Some customers may want to keep the cooling efficiency at the highest level, reusing the heat that is produced by the server components, so they would consider using one of the liquid cooling technologies in their
environment. This does involve a higher initial cost of the cooling equipment, which some customers won't accept, choosing instead the standard air-cooling approach if their air-conditioning units can cope with the load.' CoolIT's Lyon shared this view on customer requirements, but went one step further to explain that these conditions can be appraised through collaborative consultation. 'Things that play into the recommended approach include the PUE or efficiency goals for the datacentre, the chip-level power density, rack power density, available power to the rack, climate and environmental conditions, heat capture targets, energy re-use, labour expense, existing
infrastructure and others,' said Lyon. While Motivair's policy is that customer requirements must come first, Whitmore also argues that expert opinion can help to highlight the need for growth within HPC infrastructure: 'Each customer needs to evaluate their data centre loads today and forecast where those loads may be in the future. 'The fact that most data centre operators and owners don't know where their densities will be in two to five years validates their need for a highly scalable and flexible cooling solution. Cooling systems should be server- and rack-agnostic, allowing for multiple refreshes over the life of the facility.'