HPC 2013-14 | Energy efficiency


extreme, custom-made solutions approaching 60kW – more than enough for the high-density racks that are becoming more popular. This is because direct liquid cooling enables the temperature of the liquid to be much closer to the operating temperature of the processor. Lyon explained: ‘Since the processor is happy running upwards of 80°C, having cooling liquid in the 40 to 45°C range is adequate support for that operating stance. If we compare that to air, we see that the temperature to achieve that same cooling performance is going to have to be down in the 20 to 25°C range. Maintaining that range with a high heat load would require an astronomical-sized air-con capacity.’

Not everyone believes that liquid cooling is the best solution, however, and so attention is still being paid to air cooling. Dell, for example, has spent recent years investing in its engineering-focused Fresh Air Initiative, which has focused on developing a more temperature-tolerant data-centre infrastructure led by the company’s PowerEdge designs. Hugh Jenkins, enterprise programs marketing manager at Dell, explained that the resulting technology can tolerate temperatures and humidity well above the norm for this type of infrastructure. A data-centre operator would be able to run a server at 45°C for up to five per cent of the time in a typical annual cycle, he said; running at that extreme has been rated by Dell for up to 90 hours per year. The company has also rated the technology for operating from 5 to 40°C, at a relative humidity of 85 per cent, for 10 per cent of the time. The total operating envelope assumed is 8,760 hours per year, and the ‘norm’ for servers of this kind is to test up to 35°C; should a system move outside this normal operating envelope, there is a high chance it will fail. This is where Dell’s Fresh Air testing helps protect against bursts of more extreme conditions that might occur in the data centre. Dell’s Fresh Air Initiative is complemented by having the management tools in place
to monitor and set thresholds, and to throttle systems and set idle states for different components within the server, preserving ongoing operation for longer within temperature extremes. Given the significant power and cooling cost savings that can be achieved through systems that can not only operate at higher ambient temperatures and humidity for some period of time, but can also adapt to changing circumstances by powering up and down, Jenkins believes that when operators update or build new data centres they will focus on cooling strategies – and that fresh air cooling techniques will figure prominently.

‘There are, however, certain configurations at the very top end of what customers want to do with particular servers that sit outside what we certify and deliver in our Fresh Air Initiative,’ said Jenkins. An example would be a server filled with the highest-frequency processors, GPUs, SSDs and so on, as the company has aimed the initiative at the broader six- and eight-core system configurations that are most popular in data centres. A final point from Jenkins was that, in the long term, it makes operational sense to look at fresh air cooling. ‘If operators are able to look at making the right infrastructure investment now, they can take steps along that path to potentially being able to intersect with a full fresh air data centre capability in the future,’ he said. ‘Even in the interim it may give them the ability to turn up the temperature in their existing environment, and do it in a way they know the server infrastructure is designed for.’
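To make the kind of threshold monitoring and throttling Jenkins describes concrete, here is a minimal sketch of an annual temperature-excursion budget tracker. The class name, band boundaries and 90-per-cent throttle trigger are illustrative assumptions, not Dell’s actual tooling; only the rated figures (an 8,760-hour annual envelope, a 35°C test norm, roughly 90 hours per year at up to 45°C, and 10 per cent of the year at up to 40°C) come from the article.

```python
# Hypothetical sketch of envelope-aware throttling: track how many
# hours per year a server spends in each temperature excursion band,
# and signal a throttle before a band's annual budget is exhausted.
# Band limits and the 90% trigger are illustrative assumptions.

ANNUAL_HOURS = 8760  # total operating envelope assumed in the article

# (upper_temp_C, allowed_hours_per_year) -- figures from the article:
# up to 40 C for 10% of the year, up to 45 C for ~90 h/yr.
EXCURSION_BANDS = [
    (40.0, 0.10 * ANNUAL_HOURS),  # 876 h/yr between 35 and 40 C
    (45.0, 90.0),                 # 90 h/yr between 40 and 45 C
]
NORMAL_LIMIT_C = 35.0  # the 'norm' test ceiling for servers of this kind


class EnvelopeMonitor:
    """Accumulates excursion hours and decides when to throttle."""

    def __init__(self):
        self.hours_used = [0.0] * len(EXCURSION_BANDS)

    def record(self, inlet_temp_c: float, duration_h: float) -> str:
        if inlet_temp_c <= NORMAL_LIMIT_C:
            return "normal"
        for i, (upper, budget) in enumerate(EXCURSION_BANDS):
            if inlet_temp_c <= upper:
                self.hours_used[i] += duration_h
                # Throttle once 90% of the band's annual budget is spent.
                if self.hours_used[i] >= 0.9 * budget:
                    return "throttle"
                return "excursion"
        return "shutdown"  # outside every rated band


mon = EnvelopeMonitor()
print(mon.record(32.0, 1.0))   # within the 35 C norm
print(mon.record(44.0, 80.0))  # deep into the 40-45 C band
print(mon.record(44.0, 5.0))   # 85 h used: past 90% of 90 h, so throttle
```

In a real deployment the same bookkeeping would sit behind the vendor’s management tools, driving idle states per component rather than a single coarse signal.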


Home in on the hardware
CoolIT Systems’ Geoff Lyon agrees that free air cooling is a growing trend, one that is being successfully employed by operators such as Facebook. Climate is an important factor here, and Lyon commented that the combination of free air with direct contact liquid cooling delivers the maximum possible efficiency. There are, of course, many other strategies for minimising power consumption, such as deploying the latest processor technologies. Intel’s Xeon Processor E5-2600 v2 product family, for example, is based on the company’s 22-nanometre process technology, contributing to energy-efficiency improvements of up to 45 per cent compared with the previous generation.

‘There are many variables at play so, if a data centre has a cyclical load, at certain times of the day it can go between peak and zero many times,’ said Lyon. ‘Having a more efficient processor will therefore save a considerable amount of money.’ He added that, as increased utilisation is another element people are anxious to control, virtualisation is becoming more important. This is because, in the past, there would be only one operating system instance available per processor. ‘If a specific application was running on a dedicated server it would have to have everything at the ready, despite the fact that that server would only be used sporadically,’ he added. ‘Having a server in place that’s running many different operating system instances, and can provide resources to a number of applications with varying


The SGI Management Center administrator’s interface provides the visualisation of power cost savings, trends and cluster utilisation that the PPRS integration offers
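As a rough illustration of the savings Lyon describes for cyclical loads, the sketch below estimates annual energy cost for a simple day/night utilisation profile under two processor generations, modelling the quoted ‘up to 45 per cent’ efficiency gain crudely as 45 per cent less energy for the same work. All wattages, the tariff and the linear power model are illustrative assumptions, not figures from any vendor.

```python
# Back-of-envelope comparison of annual power cost for a cyclical load
# under two CPU generations. The 0.55 scaling models a ~45% efficiency
# improvement (the figure quoted for the Xeon E5-2600 v2 family);
# wattages, tariff and the linear power model are assumptions.

TARIFF_PER_KWH = 0.12  # assumed electricity price, $/kWh

def annual_cost(idle_w: float, peak_w: float, profile: list[float]) -> float:
    """Annual energy cost for a server whose draw scales linearly
    between idle_w and peak_w with utilisation (0..1), given a
    24-entry hourly utilisation profile repeated over 365 days."""
    daily_kwh = sum(
        (idle_w + u * (peak_w - idle_w)) / 1000.0 for u in profile
    )
    return daily_kwh * 365 * TARIFF_PER_KWH

# A cyclical load: near-idle overnight, peaking during working hours.
profile = [0.05] * 8 + [0.9] * 10 + [0.3] * 6

old_gen = annual_cost(idle_w=100.0, peak_w=350.0, profile=profile)
# Crude model: the newer part draws 45% less power at every load point.
new_gen = annual_cost(idle_w=100.0 * 0.55, peak_w=350.0 * 0.55, profile=profile)

print(f"old: ${old_gen:.0f}/yr  new: ${new_gen:.0f}/yr  "
      f"saving: ${old_gen - new_gen:.0f}/yr")
```

Multiplied across the hundreds of nodes in a cluster, even modest per-server differences of this kind become the power-cost trends a tool such as SGI Management Center is designed to surface.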



