

COOLEMALL


‘CoolEmAll’: Europe’s holistic view of efficiency


An EC-funded research project takes aim at improving data center efficiency from multiple angles. By Andrew Donoghue, analyst, data center technologies, 451 Research


The public-private consortium behind the CoolEmAll project wants to explore optimizing energy efficiency by taking the entire data center infrastructure, and the applications it supports, into account.


Participants in the €3.6m (US$4.9m), 30-month project expect it to produce new data center monitoring software, designs for energy-efficient IT hardware and possibly even new energy-efficiency metrics.


CoolEmAll, funded by the European Commission (EC) and participating organizations, started in the fourth quarter of 2011. Its seven participants are the Poznan Supercomputing and Networking Center, the Toulouse IT Research Institute, the High Performance Computing Center of the University of Stuttgart (HLRS), the Catalonia Institute for Energy Research, Atos, 451 Research and Christmann Informationstechnik.


A DENSE, HIGHLY INTEGRATED BOX The role of Christmann, a German startup, is key, since much of the study will be based on its technology: specifically, the RECS|Compute Box, a system currently under development that the company calls a ‘data center in a rack’.


It’s a server rack that holds a number of 1U server units, each of which consists of multiple skinless servers. Each server unit features integrated monitoring, and the rack features integrated liquid cooling.


Because Christmann’s technology is highly specialized - best suited to HPC environments - the CoolEmAll project may also include a number of racks holding traditional rack and blade servers fitted with additional sensors.


The sensors built into the Compute Box are especially important to CoolEmAll. With them, the researchers can study the interaction of cooling, airflow, IT hardware and application performance, and model this interaction in a way that is difficult to replicate with conventional servers and external sensors.


SIX HUNDRED SERVER BOARDS PER RACK The main test bed will consist of a number of RECS|Compute Boxes installed at HLRS. Server units may be combined with storage in a single rack, and the storage may also include a de-duplication solution.


A Compute Box rack can contain up to about 600 individual server boards. Each server unit contains 18 CPU boards, 18 thermal sensors and 18 electrical-current sensors. Integrated in the baseboard infrastructure, the sensors enable network status, fan speed and power usage monitoring for each CPU board.
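
As a rough sketch of how such per-board telemetry might be structured, the Python below models one server unit with 18 CPU boards, each reporting temperature, current draw, fan speed, network status and power use. The class and field names are hypothetical; the article does not describe the actual RECS monitoring interface.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class BoardReading:
    """Hypothetical per-CPU-board telemetry, as described in the article:
    one thermal and one current sensor per board, plus baseboard-level
    network status, fan speed and power monitoring."""
    board_id: int
    temperature_c: float   # from the board's thermal sensor
    current_a: float       # from the board's electrical-current sensor
    fan_rpm: int           # fan speed reported by the baseboard
    network_up: bool       # network status reported by the baseboard
    power_w: float         # power usage reported by the baseboard


@dataclass
class ServerUnit:
    """One 1U server unit holding 18 skinless CPU boards."""
    unit_id: int
    boards: List[BoardReading] = field(default_factory=list)

    def total_power_w(self) -> float:
        return sum(b.power_w for b in self.boards)


# Hypothetical example: one unit with 18 boards reporting nominal values.
unit = ServerUnit(unit_id=0, boards=[
    BoardReading(board_id=i, temperature_c=45.0, current_a=1.2,
                 fan_rpm=3000, network_up=True, power_w=30.0)
    for i in range(18)
])
print(unit.total_power_w())   # 540.0 W for this made-up example
```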


THERMAL MAPS FOR ANALYSIS Sensor data will be analyzed using a combination of Computational Fluid Dynamics (CFD) and visualization tools. A thermal map of the Compute Box and the traditional servers will be created and used to monitor the effect of changes in IT hardware performance, cooling technologies (free cooling versus mechanical chillers) and different applications and workloads on total energy use.
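
The following minimal sketch, assuming a simple grid layout and an arbitrary temperature threshold, illustrates the kind of thermal-map aggregation described above: per-board readings are arranged by physical position so that hotspots can be tracked as cooling, hardware or workloads change. CoolEmAll’s actual CFD and visualization tooling is, of course, far more sophisticated.

```python
import numpy as np


def thermal_map(readings, rows, cols):
    """Arrange per-board temperatures (ordered by physical position)
    into a rows x cols grid representing one face of the rack."""
    grid = np.full((rows, cols), np.nan)
    for i, temp_c in enumerate(readings[: rows * cols]):
        grid[i // cols, i % cols] = temp_c
    return grid


def hotspots(grid, threshold_c=70.0):
    """Return (row, col) positions whose temperature exceeds the threshold."""
    return list(zip(*np.where(grid > threshold_c)))


# Illustrative numbers only: 18 boards of one server unit as a 3 x 6 grid.
temps = [45.0, 48.5, 52.0, 71.2, 49.8, 47.3,
         46.1, 50.0, 68.9, 73.4, 51.2, 48.0,
         44.7, 46.9, 49.5, 55.1, 47.8, 45.9]
grid = thermal_map(temps, rows=3, cols=6)
print(hotspots(grid))   # boards running hotter than 70 °C
```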


CoolEmAll will also use the results of another EC project on data center efficiency, called Green Active Management of Energy in IT Service centers (GAMES).


The project’s wide scope means existing efficiency metrics, such as Power Usage Effectiveness (PUE), may be insufficient. Participants therefore do not exclude the possibility that new metrics will come out of the CoolEmAll project.
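
For context, PUE is the ratio of total facility energy to the energy delivered to the IT equipment, so it says nothing about how productively that IT energy is used by applications. The worked example below uses made-up numbers purely for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy consumed by the IT equipment alone."""
    return total_facility_kwh / it_equipment_kwh


# Illustrative numbers only: 1,800 kWh drawn by the facility,
# 1,000 kWh of which reaches the IT equipment -> PUE of 1.8.
print(pue(1800.0, 1000.0))
```

A PUE of 1.8 reveals only the facility overhead; it cannot distinguish a server running useful workloads from one sitting idle, which is part of the gap a wider-scoped metric might address.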


IT/FACILITIES CONVERGENCE ISN’T ENOUGH Breaking down the barriers between IT and facilities gets a lot of attention, but that is only part of the story. Real efficiencies will not be realized until the data center is managed as one interconnected unit. Projects like CoolEmAll attempt to push in that direction.

