DATA CENTRES
Are you cool? A look at greener data centre cooling
by Jack Bedell-Pearce, Director of 4D Data Centres
Until quite recently, nearly 40% of the power consumed in a traditional legacy data centre was spent on cooling equipment alone. Small and medium-sized businesses that run their own mini data centres (or Comms Rooms) housing between 4 and 12 racks are even less efficient. These inefficiencies inevitably lead to overheating, resulting in server and storage failures, or to overcooling, which can be very expensive.
Jack Bedell-Pearce of 4D Data Centres shares tips on how to keep your cool without upping your carbon footprint or breaking the bank...
Why is this a problem?
The data centre industry is now second only to the airline industry in overall carbon footprint. IT manufacturers are slowly catching on by producing more power-efficient servers and storage units, but these improvements are relatively marginal and will take time to come into effect, as they depend on hardware refreshes. Current estimates suggest that about 1% of the energy consumed by the entire world each year, and 5% of Europe's annual energy bill, is spent just on cooling computers. It's for this reason that owners and operators of large-scale data centres are turning their attention to cooling.
Only a few decades ago, electricity was (relatively) cheap and only governments and multinational companies actually used data centres. Just as money was thrown at the technical problem of sending humans into space, the same was true of cooling in data centres: racks and equipment were installed in a very ad hoc fashion, and computer room air conditioning units (CRACs) were run at maximum power 24/7. Fortunately, just like the space race, it took smaller, innovative start-ups to look at the problem and revolutionise the status quo.
Take a PUE
One of the first steps was working out a way of calculating how efficient any given data centre actually is. The most common and most quoted method is power usage effectiveness (PUE). PUE is calculated by dividing the total amount of power a data centre consumes by the amount used by the IT equipment alone. A PUE score of around 2 is considered 'average' for a data centre, while a PUE of 1.5 is deemed 'efficient'. Anything below 1.5 was, until recently, difficult to achieve.
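To make the ratio concrete, here is a minimal sketch of the calculation in Python; the meter readings are hypothetical and serve only to illustrate how the figure is derived.

    # Minimal PUE sketch: total facility power divided by IT equipment power.
    # The 1,200 kW and 800 kW meter readings below are hypothetical examples.
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    score = pue(total_facility_kw=1200.0, it_load_kw=800.0)
    print(f"PUE = {score:.2f}")  # PUE = 1.50, i.e. 'efficient' by the rule of thumb above

In that hypothetical facility, the remaining 400 kW (a third of the total bill) goes on cooling, lighting and other overheads rather than on computing.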
The first significant shift in data centre design towards creating a more efficient cooling environment was the introduction of hot and cold aisles. Rather than trying to bring the overall ambient air temperature of the whole data centre down to 19°C (the brute force method), racks and equipment were set up in such a way that cold air coming out of the sub-floor plenum was delivered to the front of the servers and SANs (aka the 'cold' aisle) and the hot exhaust air was focused into a specific area for the CRACs to handle (aka the 'hot' aisle). The next logical step in this design was to physically segregate the hot and cold air, and thus cold aisle corridors were born, bringing with them the first
[Image: CRAC fans hard at work]