DATA CENTER LAYOUT


FOCUS on COOLING


Best Laid Plans Can Really Help


Smartening up your data center to improve cooling doesn’t have to mean starting from scratch


To maximise the cooling, and thus the overall operational efficiency, of a data center, an optimal layout of cabinets has to be a priority. So what factors go into making this topological exercise as much of a success as possible?


It is a given that great things can be done with a CFD-driven design process (see page 8) on a greenfield site with an unlimited budget. But the reality is that most data center managers and operators need to manage the power and cooling requirements of mixed-density equipment in an existing space, which of course means a mix of legacy and high-end servers.


Advice from some quarters says the best place to start, when working out whether your data center is operating within its maximum cooling capacity or has already exceeded it, is to assume a one-to-one relationship between watts of power consumed and watts of cooling required as the maximum ratio for cooling capacity. This, it is argued, can then determine whether your data center is being cooled efficiently or whether you need to make substantial changes to your infrastructure.
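As a rough, purely illustrative sketch of that rule of thumb (the rack figures, cooling capacity and function name below are invented for the example, not taken from any vendor), totalling the IT load and comparing it one-to-one against installed cooling capacity might look like this:

```python
# Minimal sketch of the one-to-one rule of thumb: assume every watt of IT power
# consumption needs a matching watt of cooling capacity. All figures are made up.

def cooling_headroom(rack_loads_w, cooling_capacity_w):
    """Return (total IT load, headroom) under a 1:1 watts-consumed/watts-cooled assumption."""
    total_it_load_w = sum(rack_loads_w)
    return total_it_load_w, cooling_capacity_w - total_it_load_w

# Example: ten legacy racks at ~2kW each plus four denser racks at ~6kW each,
# against 40kW of installed cooling capacity.
racks_w = [2_000] * 10 + [6_000] * 4
load_w, headroom_w = cooling_headroom(racks_w, cooling_capacity_w=40_000)

if headroom_w < 0:
    print(f"IT load {load_w}W exceeds cooling capacity by {-headroom_w}W - changes needed")
else:
    print(f"IT load {load_w}W leaves {headroom_w}W of cooling headroom")
```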


One solution to this – once the aforementioned ratio has been determined – is the so-called “high-density zone” (one or more rows of racks that support a per-rack density of 4kW or above), which promotes mixed-density cooling scenarios. Note that a high-density zone is not the same as a high-density data center; it really means a mixed (though carefully designed) environment.
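To make that 4kW threshold concrete, a minimal sketch (again with invented rack names and loads) of sorting an inventory into high-density-zone candidates and racks that can stay in the general space might look like this:

```python
# Illustrative only: flag racks at or above the 4kW-per-rack threshold quoted above
# as candidates for a high-density zone. Rack names and loads are invented.

HIGH_DENSITY_THRESHOLD_W = 4_000

rack_loads_w = {
    "A01": 1_800, "A02": 2_200, "A03": 3_900,   # lower-density legacy racks
    "B01": 5_500, "B02": 7_200, "B03": 4_000,   # high-density candidates
}

high_density_zone = {r: w for r, w in rack_loads_w.items() if w >= HIGH_DENSITY_THRESHOLD_W}
general_space = {r: w for r, w in rack_loads_w.items() if w < HIGH_DENSITY_THRESHOLD_W}

print("High-density zone candidates:", sorted(high_density_zone))
print("Remaining mixed/legacy racks:", sorted(general_space))
```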


This approach is more attractive than installing fewer high-density servers in each rack (“spreading out”), which tends to involve inefficient use of floor space, increased cabling costs because of longer runs, and poorer use of electrical input, since cooling-system air paths are longer and less targeted – this increases the mixing of hot and cool air, which results in a lower return temperature to the air conditioner. All of this can lead to hot spots. So, goes this argument, put all your high-density racks in an isolated, standardised, self-contained area of the data center, creating a mini high-density data center.


When tried out, this idea normally works by isolating server exhaust heat and directing all of it into the air-conditioner intakes, where the air is cooled before being redistributed in front of the servers. By isolating both hot and cold air streams, the high-density zone neutralises the thermal impact of housing high-density equipment in areas designed to manage lower-density equipment, and therefore presents a room-neutral load to the existing data center cooling system.


Companies are now offering ways to tailor this with refinements such as uncontained zones, hot aisles and rack containment, where the hot exhaust air is contained using the back frame of the equipment racks and a series of panels to form a rear air channel that can be attached to a single IT rack or to a row of racks. An optional series of front panels may be used in rack-containment arrangements that require complete containment of hot and cold air streams.


Getting a better data center layout doesn’t only involve shifting the racks around, sensible as that is. There are simple low-cost/low-tech things that can help, too. One is the use of blanking panels. In all probability, the racks are not completely filled, leaving open spaces that act as a thoroughfare for hot air back to the equipment’s intakes – resulting in the equipment being surrounded by circulating hot air and having to work in a “self-roasting” environment. Blanking panels, made of either metal or plastic, are a simple solution, closing these rack openings and so preventing the recirculation of hot air.


Even the cable firms are getting in on the act. Siemon (www.siemon.com/uk) says its “any to all” cabling configuration “will allow you to place servers where it makes the most sense for power and cooling, not where there is an available switch port.”


The pitch is that you need to ‘pre-cable’. It goes like this: “Top-of-rack switching limits your equipment placement options and creates hot spots that will result in supplemental cooling needs: just because a server department has a budget for a new server, that doesn’t necessarily mean that the networking department has the budget for a new switch to put it in the most efficient spot in the data centre.” Pre-cabling, of course, “means the server can be deployed anywhere and connected via the central patching area.” This, it is claimed, decreases the number of switch ports required and increases the agility of the data center. It may require a little extra cabling but, naturally, “cabling is passive and has no maintenance costs,” they say.


Thus paying attention to what you can do with your existing data center layout can be a very effective way of improving overall efficiency and opex, by keeping as tight a lid as possible on that cooling overhead.

