Issue 1, December 2008


FOCUS FEATURE MEASUREMENT


For a company that has made an art of never talking about its data centers, Google's declaration came as a bolt from the blue. While Google is encouraging the rest of the industry to put its house in order, it in turn has not taken well to being lectured.


The Green Grid said it was meaningless for anyone to declare that it ran the world's most efficient data centers. Analysts dismissed the numbers as scrubbed, while respected industry experts have kept their counsel until Google explains how it arrived at them. Others said the numbers simply didn't look right.


Google said in its data center blog: "We measure PUE in the spirit it was intended and to follow its definition strictly… We strongly encourage all data center owners and operators to adhere to Green Grid PUE measurement standards so we can all drive efficiency forward with meaningful comparisons."


FINDING MEANING IN NUMBERS


The Green Grid, the organisation that developed the PUE rating, said that using it to compare a PUE rating in one data center with that in another is meaningless, given the number of variable factors at play such as climate, age, utilisation, resiliency and redundancy requirements.


Victor Smith, UK representative of the Green Grid, said it is great that the PUE measurement is being used, but that people are treating it as an empirical measurement when it is a statistical one.


“They [Google] can’t say it is most efficient – their PUE of 1.21 is a reflection of how and where they are measuring, and that is not clear in any statements. The metric is not that closely or well defined to support such a bold statement. It needs to be: ‘our PUE is 1.2 and this is what we count as IT, power and cooling equipment.’ Geographic location affects PUE; cooling and free air cooling, for example, can drive down PUE.”
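Smith's point can be made concrete with a toy calculation in Python. All the numbers below are invented for illustration, not Google's; the sketch shows how the same facility reports two different PUE figures depending on where the IT-load boundary is drawn:

# Hypothetical power draws for a single facility, in kW.
it_load_kw = 1000.0   # servers, storage and network gear at the plug
ups_loss_kw = 80.0    # conversion losses inside the UPS
cooling_kw = 350.0    # chillers, CRACs, pumps
lighting_kw = 20.0

total_facility_kw = it_load_kw + ups_loss_kw + cooling_kw + lighting_kw

# Boundary A: IT load metered downstream of the UPS (the stricter reading
# of the Green Grid definition; UPS losses count as facility overhead).
pue_strict = total_facility_kw / it_load_kw

# Boundary B: IT load metered at the UPS input, so UPS losses are quietly
# counted as "IT" and the ratio looks better.
pue_loose = total_facility_kw / (it_load_kw + ups_loss_kw)

print(f"strict boundary: PUE = {pue_strict:.2f}")  # 1.45
print(f"loose boundary:  PUE = {pue_loose:.2f}")   # 1.34

Same equipment, same energy bill, yet a 0.11 swing in PUE: exactly the ambiguity Smith says a headline figure of 1.21 does not resolve.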


Lex Coors, director of engineering at data center builder and operator Interxion, said: "We all know what is feasible. If Google is following the US standard for telecom equipment, they will run the server inlet at approximately 30C, some 13C higher at the intake of the server than most legacy data centers (leading to a significant improvement of the PUE). Legacy data centers run at 24C at the return of the CRAC unit. Interxion is following the ASHRAE standard, which is accepted by all server manufacturers, e.g. a wider temperature range window, of which Interxion is using 22–24C at the air intake of the server, which is already 10C higher than legacy, leading to well accepted PUE figures as communicated by Uptime, TGG and the Code of Conduct."


Reports say that Google has indeed been running its data centers hot, with the company recommending that temperatures can be safely raised by 10 degrees Fahrenheit.


On Google's assertion that a PUE of less than 1 is possible (Google says: "A PUE <1 would be possible with on-site generation from waste heat, but currently this is commercially impractical to implement"), Coors offers a smile: "With regard to Google's statement on the PUE <1, I would first like to say … and secondly I say perpetual motion."


Another sceptic is Joost Metten, CEO of Terremark Europe, who said: "I do agree that their numbers are not totally correct."


UNACCOUNTABLE NUMBERS


EQUATION FOR PUE FOR GOOGLE DATA CENTERS

PUE = (EUS1 + EUS2 + ETX + EHV) / (EUS2 + ENet1 - ECRAC - EUPS - ELV)


• EUS1 Energy consumption for type 1 unit substations feeding the cooling plant, lighting, and some network equipment


• EUS2 Energy consumption for type 2 unit substations feeding servers, network, storage, and CRACs


• ETX Medium and high voltage transformer losses
• EHV High voltage cable losses
• ELV Low voltage cable losses
• ECRAC CRAC energy consumption


• EUPS Energy loss at UPSes which feed servers, network, and storage equipment
• ENet1 Network room energy fed from type 1 unit substations
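Plugging invented meter readings into the formula above shows how the terms combine. The Python sketch below is for illustration only; the values are made up, not Google's:

# Invented energy meter readings in kWh, named after the sidebar's terms.
E_US1 = 180.0    # type 1 substations: cooling plant, lighting, some network
E_US2 = 1100.0   # type 2 substations: servers, network, storage, CRACs
E_TX = 12.0      # medium and high voltage transformer losses
E_HV = 4.0       # high voltage cable losses
E_LV = 10.0      # low voltage cable losses
E_CRAC = 120.0   # CRAC energy, fed from the type 2 substations
E_UPS = 50.0     # UPS losses feeding servers, network and storage
E_Net1 = 25.0    # network room energy fed from type 1 substations

# Numerator: everything the facility draws, including upstream losses.
total_facility = E_US1 + E_US2 + E_TX + E_HV

# Denominator: the type 2 feed stripped of non-IT consumption, plus the
# IT load that happens to be metered on the type 1 side.
it_equipment = E_US2 + E_Net1 - E_CRAC - E_UPS - E_LV

print(f"PUE = {total_facility / it_equipment:.2f}")  # 1.37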


ERROR ANALYSIS

To ensure our PUE calculations are accurate, we performed an uncertainty analysis using the root sum of the squares (RSS) method. Our uncertainty analysis shows that the overall uncertainty in the PUE calculations is less than 2% (99.7% confidence interval). Our power meters are highly accurate (ANSI C12.20 0.2 compliant) so that measurement errors have a negligible impact on overall PUE uncertainty. The contribution to the overall uncertainty for each term described above is outlined in the table below.




Term     Contribution to Overall Uncertainty
EUS1     4%
EUS2     9%
ETX      10%
ECRAC    70%
EUPS     <1%
EHV      2%
ELV      5%
ENet1    <1%
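The root sum of the squares method named in the blog post combines independent per-meter errors into one overall figure. The Python sketch below shows the mechanics; the per-term uncertainties are invented and chosen only so that the contribution shares roughly echo the table:

import math

# Invented absolute uncertainties for each energy term (kWh). RSS assumes
# the individual measurement errors are independent of one another.
uncertainties = {
    "EUS1": 2.0,
    "EUS2": 3.0,
    "ETX": 3.2,
    "ECRAC": 8.4,
    "EUPS": 0.5,
    "EHV": 1.4,
    "ELV": 2.2,
    "ENet1": 0.5,
}

# Root sum of the squares: the uncertainty of a sum or difference of
# independent terms is the square root of the sum of squared uncertainties.
overall = math.sqrt(sum(u ** 2 for u in uncertainties.values()))
print(f"overall uncertainty: {overall:.2f} kWh")

# Each term's share of the overall variance, mirroring the table above.
for term, u in uncertainties.items():
    print(f"{term}: {u ** 2 / overall ** 2:.0%}")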


Ed Ansett, managing director EMEA at HP-owned EYP Mission Critical Facilities, said: “The extent to which energy efficiency is prioritised depends on various business drivers such as reliability, CAPEX, regulatory compliance and, of course, energy efficiency, all of which influence design and therefore PUE. The trick is to get the right balance of business drivers into the design of the data center and allow this to dictate PUE, and not the other way around. Google's initiatives are specific to their line of business, although the energy best practice recommendations are sound and can be applied to all types of data center where energy efficiency is important. Many transaction-based systems require real-time high reliability due to application business criticality. This is usually achieved by mirroring, instantaneous failover and resilient topology, and usually has the result of reduced energy efficiency even in new data centers. However, it is possible with the right topologies and equipment selection to have fault-tolerant power and cooling systems that deliver high efficiency using solutions such…

