Facilities Efficiency

Hot Water Cooling Throughout

To maximize flexibility and modularity, the facility was designed with two parallel water cooling systems so that racks and containers could tap into one or both cooling loops depending on their requirements.

• Traditional cooling loop: A traditional cooling system consisting of modular chiller units was designed to circulate 13°C water throughout all of the data center.

• Hot water cooling loop: A separate cooling loop was designed to use cooling tower water exchanged through a water-side economizer to deliver 30°C hot water cooling to container and rack positions. 30°C is the worst-case scenario on the hottest day of the year based on 50 years of weather data. The cooling system and the servers can use hot water cooling for the entire year because both were designed to accommodate a maximum water temperature of 30°C and a maximum data center inlet temperature of up to 36°C. Server manufacturers tuned their equipment in response to the server RFP to align with this goal.
The system is designed to deliver spot cooling throughout, without any traditional CRAC units. In the data center space, both cooling loops were plumbed to every potential rack position underneath a two-meter-high raised floor. Likewise, both systems were plumbed to the rooftop container area. In practice, eBay has found that its modular chiller units (MCUs) were not needed for primary cooling. Instead, they serve as trimming units that eBay can engage if temperatures are unseasonably hot or the workload runs too high for a period. With hot water cooling as the primary system, eBay has been successful with free cooling year-round.
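To make that strategy concrete, the following is a minimal sketch of the decision described above, with the hot water loop as the primary system and the MCUs held in reserve as trimmers. The 30°C and 36°C thresholds come from the design targets cited earlier; the function and variable names are illustrative assumptions, not eBay's control software.

```python
# A minimal sketch (not eBay's control code) of the cooling strategy described
# above: the hot water loop is primary year-round, and the modular chiller
# units (MCUs) act only as trimmers. Thresholds come from the text; names and
# structure are illustrative assumptions.

HOT_WATER_SUPPLY_MAX_C = 30.0   # worst-case hot-water supply temperature
SERVER_INLET_MAX_C = 36.0       # maximum data center inlet air temperature

def select_cooling_mode(hot_water_supply_c: float, inlet_air_c: float) -> str:
    """Return which loop should carry the load for the current conditions."""
    if hot_water_supply_c <= HOT_WATER_SUPPLY_MAX_C and inlet_air_c <= SERVER_INLET_MAX_C:
        # Normal case: cooling-tower water through the water-side economizer
        # keeps the hot water loop within limits, so free cooling is sufficient.
        return "hot-water loop (free cooling)"
    # Exceptional case: unseasonably hot weather or unusually high load, so the
    # MCUs trim the loop temperature with chilled water.
    return "hot-water loop + MCU trim"

if __name__ == "__main__":
    # A typical design-point day versus a hypothetical excursion beyond limits.
    print(select_cooling_mode(hot_water_supply_c=29.0, inlet_air_c=35.0))
    print(select_cooling_mode(hot_water_supply_c=31.5, inlet_air_c=36.5))
```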
The traditional data center space was equipped for flexibility to accommodate rack power changes. Initially, eBay designed the data center to use in-row cooling systems that provide up to 70 kW of cooling for groups of racks using hot water, chilled water, or a combination of the two via three-way valves. As individual racks reach densities of 28 kW, they can be equipped with a passive rear door for spot cooling of the single rack. When passive doors are added, in-row cooling systems can be removed, freeing up more space for compute racks. The expectation is that as racks reach 40 kW of density, direct-to-chip cooling can handle even this highest density with the hot water loop. If densities increase further, weather conditions become too hot, or overclocking yields better than expected results, the three-way valves at each cooling position can be switched to use either chilled water or hot water.
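The density progression above can be summarized as a simple selection rule. The sketch below is a hypothetical illustration using the thresholds from the text (28 kW and 40 kW per rack, roughly 70 kW per in-row group); the function name and exact cutover logic are assumptions, not eBay's deployment rules.

```python
# A hypothetical sketch of the rack-cooling progression described above.
# Thresholds come from the text; the selection logic itself is an
# illustrative assumption, not eBay's design documentation.

def cooling_method_for_rack(rack_kw: float) -> str:
    """Pick a cooling approach for a single rack served by the hot water loop."""
    if rack_kw < 28.0:
        # Lower-density racks share in-row coolers (up to ~70 kW per group),
        # fed with hot water, chilled water, or both via three-way valves.
        return "shared in-row cooling"
    if rack_kw < 40.0:
        # Around 28 kW per rack, a passive rear-door heat exchanger provides
        # spot cooling and frees the in-row cooler's floor space.
        return "passive rear-door heat exchanger"
    # At roughly 40 kW per rack, direct-to-chip liquid cooling keeps the
    # hot water loop viable for the highest densities.
    return "direct-to-chip liquid cooling"

if __name__ == "__main__":
    for kw in (15, 28, 35, 48):
        print(f"{kw:>2} kW rack -> {cooling_method_for_rack(kw)}")
```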
The containerized data centers were fitted with a plate-type heat exchanger that allows the hot water system temperature to be trimmed by running chilled water through the heat exchanger in the event that water temperatures could not be maintained on the hottest 50°C days. Experience has shown that the chillers are not needed unless the temperatures exceed the maximum recorded over the last 50 years of weather data.

The Green Grid’s DCMM dedicates an entire section of the model to cooling systems. The contribution of cooling systems to overall PUE, including the reduced dependence on mechanical cooling, and the wider temperature and humidity ranges are all part of the DCMM’s best practice model. In this area, eBay has exceeded even some of the five-year best practices.
Engineered Efficiency: Containerized Data Centers
Containerized data centers’ heat loads are typically closer to their cooling systems, require smaller volumes of air, and are designed to tolerate wider extremes of temperature and humidity.
The Project Mercury data center was designed with a 700 square meter rooftop facility to house containerized data centers, further helping to reduce TCO with an average PUE lower than that of the traditional data center space. The containers were designed to withstand direct exposure to the 50°C summertime heat while using adiabatic cooling or hot water for cooling.
One of the most impressive parts of this installation was the container housing eBay’s new high-performance compute deployment. The container uses adiabatic cooling and supports 26 kW per rack, achieving a pPUEL3,HC as low as 1.046 with an outside air temperature of 48°C. This containerized data center contributed significantly to the data center’s overall efficiency and is expected to continue to drive down overall site PUE as more containers are used.
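For context, The Green Grid defines partial PUE (pPUE) as all of the energy consumed inside a boundary (here, the container) divided by the IT energy consumed inside that boundary. The short sketch below shows what a pPUE of 1.046 implies; the 1,000 kWh IT figure is an arbitrary assumption for illustration, not eBay's metering data.

```python
# A back-of-the-envelope illustration (not eBay's metering data) of what a
# container pPUE of 1.046 implies. pPUE = all energy inside the container
# boundary divided by the IT energy inside that boundary.

it_energy_kwh = 1_000.0   # assumed IT energy consumed inside the container
ppue = 1.046              # reported container pPUE on a 48 degree C day

total_container_energy_kwh = ppue * it_energy_kwh
infrastructure_overhead_kwh = total_container_energy_kwh - it_energy_kwh

print(f"Total container energy: {total_container_energy_kwh:.1f} kWh")
print(f"Cooling/power overhead: {infrastructure_overhead_kwh:.1f} kWh "
      f"({infrastructure_overhead_kwh / it_energy_kwh:.1%} of IT energy)")
```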
Competition among multiple vendors promises to increase data center efficiency beyond what traditional brick-and-mortar designs can achieve, because of the degree to which efficiency can be engineered to match the heat load. The Project Mercury data center is poised to benefit from rapid advances in efficiency, with its large, flexible space combined with its standard three-year technology refresh cycle.
For more information from The Green Grid on containerized and modular data center facilities, please refer to The Green Grid white paper #40, “Deploying And Using Containerized/Modular Data Center Facilities.”
Conclusion

Faced with the challenge of data center consolidation combined with the need to deploy tens of thousands of new servers in less than six months, eBay responded with Project Mercury, a holistic, end-to-end effort that resulted in a highly efficient data center. The publicly reported best-case site PUEL3,HC of 1.26 at 30–35 percent site load, and container pPUEL3,HC values as low as 1.018 in January 2012 and 1.046 in August 2011, are significant achievements, especially given that the data center is located in one of the hottest desert locations in the United States.
eBay is one of the first data center owners to report PUE values in compliance with The Green Grid’s recommendations. Adopting strict measurement protocols for PUE increases both measurement accuracy and precision, providing the highest levels of engineering confidence in the results.
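As a rough illustration of what energy-based PUE reporting involves, the sketch below divides cumulative facility energy by cumulative IT equipment energy over a reporting period, which is the general form of the Level 3 (L3) measurements cited in this paper. The monthly kWh figures are invented for the example and are not eBay's data.

```python
# A minimal sketch, assuming a simple list of meter totals, of energy-based
# PUE: cumulative energy at the utility input divided by cumulative energy
# measured at the IT equipment input over the reporting period.
# The readings below are hypothetical.

from typing import Sequence

def period_pue(total_facility_kwh: Sequence[float],
               it_equipment_kwh: Sequence[float]) -> float:
    """Energy-weighted PUE over a reporting period (e.g., monthly totals)."""
    if len(total_facility_kwh) != len(it_equipment_kwh):
        raise ValueError("need matching facility and IT energy samples")
    return sum(total_facility_kwh) / sum(it_equipment_kwh)

if __name__ == "__main__":
    facility = [820_000, 790_000, 905_000]   # kWh per month (hypothetical)
    it_load = [650_000, 640_000, 700_000]    # kWh per month (hypothetical)
    print(f"PUE over period: {period_pue(facility, it_load):.2f}")
```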
Project Mercury was a success because eBay’s IT and facilities organizations were aligned from the beginning. Parallel server and data center RFP processes that used PUE and TCO metrics to motivate optimizations in the server supply chain, coupled with a modular, scalable, and forward-thinking data center design, amplified this success.
eBay’s approach, aligned with The Green Grid’s Data Center Maturity Model, further testifies to the value of rightsizing data center spaces, using metrics to guide design and purchase decisions, and projecting several generations of technology into the future in order to achieve a longer data center lifespan and drive an end-to-end compute ecosystem.
Acknowledgments
The Green Grid would like to thank Mike Lewis, Dean Nelson, and Jeremy Rodriguez of eBay for their contributions to this case study.
All of the PUE values in this paper are as reported by eBay and have not been certified or validated by The Green Grid.