

up the frequency for a rapid response to workload demands (Figure 2). This opens the door for future software to balance physical infrastructure capacity with workload. This virtual gas pedal will match server performance with demand by increasing or decreasing clock frequency, further lowering total power consumption. This strategy is aligned with the DCMM’s Section 5.3, which recognizes the need to understand workload energy consumption.
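In concrete terms, the virtual gas pedal is dynamic frequency scaling driven by observed load. The sketch below is a minimal illustration of the idea, assuming a Linux host that exposes the cpufreq sysfs interface; the thresholds and the policy are illustrative assumptions, not eBay’s actual tooling.

# Illustrative "virtual gas pedal": raise or lower the CPU frequency ceiling
# to follow load. Assumes Linux cpufreq sysfs files and permission to write
# to them; the thresholds are arbitrary example values, not eBay's policy.
import glob

CPUFREQ_DIRS = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq"

def read_khz(path):
    with open(path) as f:
        return int(f.read().strip())

def throttle_for_load(utilization_pct):
    # Choose a frequency ceiling from the current utilization (example policy).
    for cpu in glob.glob(CPUFREQ_DIRS):
        lo = read_khz(cpu + "/cpuinfo_min_freq")
        hi = read_khz(cpu + "/cpuinfo_max_freq")
        if utilization_pct < 30:        # light load: cap the clock low
            target = lo
        elif utilization_pct < 80:      # moderate load: mid-range ceiling
            target = (lo + hi) // 2
        else:                           # peak demand: allow full frequency
            target = hi
        with open(cpu + "/scaling_max_freq", "w") as f:
            f.write(str(target))

In practice a small daemon would sample utilization (for example from /proc/stat) every few seconds and call throttle_for_load, providing the rapid response to workload demands described above.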


Managing Packaging Density


Greater data center packaging density reduces the use of costly data center area, increases business agility by allowing more servers to be deployed at once, and increases data center cooling efficiency. The DCMM’s Section 1.3 recommends that companies align rack power sizing with typical uses. The server RFPs that eBay issued clearly aligned with this recommendation, and they are designed to support two different deployment models:

• Fewer than 1,000 servers. When business requirements drive a deployment of one to 1,000 servers, the need is fulfilled with rack-and-roll deployments. This deployment unit has all power and network cabling infrastructure preconfigured and tested by the server manufacturer in custom eBay-specified racks. The rack-and-roll deployment model allows server racks to be moved quickly from the loading dock into position in the data center, reducing hardware deployment times from months to minutes.


eBay nicknamed the racks it uses in the data center “sidewinder,” because of the extra room on each side of the server space. The sidewinder racks that eBay sources from two manufacturers are optimized for size, power, cabling, weight, and cooling. These racks are 76 cm wide, 122 cm deep, and 48 RU high, supporting either 48- or 96-server configurations depending on their form factor. Network-enabled, modular power distribution units (PDUs) support capacities from 5 to 50 kW based on configuration. A top-of-rack switching model is used, with the network access switches mounted vertically in any of six 2-RU slots on either side of the stack of servers. These racks can support up to 1,600 kg of rolling load and accommodate in-row or passive rear-door cooling technologies. This configuration tightly aligns supporting infrastructure components without sacrificing any server space.
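As a rough sanity check on those figures, the arithmetic below tests whether a 96-server sidewinder configuration fits the stated power and space envelope. The rack height and PDU sizing come from the article; the per-server wattage and rack-unit footprint are assumptions chosen for illustration.

# Rough per-rack budget check for a sidewinder-style rack (illustrative only).
RACK_HEIGHT_RU = 48           # usable height (from the article); ToR switches
                              # mount vertically beside the servers, so they
                              # consume no server rack units
PDU_CAPACITY_KW = 2 * 28      # two 28 kW PDUs in the dense configuration
SERVERS_PER_RACK = 96         # dense configuration (from the article)
WATTS_PER_SERVER = 350        # assumption: typical dual-socket server under load
RU_PER_SERVER = 0.5           # assumption: two half-width 1U servers per slot

power_needed_kw = SERVERS_PER_RACK * WATTS_PER_SERVER / 1000
space_needed_ru = SERVERS_PER_RACK * RU_PER_SERVER
print(f"power: {power_needed_kw:.1f} kW needed vs {PDU_CAPACITY_KW} kW of PDU capacity")
print(f"space: {space_needed_ru:.0f} RU needed vs {RACK_HEIGHT_RU} RU available")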


• More than 1,000 servers. When business requirements drive a deployment of more than 1,000 servers, the economics shift from favoring the rack-and-roll model to containerized data centers. These deployment units allow server manufacturers to closely couple their server innovations with the power and cooling infrastructure that optimizes both TCO and PUE. From eBay’s perspective, containerized data centers support an even more efficient deployment model. The entire server complex can be moved to the data center roof space in one step, with modular power, cooling, and network connections made in a highly optimized, self-contained data center environment.


All three data center modules (shown in Figure 4 in the last issue of DCS) were deployed in less than one day, with one container holding 1,500 servers lifted from the truck to the rooftop in 22 minutes. The data center containers were sourced from two vendors and used completely different cooling designs, demonstrating the building’s flexibility to support different cooling technologies. There is space for eight additional modules in the 700 square meter area, bringing the total roof capacity to 6 MW of IT load.
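For a sense of scale, those roof figures can be turned into a rough per-module average; the arithmetic below simply divides the stated totals and is not a published eBay specification.

# Rough per-module average from the roof figures quoted above.
modules_deployed = 3           # containers already lifted onto the roof
modules_additional = 8         # remaining module positions on the 700 m^2 roof
total_it_load_mw = 6.0         # stated total roof capacity

modules_total = modules_deployed + modules_additional
avg_kw_per_module = total_it_load_mw * 1000 / modules_total
print(f"{modules_total} module positions, about {avg_kw_per_module:.0f} kW of IT load each on average")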


Data Center Design


In parallel with the server procurement process, eBay issued an RFP for its data center design that gave rough parameters for its construction. While TCO factored heavily in the design, the primary goals for the data center included year-round free cooling, driving eBay’s design PUE of 1.2, with rack densities of up to 40 kW per rack. Achieving such a low PUE in a hot, desert location in the United States virtually dictates out-of-the-box thinking on the part of the winning vendor.


Data Center Design Requirements

• Multi-tier space


The data center campus is a multi-tier environment, and high-availability applications are placed in the Tier IV space. Project Mercury is designed as a Tier II (redundant capacity components) space with a single path of UPS power. A second power path comes directly from the utility connection with no UPS backup. A Tier II topology normally implies scheduled downtime for maintenance and capacity upgrades; however, the Project Mercury design included concurrently maintainable Tier II infrastructure. This meant that eBay would be able to conduct maintenance on the mechanical, electrical, and plumbing infrastructure without requiring the IT equipment to be powered off. Without the alignment of resiliency requirements between facilities and IT, this would not have been possible.


• High density


The data center would have to support high density in both its conventional rack space and its containerized data center area. The team found that when 96 servers were installed in a single rack with two 28 kW PDUs and four 48-port Gigabit Ethernet switches, fewer racks, switches, and PDUs were necessary. The cooling system also extracts heat from dense racks more efficiently, and less data center space is needed, slowing the rate of data center expansion and extending the new data center’s lifespan.
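To see why the denser configuration needs less supporting equipment overall, the comparison below counts racks, PDUs, and switches for a fixed server fleet at two densities. The 96-server figure comes from the article; the 48-server baseline, the fleet size, and the assumption of constant per-rack overhead are simplifications for illustration.

# Compare supporting infrastructure at two rack densities (illustrative only).
FLEET_SIZE = 9_600            # assumed number of servers to deploy
PDUS_PER_RACK = 2             # from the article (two 28 kW PDUs per rack)
SWITCHES_PER_RACK = 4         # from the article (four 48-port GbE switches);
                              # a half-full rack might need fewer, so this
                              # slightly overstates the low-density case

def infrastructure(servers_per_rack):
    racks = -(-FLEET_SIZE // servers_per_rack)      # ceiling division
    return racks, racks * PDUS_PER_RACK, racks * SWITCHES_PER_RACK

for density in (48, 96):
    racks, pdus, switches = infrastructure(density)
    print(f"{density} servers/rack: {racks} racks, {pdus} PDUs, {switches} switches")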


• Modular and future proof

The data center must be modular so that it can evolve through four to five generations of technology refresh. To optimize the environment, compute capacity should be deployed in “whole rack” or “whole container” increments. The power system must scale from 4 MW on day one to 12 MW over the data center’s lifespan. The cooling system must scale by expanding capacity a module at a time. Each rack position must be prepared to scale up to 40 kW of power and cooling over time.


• Low PUE


Figure 2. eBay can adjust CPU frequency based on varying load, increasing energy savings in low-utilization periods and overclocking for peak demand. (Source: eBay diagram, used by permission)




As stated above, the design PUE needed to be less than 1.2, which virtually dictates a cooling system that pushes the envelope of available cooling technology.
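PUE is the ratio of total facility power to IT equipment power, so a 1.2 design target caps power and cooling overhead at 20 percent of the IT load. Applying that to the loads mentioned earlier (simple illustrative arithmetic, not eBay-published figures):

# PUE = total facility power / IT equipment power.
# At a design PUE of 1.2, overhead (cooling, power distribution, losses)
# may be at most 20% of the IT load. IT loads below are from the article.
DESIGN_PUE = 1.2

for it_load_mw in (4.0, 12.0):      # day-one and end-of-life power capacity
    total_mw = it_load_mw * DESIGN_PUE
    overhead_mw = total_mw - it_load_mw
    print(f"IT load {it_load_mw:.0f} MW -> facility total {total_mw:.1f} MW, overhead {overhead_mw:.1f} MW")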

