FOCUS 2011/2012 REVIEW
Issue 19, January 2012
providing a benchmark for an efficient data center; and demonstrating proof of concept for a number of nontraditional design approaches.
“If you don’t know how efficient a data center can be, there is no way you can optimize yours,” he says. “You don’t even know where the opportunity is, or where you’re wasting energy.” The benchmarking data, however, is most useful when looked at in concert with the system’s design. Weale also sees great value in a big end user, such as Facebook, implementing an unorthodox design — no chillers, for example — and demonstrating its success.
“Everyone wants someone else to be first on a data center,” he says. “In this case, Facebook is saying: ‘Look, we’re first. We’re running this [without] chillers, and we’re running 80°F or higher, and we’re showing how we’re doing that exactly’ down to the server they’re using.”
While Facebook is not the first to build a data center without chillers, it is the first to share how it did it. “They’re taking that risk and they’re willing to release the benefit of that,” Weale says. “And the benefit of that is confidence in using that approach.”
A company can now build a chiller-less data center without installing a backup chiller just in case its calculations are wrong and the chiller-less setup doesn’t cut it. Facebook has, in effect, paid the “risk premium” that usually comes with implementing an innovative solution, and it has paid it for everyone else.
IS SECRECY REALLY A BIG ADVANTAGE?
In Weale’s view, the benefits of sharing the information outweigh the competitive advantage a company gains from keeping its data center strategy to itself, even though the advantage of secrecy is very real.
“There’s certainly a reason to keep this secret,” he says. When a company improves the efficiency of its data centers, the resulting savings can run into millions of dollars.
“It does go to the bottom line. It does go to the books. If they have a way to run a data center at a PUE of 1.0, it’s definitely a competitive advantage,” Weale says.
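For context, PUE (power usage effectiveness) is the ratio of total facility energy to the energy actually delivered to IT equipment, so a PUE of 1.0 means zero overhead. A back-of-the-envelope sketch, using hypothetical load and tariff figures rather than Facebook’s actual numbers, shows how efficiency gains translate into the sums Weale describes:

    # Back-of-the-envelope PUE savings. All figures are illustrative
    # assumptions, not Facebook's actual numbers.
    IT_LOAD_KW = 10_000        # hypothetical 10 MW of IT equipment
    PRICE_PER_KWH = 0.07       # assumed industrial electricity rate (USD)
    HOURS_PER_YEAR = 8_760

    def annual_cost(pue: float) -> float:
        """Total facility energy cost for a given PUE: overhead scales the IT load."""
        return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

    typical = annual_cost(1.9)    # an often-cited industry average of the era
    efficient = annual_cost(1.1)  # roughly what chiller-less designs reported
    print(f"Annual savings: ${typical - efficient:,.0f}")
    # Annual savings: $4,905,600 -- comfortably "millions of dollars"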
By sharing the way it saved that $1m, the company gives up that advantage but gains “a measure of goodwill” for taking the lead on sharing information from which everybody can benefit.
By showing an entire industry effective ways to reduce data center energy use, a company can also have a tremendous impact on the planet, given how much energy data centers consume.
APPLICABLE AT A SMALLER SCALE
Facebook’s data center design is not necessarily restricted to web-scale companies with homogeneous application workloads. “The concept certainly can scale down,” Weale says.
A company can build a data center, for example, that is not completely chiller-less, but has a portion of its cooling capacity supplied by a chiller-less system.
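As a rough illustration of how such a hybrid design might be sized, the sketch below (using randomly generated stand-in weather data and an assumed 80°F limit echoing the quote above; a real study would use a measured weather file) estimates what share of the year outside air alone could carry the cooling load:

    # Minimal sizing sketch for a partially chiller-less design.
    # The temperatures are random stand-ins for a real hourly weather file.
    import random

    random.seed(0)
    hourly_temps_f = [random.gauss(60, 15) for _ in range(8_760)]

    FREE_COOLING_LIMIT_F = 80  # assumed limit for outside-air cooling

    free_hours = sum(t <= FREE_COOLING_LIMIT_F for t in hourly_temps_f)
    share = free_hours / len(hourly_temps_f)
    print(f"Outside air suffices {share:.0%} of the year;")
    print(f"a chiller covers the remaining {1 - share:.0%}.")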
Data center developer and operator Digital Realty Trust joined the Open Compute community to help companies apply Open Compute principles at scales smaller than Facebook’s. Digital Realty Trust CTO Jim Smith says this can be about more than applying just a few of the design principles.
“We probably can’t match 100% of the spec,” but matching about 75% is doable, Smith says. Digital Realty is making Open Compute-based data centers available as one of its turnkey products in the hopes of capturing some of the expected demand.
Frankovsky says that Digital Realty provides an opportunity for a company that cannot afford, or has no need, to build its own data center to take advantage of Open Compute concepts. It is working to adapt Open Compute electrical infrastructure designs for data center pods with 2MW or less of power capacity, Smith says, adding that there is demand for outsourced Open Compute data centers both small and large.
“There is probably a handful of 10MW and up players that may be interested, but there also may be some that are interested in smaller ones,” Smith says. “There are going
to be some bigger players that… may want to outsource it on a large scale, and we want to be able to do that.”
Demand for Open Compute data centers is going to come from customers who want to deploy Open Compute servers, Smith says. “It really relies on the server power supply to make the magic happen.”
Besides adiabatic cooling, Open Compute departs from Digital Realty’s traditional turnkey designs in its electrical infrastructure, which omits a centralized UPS, instead using custom battery cabinets and feeding high-voltage AC power directly to server power supplies.
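To see why the power chain matters, the sketch below multiplies per-stage conversion efficiencies for the two approaches. The stage efficiencies are assumptions for illustration; the actual figures are in the published Open Compute specifications:

    # Compare end-to-end efficiency of two power chains.
    # All stage efficiencies are illustrative assumptions.
    def chain_efficiency(*stages: float) -> float:
        """End-to-end efficiency is the product of per-stage efficiencies."""
        result = 1.0
        for stage in stages:
            result *= stage
        return result

    # Traditional: utility -> double-conversion UPS -> PDU transformer -> PSU
    traditional = chain_efficiency(0.92, 0.97, 0.90)
    # Open Compute style: utility AC fed straight to a high-efficiency PSU;
    # the battery cabinet only carries the load during an outage.
    direct = chain_efficiency(0.945)

    print(f"traditional chain: {traditional:.1%}")  # ~80.3%
    print(f"direct feed:       {direct:.1%}")       # 94.5%

Every conversion stage removed is energy that never becomes waste heat, which in turn shrinks the cooling load.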
DATA CENTER COMMODITIZATION
Another company that joined OCP as an enabler was Future Facilities, which provides computational fluid dynamics (CFD) software tools for designing electronics and data centers. Frankovsky says the development was an “interesting and refreshing one”.
One of the challenges in applying open source methodologies to hardware and infrastructure has been the ability, or inability, of players to consume the specifications. To address this problem, Future Facilities created a free downloadable viewer with Open Compute concepts built in.
Sherman Ikemoto, general manager for North America at Future Facilities, says OCP is a step toward further standardization of data center design and the commoditization of data centers, much as Intel’s standardization of the PC led to its commoditization and the resulting explosion of computing. His company, he says, wants to play a vital role in enabling that process.
Future Facilities takes credit for being part of Intel’s work to standardize PC design. “In a way, we contributed to the standardization of PC, because we were the way that different companies exchanged information and, of course, we were the simulation platform for the Intel PC,” Ikemoto says.
“We believe the same concept is going to be in the works with Open Compute. The data center will become a commodity in the same way that the PC is today.”