thermal management


prototype chips aren’t top in energy efficiency. When we bring chip fabrication to production levels, we can optimise the back end of the process to tweak energy parameters. Historically we’ve gained between 10 and 20 per cent in performance when going from prototype to production chips, but with today’s silicon technology it’s getting tough to maintain that level. We just aren’t sure what gains to expect.’ In addition, the final product will require less memory per chip in the same rack package, which will further improve the energy budget.


Revolutionary low-power memory devices on the way

Power consumption of CPUs seems to have levelled out somewhere in the range of 100W but, with their multiple cores each wanting memory, power consumption for memory and storage is becoming a larger issue. Marc Hamilton, VP of HPC at Hewlett-Packard, says that, when examined over a group of servers, memory can consume more power than the CPUs. Depending on the memory technology, he adds, that power has jumped. A typical two-socket server with two 100W CPUs consumes 1000W of power, and, combining all types of memory, memory’s contribution can be 50 per cent of the power budget. This same server could have 24 to 36 memory chips at 4W each, so any savings add up quickly.
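As rough arithmetic only, the sketch below totals the figures quoted above (24 to 36 memory chips at 4W each against two 100W CPUs); the configuration is illustrative, not a measured server.

```python
# Rough arithmetic from the figures quoted in the article: 24-36 memory chips
# at roughly 4W each versus two 100W CPUs. Illustrative values, not measurements.
CPU_COUNT = 2
CPU_WATTS = 100
CHIP_WATTS = 4

for chip_count in (24, 36):
    memory_watts = chip_count * CHIP_WATTS
    cpu_watts = CPU_COUNT * CPU_WATTS
    share = memory_watts / (memory_watts + cpu_watts)
    print(f"{chip_count} chips: {memory_watts}W memory vs {cpu_watts}W CPU "
          f"({share:.0%} of the CPU-plus-memory total)")
```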


This shot from the SC10 conference shows what the IBM Blue Gene/L should look like with a 1:4 scale model in the background and a typical node board in the right foreground




AN INCREASINGLY POPULAR ALTERNATIVE TO BRICK-AND-MORTAR DATA CENTRES IS THE MODULAR DATA CENTRE (MDC), WITH WHICH OPERATORS CAN ADD COMPUTE CAPABILITY QUICKLY


Hamilton points to an innovation which could dramatically cut memory power: the memristor (memory resistor), which is being heavily developed at HP Labs. In this component, when charge flows in one direction its resistance increases, and when charge flows in the opposite direction the resistance decreases. If the flow of charge is stopped by turning off the applied voltage, the component remembers the last resistance it had – and, when the flow of charge starts again, the resistance is what it was when last active. This memory effect can be used to store data. Contrast this operation with a conventional DRAM chip, which needs power to maintain its data. This technology could eventually take over all areas of memory and is expected to have a pricing structure similar to DRAMs when it becomes commercially available – which, Hamilton says, should be in 2013.
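A minimal sketch of the memory effect described above, assuming a toy linear model rather than HP’s actual device physics: charge flow in one direction raises the resistance, flow in the other lowers it, and the value persists when no charge flows.

```python
# Toy model of the memristor behaviour described in the text: resistance drifts
# with the direction of charge flow and is retained when no charge flows.
# All constants are arbitrary illustration values, not device parameters.

R_MIN, R_MAX = 100.0, 16000.0   # resistance bounds in ohms (assumed)
SENSITIVITY = 50.0              # resistance change per unit of charge (assumed)

class ToyMemristor:
    def __init__(self, resistance=8000.0):
        self.resistance = resistance

    def apply_charge(self, charge):
        """Charge flow in one direction raises resistance; the other lowers it."""
        self.resistance = max(R_MIN, min(R_MAX, self.resistance + SENSITIVITY * charge))

    def read_bit(self):
        """Interpret the retained resistance as a stored bit, no refresh needed."""
        return 1 if self.resistance > (R_MIN + R_MAX) / 2 else 0

cell = ToyMemristor()
cell.apply_charge(+200)    # drive charge one way: resistance rises, storing a 1
print(cell.read_bit())     # -> 1, and the state persists with the voltage off
cell.apply_charge(-400)    # drive charge the other way: resistance falls, storing a 0
print(cell.read_bit())     # -> 0
```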


Another place to save energy is in networking, adds Hamilton, who points to HP’s #2 ranking in the Green500 list as actually being the top production machine, because the #1 spot goes to a prototype. In the ProLiant SL390 G7s, the networking chips are built directly onto the server’s PC board, which means a shorter signal path, more efficient transfer of data to the CPU and thus better overall efficiency. The Tokyo system also takes measures to speed up access to mass storage with solid-state disks (SSDs); each server in the Tokyo system has 60 to 200GB of SSD.




Modular data centres

An increasingly popular alternative to brick-and-mortar data centres is the modular data centre (MDC), with which operators can add compute capability quickly and without building or expanding a facility. An illustration of how far things have come in this area is SGI’s third-generation product, the ICE Cube Air, which represents four years of MDC design and deployment experience. Key features are that HVAC (heating/ventilation/air conditioning) is completely eliminated, it is operational within hours, and it has a starting price of $99,000. Capable of housing up to four compute racks, the ICE Cube Air expands in standard increments out to 80 racks/97,920 cores/143.36 Pbytes.


Using efficient fans along with a three-stage evaporative cooling system, this MDC runs almost anywhere in the world on air and evaporative cooling alone, achieving a power usage effectiveness (PUE) ratio of less than 1.06. Supplemental direct expansion using chilled-water coils provides an optional cooling system for customers deploying in extreme environments.
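PUE is total facility power divided by the power delivered to the IT equipment, so a figure below 1.06 means less than six per cent overhead for cooling and power distribution. A minimal sketch with assumed wattages:

```python
# Power usage effectiveness: total facility power / IT equipment power.
# The loads below are assumed purely to illustrate the ratio.
it_load_kw = 400.0     # power drawn by the compute racks (assumed)
overhead_kw = 22.0     # fans, evaporative cooling, power distribution (assumed)

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.3f}")   # 1.055, i.e. under the 1.06 figure quoted for ICE Cube Air
```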


Using one per cent of the water needed in standard water-chilled containers and requiring a water flow of only two gallons per minute, the ICE Cube Air can get its water from almost any clean source, including a garden hose.


‘In our standard rack designs,’ adds SGI’s chief engineer Tim McCann, ‘we strive for air flow that is as linear as possible, with fewer turns and bends, which then reduces the energy needed for blowers. Further, we augment cooling with water-cooled rear-door heat exchangers that move the heat into a facility’s chilled water system. This takes heat removal closer to the load and saves on air-blower expense for the computer room itself.’


Power supplies, too

Another aspect to consider at the rack level is the power conversion from 200V AC to 12V or 48V DC. A while back, power supply efficiencies in the range of 80 to 85 per cent were common, but things have since improved dramatically. Consider the listings at 80PLUS.org, which requires multi-output power supplies in computers and servers to be 80 per cent or more energy efficient at 20, 50 and 100 per cent of rated load, with a true power factor of 0.9 or greater. For the Platinum rating, a power supply must have a PFC (power factor correction) of 0.95 at 50 per cent load, along with efficiency of at least 94 per cent at 50 per cent load and 91 per cent at 100 per cent load. The list shows that there are now quite a number of Platinum-rated supplies available.
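As an illustration of the Platinum thresholds quoted above, a minimal sketch that checks hypothetical measured figures against them; the measurement values are invented, and 80PLUS.org publishes the authoritative test reports.

```python
# Check measured figures against the Platinum thresholds quoted in the text:
# efficiency of at least 94% at 50% load and 91% at 100% load, plus a power
# factor of at least 0.95 at 50% load. The example measurements are invented.

def meets_platinum(eff_50: float, eff_100: float, pf_50: float) -> bool:
    return eff_50 >= 0.94 and eff_100 >= 0.91 and pf_50 >= 0.95

print(meets_platinum(eff_50=0.945, eff_100=0.915, pf_50=0.96))  # True
print(meets_platinum(eff_50=0.930, eff_100=0.915, pf_50=0.96))  # False, fails at 50% load
```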


80PLUS.org is illustrative of the trend of establishing organisations to help promote energy efficiency. For instance, many people feel that energy efficiency metrics are not



