HPC news


HIGH-PERFORMANCE COMPUTING


Bristol University opens new data facility


A new data facility capable of holding approximately one Pbyte of information – the equivalent of 20 million four-drawer filing cabinets filled with text – has been opened to safeguard the University of Bristol's research data. The £2 million facility, named 'BluePeta', has been specifically designed for the university through collaboration with technology suppliers SCC and IBM, in response to the increasing volumes of research data being created and the need to curate this information securely over the long term.


Ian Stewart, director of the university’s Advanced Computing Research Centre, said: ‘Research funding bodies are increasingly asking universities to retain research data to allow its use, reuse and repurposing over the long term. BluePeta allows this process to take place in a central and secure environment, enabling researchers to maintain data integrity – the accuracy and consistency with which they store their data.’


Companies face extreme-scale computing challenges


Intel has selected E.T. International (ETI), a provider of system software solutions for hybrid and many-core hardware architectures, as system software partner to spearhead the development of high-performance parallel runtime systems for future many-core computer architectures. Part of the US Defense Advanced Research Projects Agency's (DARPA) recent Ubiquitous High Performance Computing (UHPC) programme, this collaboration is an element of the public-private research initiative to build an energy-efficient extreme-scale computer system. The DARPA UHPC programme seeks to develop these extreme-scale computing systems by 2018. Intel's UHPC team, jointly funded by Intel and DARPA, includes Rishi Khan, ETI vice president of R&D, among its principal investigators. In addition to ETI, other partners on the Intel team include Reservoir Labs; the University of Illinois at Urbana-Champaign; the University of California, San Diego; and the University of Delaware, led by principal investigator and ETI founder Guang Gao.


SDSC deploys new supercomputer

The San Diego Supercomputer Center (SDSC) at UC San Diego, US, has deployed a new high-performance computing (HPC) system named Trestles. The system is based on quad-socket, eight-core AMD Opteron compute nodes connected via a QDR InfiniBand fabric configured by SDSC and Appro, a provider of supercomputing solutions. Available to users of the TeraGrid, the US' largest open-access scientific discovery infrastructure, Trestles is among the five largest systems in the TeraGrid infrastructure, with 10,368 processor cores, a peak speed of 100 Tflops, 20 Tbytes of memory, and 38 Tbytes of flash memory.


In November 2009, SDSC announced a five-year, $20 million grant from the US National Science Foundation (NSF) to build and operate Gordon, the first high-performance supercomputer to employ a vast amount of flash memory. Dash, a smaller prototype, was deployed in April 2010, and Gordon will become operational in late 2011. Trestles will be available to TeraGrid users through 2013 and, to ensure that productivity remains high, SDSC will adjust allocation policies, queuing structures, user documentation, and training based on a quarterly review of usage metrics and user satisfaction data. All three systems are being integrated by Appro.


US FACILITY TO ADDRESS HPC COLOCATION NEEDS


Data Foundry's Texas 1 next-generation data centre, opening in June 2011, will fully support high-performance computing (HPC), the company has announced. The unique requirements of HPC systems have often led companies to build their own facilities rather than outsource to a colocation provider; however, these in-house facilities represent long-term capital deployments that can be out of sync with the much shorter lifetimes of HPC servers. Designed for HPC users across a range of industries, including oil and gas, the chilled-water plant design within the Texas 1 infrastructure allows HPC cooling to be separated from traditional room airflow, so that water-cooled cabinets can reach a power density of 50kW per cabinet, with higher limits to be accommodated in the future. 'Texas 1 supports high floor loads, private suites starting at 2,500 SF and flexible configurations including chimney cabinets, hot or cold aisle containment, and customised liquid cooling solutions,' explained Edward Henigin, chief technology officer of Data Foundry. As part of a 40-acre, 100-megawatt data centre development named the Data Ranch, Texas 1 will be the first data centre in the central Texas region designed from the ground up to support HPC requirements.



