High-Performance Computing News


WATSON CLOUD SERVICES


IBM has announced new Watson services delivered through the cloud. IBM's Watson Discovery Advisor is designed to reduce the time it takes researchers to draw conclusions from their work; the other service, Watson Analytics, is used to explore Big Data insights through visual representations. Watson Discovery Advisor applies Watson's cognitive intelligence to save researchers time by reading through, determining the context of, and synthesising vast amounts of data. It helps users pinpoint connections within the data, accelerating their work by surfacing links that are often overlooked in the volume of information available from relevant data sources.
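Watson's methods are proprietary, but the kind of indirect connection such tools surface can be illustrated with the classic 'A-B-C' heuristic from literature-based discovery: two entities that never appear in the same document may still be linked through shared intermediates. The sketch below, in Python with a toy corpus echoing Swanson's well-known fish-oil/Raynaud's finding, is purely illustrative and is not IBM's algorithm:

    # Generic illustration of indirect-connection discovery: find entity
    # pairs (A, C) that never co-occur in any document but share many
    # intermediate entities B. Not Watson's actual method.
    from itertools import combinations
    from collections import defaultdict

    documents = [                     # toy corpus: entities mentioned per paper
        {"fish oil", "blood viscosity"},
        {"blood viscosity", "Raynaud's syndrome"},
        {"fish oil", "platelet aggregation"},
        {"platelet aggregation", "Raynaud's syndrome"},
    ]

    neighbours = defaultdict(set)     # entity -> entities it co-occurs with
    for doc in documents:
        for a, b in combinations(doc, 2):
            neighbours[a].add(b)
            neighbours[b].add(a)

    # Rank never-co-occurring pairs by their number of shared intermediates.
    for a, c in combinations(sorted(neighbours), 2):
        if c not in neighbours[a]:
            shared = neighbours[a] & neighbours[c]
            if shared:
                print(f"{a} -- {c}: linked via {sorted(shared)}")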






Science to benefit as supercomputing opens its doors


Researchers in the USA and in Europe have been offered access to some of the world's most sophisticated supercomputers, as a result of three initiatives announced recently. The US National Science Foundation (NSF) has opened up the Blue Waters supercomputer to US researchers. The Extreme Science and Engineering Discovery Environment (XSEDE) has launched new open-source tools so that scientists can move their research from campus-level computers to national computational facilities. Finally, XSEDE has joined with the Partnership for Advanced Computing in Europe (PRACE) to extend their existing collaboration and support research teams spanning the US and Europe.


The NSF has invited US researchers to apply for run-time on the Blue Waters supercomputer, in an effort to provide the computational capability for investigators to tackle much larger and more complex research problems. Applicants must be able to show a compelling science or engineering challenge that will require petascale computing resources, and must be prepared to demonstrate that the challenge will exploit the computing capabilities offered by Blue Waters effectively. Blue Waters is a petascale system consisting of 237 racks of Cray XE6 nodes, 32 racks of Cray XK7 nodes with NVIDIA GK110 Kepler GPUs, and more than 25 petabytes of usable online storage. The system is located at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. The NCSA and Cray are currently testing the functionality, features, performance, and reliability of the system at full capacity. As part of these tests, a representative production workload of science and engineering applications will run on Blue Waters; in turn, this will improve the ability of the Petascale Computing Resource Allocation team to use the system at full capacity.


LENOVO AND IBM JOIN FORCES


Lenovo and IBM have announced an agreement for Lenovo to acquire IBM's x86 server business. The deal includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, blade networking, and maintenance operations. The purchase price is US$2.3 billion, approximately US$2 billion of which will be paid in cash and the balance (roughly US$300 million) in Lenovo stock. IBM will retain its System z mainframes, Power Systems, storage systems, Power-based Flex servers, and PureApplication and PureData appliances.


Texas A&M University partners with IBM to drive computational science


Texas A&M University is partnering with IBM to create on scientific research that will be supported by a large computational sciences infrastructure -- the initial focus will be on agriculture, geosciences and engineering. The collaboration will harness big data analytics and HPC systems to research on improving extraction of energy resources, facilitating the smart energy grid, accelerating materials development, improving disease identification and tracking in animals, and developing a better understanding of global food supplies.


The A&M system will consist of a


research computing cloud that will be made up of IBM hardware and software. Blue Gene/Q will serve as the foundation of the computing infrastructure, with a Blue Gene/Q system consisting of two racks, with more than 2,000 compute nodes, providing 418 teraflops (TF)


30 SCIENTIFIC COMPUTING WORLD


of sustained performance. A total of 75 PowerLinux 7R2 servers with POWER7+ microprocessors will be connected by 10GbE into a system optimised for big data analytics and high performance computing. This complex includes IBM BigInsights and Platform Symphony software, IBM Platform LSF scheduler, and IBM General Parallel File System. The system will also contain an estimated 900 IBM System x dense hyperscale compute nodes as part of an IBM NeXtScale system. Some of the nodes will be


managed by Platform Cluster Manager Advanced Edition (PCM-AE) as a University-wide HPC cloud while the others will be managed by Platform Cluster Manager Standard Edition (PCM-SE) and serve as a general purpose compute infrastructure for the geosciences and open source analytics initiatives.
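The quoted 418 TF can be sanity-checked against standard Blue Gene/Q specifications; the per-rack node count, core count, and clock speed below are published Blue Gene/Q characteristics rather than details from the Texas A&M announcement:

    # Back-of-the-envelope check of the quoted 418 TF for two Blue Gene/Q racks.
    racks = 2
    nodes_per_rack = 1024        # two racks: "more than 2,000 compute nodes"
    cores_per_node = 16          # PowerPC A2 compute cores
    flops_per_core_cycle = 8     # quad floating-point units with fused multiply-add
    clock_ghz = 1.6

    peak_gflops = (racks * nodes_per_rack * cores_per_node
                   * flops_per_core_cycle * clock_ghz)
    print(f"Peak: {peak_gflops / 1000:.1f} TF")  # ~419.4 TF, in line with the quoted 418 TF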




LSI STORAGE FOR OPEN COMPUTE PROJECT


LSI is contributing two storage infrastructure reference designs to the Open Compute Project (OCP).


The first is a board design for a 12Gbps SAS Open Vault storage enclosure.


The Open Vault is a cost-effective storage solution with a modular I/O topology that is built for the Open Rack. The design is intended to enable Open Vault to optimise the performance of a 12Gbps SAS infrastructure by using LSI's DataBolt bandwidth-optimiser technology, which aggregates the performance of storage devices to enable 30 to 60 per cent higher bandwidth from existing SATA drives in the Open Vault enclosure.
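The headline gain comes from the expander buffering traffic from slower end devices so that the host-facing link can keep running at the full 12Gbps rate, rather than rate-matching down to the 6Gbps SATA drives. A rough model of where such numbers come from follows; the per-lane throughput figures are illustrative assumptions, not LSI specifications:

    # Rough, illustrative model of SAS bandwidth aggregation.
    lanes = 4                  # x4 wide port between controller and expander
    usable_12g_mb_s = 1100     # approx. usable MB/s per lane at 12Gbps SAS
    usable_6g_mb_s = 550       # approx. usable MB/s when rate-matched to 6Gbps SATA

    without_buffering = lanes * usable_6g_mb_s   # host link held at the drives' rate
    with_buffering = lanes * usable_12g_mb_s     # expander buffers 6G traffic at 12G

    gain = (with_buffering - without_buffering) / without_buffering
    print(f"{without_buffering} MB/s -> {with_buffering} MB/s (up to +{gain:.0%})")
    # The raw link arithmetic gives an upper bound of roughly 2x; in practice
    # drive counts, protocol overhead, and workload mix keep realised gains
    # lower, hence the quoted 30 to 60 per cent.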



