Intelligent interconnects

Robert Roe looks at how interconnect technology is helping to change the landscape of high-performance computing


In recent years, the movement of data – rather than the sheer speed of computation – has become a major bottleneck for large-scale or particularly data-intensive HPC applications.


This has driven bandwidth increases in interconnect technology, effectively widening the pipe that data uses to travel across the network. While the HPC community will still see further bandwidth increases in the future – most likely 200Gb/s, to be introduced in the next two years – interconnect developers are also investigating new technologies that improve the performance of a network by reducing the latency of communications: the time taken for a message to be passed across the network. These developments are taking shape in two distinct directions. Hardware-accelerated smart switches are on the horizon, with commercial products from both Mellanox and Bull due to ship in 2016 and 2017 respectively. There is also discussion around interconnect technology from Calient that uses purely optical switches; this could be used to produce a reconfigurable supercomputing architecture, although these products are only at the development stage at this time.
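To make the idea of latency concrete, the sketch below shows a minimal MPI 'ping-pong' between two ranks – a conventional way to measure the one-way latency of a message crossing an interconnect. It is an illustrative example rather than anything drawn from the article; the one-byte payload and iteration count are arbitrary choices.

/* Minimal MPI ping-pong latency sketch (illustrative only).
 * Rank 0 sends a 1-byte message to rank 1 and waits for the reply;
 * half the average round-trip time approximates the one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least two ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int iterations = 1000;
    char byte = 0;                    /* tiny payload: measures latency, not bandwidth */

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iterations; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("approx. one-way latency: %.2f microseconds\n",
               elapsed / iterations / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}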


Developing technology specifically for HPC

Mellanox has been at the forefront of HPC interconnect technology for a number of years. In the latest Top500 – the twice-yearly list of the fastest supercomputers, based on the LINPACK benchmark – around 47 per cent of systems use Mellanox InfiniBand interconnects. It should be noted that the LINPACK benchmark is used to determine the raw FLOPS performance of an HPC system, rather than its ability to pass data across the network. However, the appearance of Mellanox on this list indicates the prevalence of Mellanox technology in high-end HPC systems.

Unsurprisingly, Mellanox aims to continue the dominance that it has established in recent years. At the US Supercomputing conference, held in Austin, Texas, in November 2015, Mellanox announced its first smart switch based on 100Gb/s EDR InfiniBand, called Switch-IB 2. Gilad Shainer, vice president for marketing at Mellanox Technologies, explained that there have been several key shifts in the development of HPC technology. The first was the shift from SMP nodes to cluster-based supercomputing: 'The ability to take off-the-shelf servers and connect them to a supercomputer requires a very fast interconnect technology,' commented Shainer.

The next shift in HPC was the move from single fast cores to multicore or many-core architectures. This increase in parallelism and the introduction of accelerator technologies further increased HPC's reliance on interconnects, as computation was being split into increasingly





