interconnects


topology called Dragonfly. It is designed to work in Cray’s next-generation supercomputer, the XC30, previously code-named ‘Cascade’. When it ships next year, the Aries XC interconnect will plug directly into the on-chip PCI-Express 3.0 controllers on the Xeon E5-2600 processors, which will let Aries speak directly to any other device with a PCI-Express 3.0 port, such as a CPU or a GPU.


Back to the RDMA products shipping today: what’s the key difference between InfiniBand and iWARP? Basically, iWARP needs TCP (Transmission Control Protocol) to operate. Most Ethernet solutions run TCP on the server CPU, which means that iWARP consumes CPU cycles and does not truly bypass the CPU; in a sense, it’s a software emulation of RDMA.
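To make the RDMA idea concrete, below is a minimal sketch in C against the libibverbs API, showing how an application registers a buffer and posts a one-sided RDMA write for the adapter to carry out. It assumes a protection domain and an already-connected queue pair, and that the peer’s buffer address and memory key (rkey) have been exchanged out of band; connection setup and completion handling are omitted, and the function name is purely illustrative.

/* Illustrative sketch only: post a one-sided RDMA write using libibverbs.
 * Assumes the queue pair 'qp' is already created and connected, and that
 * the remote buffer address and rkey were exchanged out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                    void *buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    /* Register (pin) the local buffer so the adapter can DMA from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr;
    memset(&wr, 0, sizeof wr);
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;        /* peer buffer address */
    wr.wr.rdma.rkey        = rkey;               /* peer memory key */

    /* From here the adapter moves the data; the remote CPU is not involved. */
    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);
}

Once the work request is posted, the data movement is handled by the adapter itself; with iWARP, by contrast, the TCP processing underneath typically still runs on the host CPU, which is the distinction drawn above.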


Some interesting comments about the QLogic acquisition come from Intel’s Joe Yaworski, who handles product marketing for the company’s Fabric Products division and who came to Intel as part of the acquisition of the QLogic InfiniBand technology, where he was director of marketing for HPC programs. He maintains that InfiniBand was originally designed for data centres and not HPC, whereas the InfiniBand-based True Scale was designed from the ground up for HPC. It uses a PSM (Performance Scale Messaging) library to handle very small messages efficiently, which he feels is important in HPC, where 90 per cent of messages are 256 bytes or less and there can be tens of millions of them in a large cluster. He adds that True Scale has excellent collectives performance; collectives are the parallel-computing operations in which data is simultaneously sent to or received from many nodes.
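For readers less familiar with collectives, the short MPI sketch below is a generic illustration (not Intel’s PSM code): every rank contributes a payload of just 64 bytes – the kind of very small message Yaworski describes – and every rank receives the combined result in a single operation.

/* Generic MPI collective example: tiny per-rank payloads, combined
 * and delivered to all ranks in one operation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* 8 doubles = 64 bytes: well under the 256-byte messages said to
     * dominate HPC traffic. */
    double local[8], global[8];
    for (int i = 0; i < 8; i++)
        local[i] = rank + 0.1 * i;

    /* The collective: data from every node is summed and the result is
     * returned to every node. */
    MPI_Allreduce(local, global, 8, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("element 0 summed across %d ranks: %f\n", size, global[0]);

    MPI_Finalize();
    return 0;
}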


While you may think of Intel as a chip vendor, in the networking space it primarily sells adapters and switches, and at the time of writing it does not sell InfiniBand silicon. Note, though, that the PSM is executed by a Xeon processor. Intel has not moved interconnect technology onto its processors at this time, but Yaworski notes that the ultimate goal is to drive the fabric closer and closer to the processor. This will leverage continually improving processor performance and lead to better performance through even lower latency, reduced component count and lower costs, as well as improved power consumption. As for future plans in this regard, he sees it replacing the PCI bus as the means by which processors communicate over the fabric, but ‘that’s all I can say today.’


InfiniBand still the speed king
In dealing with InfiniBand, we face a new set of acronyms.




Mellanox’s Connect-IB dual-port FDR adapter achieves throughput of more than 100 Gb/s


INFINIBAND WAS ORIGINALLY DESIGNED FOR DATA CENTRES AND NOT HPC


Intel’s QLE7340 True Scale host channel adapter is based on the InfiniBand technology acquired from QLogic


An InfiniBand link is a serial link operating at one of several possible data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR) and enhanced data rate (EDR). On a per-lane basis, the SDR signalling rate is 2.5 Gb/s in each direction, while DDR is 5 Gb/s, QDR is 10 Gb/s, FDR is 14.0625 Gb/s and EDR is 25.78125 Gb/s. The InfiniBand Trade Association has also published a roadmap that includes faster future schemes, including HDR (High Data Rate) and NDR (Next Data Rate); Mellanox has announced its intention to achieve 200 Gb/s by 2016.
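A quick back-of-the-envelope sketch, assuming the common 4x (four-lane) link width and the standard line encodings (8b/10b for SDR, DDR and QDR; 64b/66b for FDR and EDR), shows how those per-lane signalling rates translate into the familiar per-link figures:

/* Rough check of 4x InfiniBand link bandwidth from per-lane rates,
 * assuming standard encodings: 8b/10b (SDR/DDR/QDR), 64b/66b (FDR/EDR). */
#include <stdio.h>

int main(void)
{
    struct { const char *name; double lane_gbps; double encoding; } rates[] = {
        { "SDR",  2.5,       8.0 / 10.0 },
        { "DDR",  5.0,       8.0 / 10.0 },
        { "QDR", 10.0,       8.0 / 10.0 },
        { "FDR", 14.0625,   64.0 / 66.0 },
        { "EDR", 25.78125,  64.0 / 66.0 },
    };

    for (int i = 0; i < 5; i++) {
        double raw  = rates[i].lane_gbps * 4;    /* 4 lanes, raw signalling */
        double data = raw * rates[i].encoding;   /* after line encoding */
        printf("%s 4x: %.3f Gb/s raw, ~%.1f Gb/s of data\n",
               rates[i].name, raw, data);
    }
    return 0;
}

Run as written, it reports roughly 8, 16, 32, 54.5 and 100 Gb/s of data bandwidth for 4x SDR, DDR, QDR, FDR and EDR links respectively.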


However, there’s not always agreement on how much speed is actually needed. Consider some comments from Intel’s Yaworski. He states that bandwidth is actually the least important performance factor, and refers to a study done at QLogic, which is now being repeated. The study looked at a large number of MPI applications in areas such as oil and gas, molecular dynamics and other sciences, and profiled their MPI requirements. The results were that all these applications ran fine on 20 Gb/s DDR, and that QDR was overkill. More critical are the message rate, latency and collectives.


To show what is possible with InfiniBand, consider one of Mellanox’s latest products, the Connect-IB adapter, which is now sampling. In what the firm refers to as ‘world-record performance results’, the Connect-IB dual-port 56 Gb/s FDR InfiniBand adapter achieves throughput of more than 100 Gb/s using PCI Express 3.0 x16, and more than 135 million messages per second – 4.5 times higher than previous or competing solutions. The Connect-IB line consists of single- and dual-port adapters for PCI Express 2.0 x16; each port supports FDR 56 Gb/s InfiniBand with MPI ping latency of less than 1μs.
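As a rough sanity check on that headline figure (and assuming it is quoted in gigabits rather than gigabytes per second): two FDR ports at 56 Gb/s of raw signalling deliver, after 64b/66b encoding, about 2 × 54.5 ≈ 109 Gb/s of aggregate data bandwidth, so ‘more than 100 Gb/s’ corresponds to both ports running close to flat out.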


In addition, it will soon be possible to use InfiniBand over long distances and thus extend its use beyond a single data centre network, to deliver higher performance to local, campus and even metro applications. The MetroX TX6100, supporting up to 10km at 10 Gb/s, will be available in 1Q 2013; the TX6200, supporting up to 10km at 40 Gb/s, will be available in 3Q 2013.


Ethernet activity heating up
While only a few companies dominate the InfiniBand market, there is much more activity in Ethernet for HPC. Indeed, Ethernet has a long history in HPC – it’s only in the last few years that InfiniBand has had more systems in the Top500 than Ethernet, reports Gnodal’s Taylor. The benefits of Ethernet are obvious.


First, it’s basically free at the server level, and soon all servers will have 10G Ethernet, a trend being driven by the enterprise sector, where Ethernet is still the ubiquitous networking scheme. In addition, you have a far wider choice of peripherals in areas such as storage. Another benefit is that many people are familiar with the technology and there’s a large skill base, which is important given the high demand for HPC experts. Finally, there’s the fact that the market for Ethernet devices and peripherals is



