interconnects


larger and so prices are generally lower. Speeds are also going up: 100G Ethernet is starting to arrive in metropolitan area networks, though less so at the device level, and the IEEE is starting work on a standard for 400 Gb/s.

To implement RDMA, Gnodal developed its own ASIC, called the Peta, so it can apply its own technology to avoid congestion at low latency. This device is used in the company's GS family of switches. The trick, explains Taylor, is that it obeys the Ethernet standard at the device level, but once a packet enters a Gnodal network it morphs into another protocol using the Peta. The Peta supports 72 devices at 10G or 18 devices at 40G, and it is possible to put multiple ASICs in one product. The company specs single-switch latency of 150ns, which Taylor claims is three times better than any other Ethernet vendor, and 66ns between two switches; it is also possible to build switches with as many as 64k ports.


Because of RDMA, Ethernet performance tends to be comparable to InfiniBand in terms of latency. Intel, for example, recently published a white paper comparing the Ethernet-based iWARP version of RDMA with InfiniBand when running PAM-CRASH software from the ESI Group. It states: 'In lab testing, simulations of a car-to-car frontal crash gave nearly identical results between Ethernet networking with iWARP technology versus InfiniBand.' The iWARP system used the NE020 10 Gb/s Ethernet server cluster adapter with a Gnodal GS7200 switch, while the InfiniBand system used a Mellanox QDR ConnectX-2 adapter (note: not the firm's latest ConnectX-3 56 Gb/s adapter) with a Mellanox IS5030 switch.
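As an aside on what Ethernet-based RDMA means for application code: the verbs-style interface exposed by the librdmacm library is fabric-agnostic, so a minimal client sketch like the one below runs unchanged over iWARP, RoCE or InfiniBand. The server address, port and message buffer are placeholder values, error handling is pared back for brevity, and this is a generic illustration rather than the setup used in the Intel white paper.

/* Client-side RDMA send sketch using librdmacm.  The same code path is
 * used whether the underlying adapter speaks iWARP, RoCE or InfiniBand;
 * the application never names the fabric.  "192.0.2.10" and "7471" are
 * placeholders.  Build: cc rdma_send.c -lrdmacm -libverbs
 */
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char msg[64] = "hello over RDMA";

    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;            /* reliable connection */
    if (rdma_getaddrinfo("192.0.2.10", "7471", &hints, &res))
        return 1;

    memset(&attr, 0, sizeof attr);
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.sq_sig_all = 1;
    if (rdma_create_ep(&id, res, NULL, &attr))    /* allocates PD, CQs and QP */
        return 1;

    mr = rdma_reg_msgs(id, msg, sizeof msg);      /* register the send buffer */
    if (!mr || rdma_connect(id, NULL))
        return 1;

    if (rdma_post_send(id, NULL, msg, sizeof msg, mr, 0))
        return 1;
    while (rdma_get_send_comp(id, &wc) == 0)      /* wait for the send completion */
        ;

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}

The fabric is chosen during address resolution by the adapter itself, which is why benchmarks such as the one above can swap interconnects without touching application code.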


Which to use? Tere’s no doubt that Ethernet is gaining ground on InfiniBand in terms of latency. But the selection of which scheme to use is typically based on far more than pure latency, comments Gilad Shainer, VP of market development at Mellanox. System architects must look at the total system view. What do you want? Are you building upon an existing Ethernet storage infrastructure? How important is scalability? Does price play a role? How easy will it be to manage the infrastructure? For large-scale


Trends in interconnect within the Top500 (data compiled by Gnodal, November 2012)

Further information
Cray www.cray.com
Gnodal www.gnodal.com
InfiniBand Trade Association www.infinibandta.org
Intel www.intel.com
Mellanox Technologies www.mellanox.com
QLogic qlogic.com
Top500 www.top500.org




For large-scale infrastructures, says Shainer, InfiniBand clearly has a competitive edge over Ethernet. According to a report from Mellanox, InfiniBand is the most used interconnect on the list, connecting 224 systems – 20 per cent more than the 189 Ethernet-connected systems. It leads throughout the Top500 supercomputers, connecting 52 per cent of the Top100, 53 per cent of the Top200, 50 per cent of the Top300, 46 per cent of the Top400 and 45 per cent of the Top500. Further, FDR InfiniBand is the fastest-growing interconnect technology on the Top500, with a 2.3-times increase in the number of systems versus the June 2012 list. Finally, InfiniBand connects 43 per cent of the world's most powerful petaflop systems on the list. Shainer also points to Microsoft's Azure cloud platform, where the software giant states that, with the help of InfiniBand, it achieves 33 per cent lower cost per application and 90 per cent efficiency in a virtualised environment, approaching that of a physical environment.

On the other hand, anyone already committed to Ethernet, whether to maintain compatibility with existing infrastructure or because of a big investment in Ethernet technology, expertise and management tools, will be attracted to RDMA over that fabric. You can get close to InfiniBand-like latency, but with a much lower learning curve, and take advantage of the ubiquitous Ethernet ecosystem.

All of this is leading to increased use of Ethernet-based interconnects in HPC systems. In his analysis of the Top500, Gnodal's Taylor acknowledges that Gigabit Ethernet is now being squeezed by InfiniBand as the number of servers required to enter the Top500 increases. InfiniBand is still the 'flavour of the month', he notes, while custom interconnects are growing and occupying more of the Top10. He predicts that 10 Gb/s Ethernet will follow the Gigabit Ethernet line and gain in popularity as it becomes 'free' on servers, and that it will be able to scale thanks to emerging standards and Ethernet fabrics providing high utilisation.


Best of both worlds
If you're undecided as to which to go with, note that with iWARP, InfiniBand and RoCE, LAN and RDMA traffic can pass over a single wire. A further interesting fact is that you can now purchase hybrid adapter cards that support multiple schemes. It is possible, for example, to establish an InfiniBand connection while at the same time using Ethernet peripherals, with no changes required to Ethernet-based network equipment.

In the case of Mellanox, its Virtual Protocol Interconnect (VPI) enabled adapters make it possible for any standard networking, clustering, storage and management protocol to operate seamlessly over any converged network, leveraging a consolidated software stack. With auto-sense capability, each port can identify and operate on InfiniBand, Ethernet or Data Center Bridging (DCB) fabrics.
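As a rough illustration of what auto-sensing looks like from the software side, the following sketch (a generic libibverbs example, not Mellanox-specific tooling) enumerates the RDMA-capable adapters on a host and prints whether each port has come up with an InfiniBand or Ethernet link layer; device names and port counts will vary by system.

/* Enumerate RDMA-capable adapters and report each port's link layer.
 * On a VPI-style adapter the same device can expose ports that have
 * auto-sensed either InfiniBand or Ethernet.
 * Build: cc list_fabrics.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs)
        return 1;

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr dev_attr;

        if (!ctx)
            continue;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* verbs ports are numbered from 1 */
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                const char *fabric =
                    port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet" :
                    port_attr.link_layer == IBV_LINK_LAYER_INFINIBAND ? "InfiniBand" :
                    "unspecified";
                printf("%s port %d: %s\n",
                       ibv_get_device_name(devs[i]), port, fabric);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}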



