[Figure: Calient's GPU pooling and allocation concept – high-performance computing resource pools (PCIe switch pool, server pool, GPU pool, InfiniBand switch pool)]

[Image: CALIENT S-Series OCS]

Daniel Tardent, VP of marketing at Calient, said: 'We build a switch that is a pure optical-layer circuit switch based on 3D MEMS technology using micromirrors. This is switching at its most simple level; we are not doing any kind of packet-based switching whatsoever.'

While this technology is still in the early stages of development, Calient's switch concept offers the prospect of redefining supercomputing architectures – enabling HPC users to reconfigure their supercomputers from a shared pool of resources. Calient's proposed solution uses single-mode fibres for a switch system based on 320 input/output ports, which could support any combination of compute elements. Tardent explained: 'In between those two planes are micromirrors that can be adjusted by applying electrostatic voltages onto the back of the tiny mirrors. By angling the mirrors on both sides of the matrix, you can bounce light from any input fibre to any output fibre, which gives you a 320-port, purely optical switch.'

Tardent was quick to distance himself from conventional interconnect providers. He explained that Calient was not trying to compare itself to companies with packet-based switching, because the company was offering a different approach to supercomputing. He said: 'We can re-architect the distribution of compute resources to deliver optimum efficiency. We are fundamentally going to re-architect the relationships of how things are connected to give you the best performance.'

Tardent explained that there are three main areas that set Calient's optical switch apart from the competition. 'Because it is pure physical layer, pure light, we are completely agnostic to bit-rate and protocol. It doesn't matter if it is 10Gb, 40Gb, 100Gb or 400Gb – as long as it is single-mode light we can switch it.'

The other key factor for HPC is the extremely low latency of a purely optical system. 'Once you set up a connection, the latency through the switch is only 30 nanoseconds,' Tardent confirmed.

Today the relationships between the GPUs or other logic elements of an HPC system are fixed. Once an architecture is constructed, you can swap the components, but the underlying structure remains the same.

WE COULD NOT GET ANY MORE PERFORMANCE FROM JUST INCREASING THE FREQUENCY OF CPU

Calient's vision of supercomputers in the future removes the fixed nature of the architecture; an optical switch could match any combination of GPUs with servers, storage, or other components as they are needed. If the systems could be managed and reconfigured with minimal latency and effort, then this concept could pave the way for a new, adaptable supercomputing architecture – underpinned by a very flexible interconnect technology.

Tardent commented: 'One of the things that makes the Calient style of optical circuit switch work in this scenario is the extremely low latency. Even if you were going to think about doing this with a packet-based technology, it just doesn't have the latency required to make this work.'

Another aspect of this technology that could be useful to high-end HPC systems is the inherent redundancy offered by a reconfigurable architecture. Using the Calient switch, traffic could be rerouted around a hardware failure to another resource in the pool, as long as there were spare resources available. However, this technology is still in development, and it is not yet clear how the switch will be reconfigured. At this stage, reconfiguring the system may need to be done manually, based on pre-determined scripts input by an operator. 'The other option is that the resource management systems in the facility can drive the reconfiguration of the optical switch,' commented Tardent. 'Either way, the reconfiguration time is going to be on the order of less than a second to reconfigure the whole thing.'

While it is still too early to tell which technology will be the most successful, it is clear that interconnects will play an increasingly important role in improving HPC application performance. Whether by developing entirely new technologies, or by increasing the intelligence of the switch, HPC users must address the performance bottlenecks that face an increasingly data-intensive industry.

'If you want to overcome performance barriers, then you need to look into the entire application, the entire framework of what you want to execute. You need to use every active component in the data centre as part of the application,' concluded Shainer.
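The pooling and failover idea described above can be illustrated in code. The sketch below is purely hypothetical – it is not Calient's control software, and all class and method names (`OpticalCircuitSwitch`, `ResourceManager`, `handle_failure`) are invented for illustration. It models the optical switch as a table of input-to-output port cross-connects, with a resource manager that allocates GPUs from a shared pool and reroutes a server's light path to a spare GPU when one fails – the kind of reconfiguration Tardent suggests a facility's resource management system could drive.

```python
class OpticalCircuitSwitch:
    """Toy model of a purely optical circuit switch: any input fibre can be
    cross-connected to any free output fibre (320 ports in Calient's concept)."""

    def __init__(self, ports=320):
        self.ports = ports
        self.cross_connects = {}  # input port -> output port

    def connect(self, in_port, out_port):
        # Each output fibre can carry light from only one input at a time.
        if out_port in self.cross_connects.values():
            raise ValueError(f"output port {out_port} already in use")
        self.cross_connects[in_port] = out_port

    def disconnect(self, in_port):
        self.cross_connects.pop(in_port, None)


class ResourceManager:
    """Hypothetical facility resource manager driving switch reconfiguration."""

    def __init__(self, switch, gpu_pool):
        self.switch = switch
        self.spare_gpus = list(gpu_pool)  # output ports of unallocated GPUs
        self.allocations = {}             # server port -> GPU port

    def allocate(self, server_port):
        # Take the next spare GPU from the pool and light up the path.
        gpu_port = self.spare_gpus.pop(0)
        self.switch.connect(server_port, gpu_port)
        self.allocations[server_port] = gpu_port
        return gpu_port

    def handle_failure(self, server_port):
        # Reroute the server's light path to a spare GPU from the pool.
        self.switch.disconnect(server_port)
        failed = self.allocations.pop(server_port)
        replacement = self.allocate(server_port)
        return failed, replacement


switch = OpticalCircuitSwitch()
rm = ResourceManager(switch, gpu_pool=[100, 101, 102])
rm.allocate(server_port=1)              # server 1 -> GPU on port 100
failed, new = rm.handle_failure(1)      # GPU 100 fails; reroute to port 101
print(failed, new)                      # prints: 100 101
```

In a real deployment the `connect` step would correspond to steering micromirrors rather than updating a dictionary, and the sub-second reconfiguration time Tardent quotes would be dominated by the mirror settling and the resource manager's decision, not the 30 ns in-path latency.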


@scwmagazine | www.scientific-computing.com

