
HIGH PERFORMANCE COMPUTING


employed in parallel file systems such as GPFS, where a server can have different hosts sharing logical block devices. If one host dies, another can continue to update the data because it also sees the same logical block device. Many logical block devices, split across any number of hosts, make up a typical parallel file system of the kind attached to most HPC systems today.
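To make the shared-block-device idea concrete, the toy sketch below (in Python, with invented names) models two hosts attached to the same logical block device, so that one can keep updating blocks after the other fails. The 'device' is just an in-memory list; real parallel file systems such as GPFS add distributed locking and recovery protocols that this sketch leaves out.

BLOCK_SIZE = 4096

class LogicalBlockDevice:
    """A shared device that any attached host may read or write."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks

    def write(self, index, data):
        # Pad or truncate to one fixed-size block.
        self.blocks[index] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]

    def read(self, index):
        return self.blocks[index]

class Host:
    def __init__(self, name, device):
        self.name, self.device, self.alive = name, device, True

    def update(self, index, data):
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        self.device.write(index, data)

# Both hosts see the same logical block device.
shared = LogicalBlockDevice(num_blocks=1024)
host_a, host_b = Host("host-a", shared), Host("host-b", shared)

host_a.update(0, b"metadata v1")
host_a.alive = False              # host-a dies...
host_b.update(0, b"metadata v2")  # ...host-b carries on with the same blocks
print(shared.read(0)[:11])        # b'metadata v2'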


Goldenhar noted that, as with a car, most supercomputer users just want results rather than knowledge of the inner workings of the system. 'As an HPC user you are the driver and you just want to drive the car. You use the tools available to you, such as the steering wheel and the gas pedal, but you just use it.' The idea behind this technology is not only to simplify access to data for users, but also to create a distributed high-performance storage system that can work with a completely standardised rack of servers with NVMe drives. 'We are trying to make this logical block device that performs as good as or better than proprietary hardware all-flash arrays. If you have that and you have RDMA, Mellanox InfiniBand or even 100G Ethernet, then you can just sprinkle our software into the mix and you get a virtual all-flash array that can be used underneath Lustre, GPFS, BeeGFS, take your pick,' stated Goldenhar. 'We take this pool of NVMe drives sprinkled throughout the cluster or densely packed in the top of a rack server. We can add our software and make those drives accessible not just as individual disk drives; you can make it appear to be one very big and fast drive,' commented Goldenhar.
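As a rough illustration of the 'one very big and fast drive' idea, the sketch below stripes logical blocks round-robin across a pool of drives so that callers see a single block address space. It is not Excelero's software or any real NVMe-over-fabrics stack; the names are invented and plain dictionaries stand in for the remote drives.

BLOCK_SIZE = 4096

class PooledLogicalDrive:
    """Presents many drives as one logical block address space."""
    def __init__(self, drives):
        self.drives = drives              # each drive: dict of block -> data

    def _locate(self, logical_block):
        # Round-robin striping: pick the drive, then the block on that drive.
        drive = self.drives[logical_block % len(self.drives)]
        return drive, logical_block // len(self.drives)

    def write(self, logical_block, data):
        drive, local_block = self._locate(logical_block)
        drive[local_block] = data

    def read(self, logical_block):
        drive, local_block = self._locate(logical_block)
        return drive.get(local_block, bytes(BLOCK_SIZE))

# Four "NVMe drives" scattered around the cluster, pooled into one namespace.
pool = PooledLogicalDrive(drives=[{} for _ in range(4)])
for i in range(8):
    pool.write(i, f"block {i}".encode())
print(pool.read(5))                       # b'block 5', stored on drive 5 % 4 == 1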


The future of storage
However, there is more to this technology than just creating a single namespace: the system is also physically disaggregated, which can be used to provide redundancy. 'We let you use and pool these resources, but let you retain those performance characteristics. We get full performance for bandwidth and less than one per cent degradation of latency performance, so we only add about five microseconds of overhead to what a drive can do locally. You basically access NVMe at the same speed that you can access it at the host, but it's not just speed, because you can add redundancy,' added Goldenhar.

“You can put a file system on block, you can put object storage on top of block through various different measures – it is really the central common denominator”
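The quote above also mentions adding redundancy on top of the pooled block layer. The toy sketch below mirrors every write across two drives so reads can still be served when one drive fails; the class names and failure flag are hypothetical, and rebuilds, consistency checks and failure detection are all left out.

class Drive:
    def __init__(self, name):
        self.name, self.blocks, self.failed = name, {}, False

    def write(self, block, data):
        if not self.failed:
            self.blocks[block] = data

    def read(self, block):
        if self.failed:
            raise IOError(f"{self.name} is offline")
        return self.blocks[block]

class MirroredVolume:
    """Writes go to both drives; reads fall back to the survivor."""
    def __init__(self, primary, secondary):
        self.drives = (primary, secondary)

    def write(self, block, data):
        for drive in self.drives:
            drive.write(block, data)

    def read(self, block):
        for drive in self.drives:
            try:
                return drive.read(block)
            except IOError:
                continue
        raise IOError("no replica available")

vol = MirroredVolume(Drive("nvme0"), Drive("nvme1"))
vol.write(7, b"payload")
vol.drives[0].failed = True   # lose one drive...
print(vol.read(7))            # ...data is still served: b'payload'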


For Goldenhar and his colleagues at Excelero, the future lies in software-defined computing architecture. The benefits for storage are clear: performance and redundancy on cheap commodity hardware, all managed by a single software layer to create a physically disaggregated storage architecture. 'I don't think there is anyone arguing against moving towards software and away from proprietary hardware arrays. That was part of our strategy: to focus on the emergent market rather than to steal some of the shrinking market from the big traditional players,' concluded Goldenhar.

