HIGH PERFORMANCE COMPUTING


make it more usable in these multi-use case environments. Things like iRODS, and technologies like StrongLINK, that sit in front of the object storage layer,' said Dean. 'We are seeing S3 [Amazon S3] being adopted as a kind of pseudo-standard as well. I think those things coming together are making it a little more general purpose, which allows the technology to be more widely adopted within our base.'

There are a number of buzzwords floating around the HPC storage market; two of the most prominent are software-defined storage (SDS) and flash-based storage. SDS focuses on the use of commodity or cheaper storage media accelerated by software, which lets a vendor differentiate through its own reliability or performance features built into the software layer. Flash storage refers to media built on non-volatile flash memory rather than spinning disks.
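The practical force of Dean's 'pseudo-standard' point is that once every object store speaks the S3 protocol, the same client code can target AWS itself or an on-premise S3-compatible system simply by changing the endpoint. A minimal sketch using boto3; the endpoint URL, bucket name, key and credentials below are all hypothetical placeholders:

```python
# Sketch: the S3 API as a de facto standard interface. Pointing the
# client at a different endpoint_url is all it takes to swap between
# AWS S3 and an on-premise S3-compatible object store.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.org",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",                  # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Write and read back a small object exactly as against AWS S3 itself.
s3.put_object(Bucket="hpc-results", Key="run-001/output.dat", Body=b"example payload")
obj = s3.get_object(Bucket="hpc-results", Key="run-001/output.dat")
print(obj["Body"].read())
```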


Flash memory: a potential future for HPC

'You could argue that we have been doing software-defined storage for years, because we have been working with file systems like Spectrum Scale for a number of years. Flash is coming into play; we are starting to see that having an effect on our customers,' said Dean. The move towards flash is steadily gaining pace as users move away from large file systems in favour of something that can deliver increased performance for highly parallel applications, which are becoming increasingly common in HPC. 'Now I think that is shifting, where flash is getting faster and the capacity is getting much larger, at the same time as the price of the technology falls,' said Dean. 'It is now at the point where it is being considered by a number of institutions. Exactly when the tide is going to turn, I don't know, but it is going to happen at some point. There will still be a place for disks for the foreseeable future, but it looks like we will be building at least the smaller HPC systems using pure flash.'


This move towards flash is a sentiment shared by McMullan, who noted that data telemetry from Pure Storage flash arrays points towards HPC workloads becoming increasingly random in their I/O. 'It used to be large streaming sequential workloads, but now we see much more random I/O, particularly with computer vision, machine learning, training, design and rendering for the special effects market. Everybody is much more random, and the dataset characteristics have changed. This plays nicely into the strengths of flash, because that's where disk starts to fall down,' said McMullan.

McMullan explained that traditional HPC storage systems from two to three years ago would have been disk-based, with hundreds or thousands of spindles each delivering around 100 IOPS, which together provide huge sustained bandwidth. 'That is great for those large sequential workloads, whether it is data warehouse table scans or analysing a large sequential dataset, but the workloads have changed,' said McMullan. 'The use cases change the characteristics of an I/O stream. The more nodes there are, the more parallelism, which means those nodes are looking at different parts of the dataset. As the grid expands, the datasets change, and we see evidence of that quite clearly. Disks don't do random I/O well at all, whereas flash devices do not care, as there is no latency penalty across the whole address space,' added McMullan.
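The arithmetic behind McMullan's point: at roughly 100 random IOPS per 7,200rpm spindle, a thousand-spindle array tops out near 100,000 random IOPS, a figure a single NVMe flash device can exceed. The effect is easy to see for yourself with an illustrative microbenchmark (not from the article); the file path and sizes below are arbitrary, and the OS page cache should be dropped between runs for honest numbers:

```python
# Sketch: compare sequential vs random 4 KiB reads on one file. On
# spinning disk the random pass is dramatically slower because every
# read pays a seek; on flash the two passes take roughly the same time.
import os
import random
import time

PATH = "testfile.bin"   # hypothetical test file, ideally a few GiB
BLOCK = 4096
COUNT = 10_000

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

def reads_per_second(offsets):
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)          # positioned read, no seek syscall
    return len(offsets) / (time.perf_counter() - start)

sequential = [i * BLOCK for i in range(COUNT)]
scattered = [random.randrange(blocks) * BLOCK for _ in range(COUNT)]

print(f"sequential: {reads_per_second(sequential):,.0f} IOPS")
print(f"random:     {reads_per_second(scattered):,.0f} IOPS")
os.close(fd)
```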


The rise of cloud

While HPC is performance-focused, there is also a huge effort to increase the efficiency of resources. This is true of the hardware as we push towards exascale, but it is also true of the way resources are consumed. Cloud storage offers high availability of data and can be particularly effective in certain use cases, such as HPC centres spread across multiple sites. 'We have the technologies available to be able to do this, such as Spectrum Scale. We are able to look at multi-site solutions and whether the cloud could be a backup target, or another tier used for the lowest performance/cost – in the same way we have used tape in the past. It could also be used as a replica target; there are all sorts of opportunities around the way we can use cloud,' stated OCF's Dean.

'If someone has got two data centres already, and they have power and rack space available, then it might not make sense to replicate data into the cloud, but if they do not have that infrastructure, then it could make a compelling argument. Again, it is very use case dependent, and our job, as an integrator, is to keep all these things in our kit bag, continue building use cases and proving POCs, so that when a customer comes to us with a requirement, we are ready to meet it,' said Dean.

OCF and Amazon Web Services recently announced OCF as an AWS partner, so it is likely that OCF will push this provider as an option for HPC users that want to take advantage of cloud computing in the future. 'The interesting and fast-developing side of things for us is definitely how the cloud can be part of our solutions and customer strategy for the storage space. The cloud is definitely one to watch. It will be very interesting to see how that develops,' concluded Dean.
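To make Dean's 'cloud as the lowest tier, the way we once used tape' idea concrete, here is a toy sketch of sweeping cold files to an object bucket. Everything here is hypothetical (bucket name, scratch path, age threshold); a real deployment would use a policy engine such as iRODS rules or an HSM layer rather than a hand-rolled script:

```python
# Sketch: push files untouched for more than 90 days to object storage,
# treating the cloud bucket as the cheapest, slowest tier.
import os
import time
import boto3

s3 = boto3.client("s3")
COLD_AGE = 90 * 24 * 3600        # 90 days, in seconds (arbitrary policy)
SCRATCH = "/scratch/project"     # hypothetical file system to sweep

now = time.time()
for dirpath, _, filenames in os.walk(SCRATCH):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if now - os.stat(path).st_atime > COLD_AGE:   # last access time
            key = os.path.relpath(path, SCRATCH)
            s3.upload_file(path, "cold-tier-backup", key)  # hypothetical bucket
```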


The flash future

While McMullan is focused on an all-flash future for HPC storage, this may only cover the next few years, as new technologies are on the way to drive performance even higher. 'HPC is leading edge in terms of the technological envelope, but the message from our customer base is very much "all flash" – not just because of performance, but also the efficiency and the cost efficiency. We see the industry moving en masse to all-flash technology for now, but then you have storage-class memory and persistent memory coming up as the next wave of media beyond that. This provides another exciting envelope that everyone will push into,' added McMullan.

In his opinion, the industry's drive towards flash can be seen in research from Seagate and Western Digital highlighting that demand for drives is coming down, while demand for video archive drives is going up. 'People are keeping their cold data on disk, almost as a replacement for tape, but their primary dataset is moving towards NAND,' concluded McMullan.

