HIGH PERFORMANCE COMPUTING


‘What we have aimed to do with our Cloud Replication Service is to actually build – from the ground up – a replica of your nodes and your software stack and provide a way for your live data to be available to you in the private cloud’


The service is not just for HPC users, however, as it can also be used to recreate internal infrastructure so researchers can evaluate novel technologies. Pancholi gave the example of a recent customer who had become responsible not just for HPC resources, but for a whole range of research computing services. ‘They have got people looking into novel areas like IoT and AI, and the size of your team to run those services means that you have to be quite clear about where you put those resources. For cutting-edge research you often need to put a disproportionate amount of resources into setting it up and keeping it running for a small number of people,’ said Pancholi. ‘If you leverage a public cloud resource for that, it makes it easy to investigate and see if it is worthwhile to invest that time and money in the underlying infrastructure, without having to move people away from the main services that everyone else is relying on.’

However, OCF’s Cloud Replication Service does more than just provide users with cloud availability, as it has been designed




to provide users with a replica of their own internal cluster. In the past this has been prohibitively expensive: without doubling up on servers, it is not possible to have a cluster ready to take over when there is a major issue.

Pancholi states: ‘We have seen a trend towards HPC becoming more and more accepted as a critical service for research institutions. From a business perspective we have also seen changes in how these institutions are run. They are moving away from a home-grown IT directive to more commercially aware CIO types. The first thing they want to do is make sure they have disaster recovery coverage across the organisation for their key services.’

OCF aimed to come up with a way to provide the reassurance of having a cluster available for disaster recovery without the huge costs. This meant keeping the software and management infrastructure available 24 hours a day, so that users can quickly provision the nodes to replicate their own internal HPC cluster.

‘The work that we have been doing with public cloud providers led us to believe that we could start to do something that is a proper replica of your cluster in the cloud,’ said Pancholi. ‘There have been attempts to fill this void before, and predominantly people have tried to host software in the cloud and say, well, you do mainly Ansys stuff on your HPC cluster, so you can come and do Ansys in the cloud. Now my personal experience of running an HPC cluster for a university helps me understand that there is a big difference between running a piece of software on two different infrastructures. There are so many different parameters that can change things, but actually what you need is something that is installed as close as possible to the original system.’

‘What we have aimed to do with our Cloud Replication Service is to actually build – from the ground up – a replica of your nodes and your software stack, and provide a way for your live data to be available to you in the private cloud,’ said Pancholi. ‘We think this is a very novel approach, and the cost implication of having that on standby instead of buying a second cluster is something that we have found to be a very attractive proposition when we have spoken to people.’


Bare-metal future

While some companies develop specialist services for disaster recovery and for testing new software, others are looking to make the cloud an avenue for traditional HPC workloads. Several companies now offer so-called ‘bare metal’ cloud servers or platforms, designed to deliver performance comparable to an in-house HPC cluster.


December 2018/January 2019 Scientific Computing World 5



