HIGH PERFORMANCE COMPUTING


compute platform where the HR data can sit idle for most of the month and then, for the five days when it has to work hard, burst out to the rest of the infrastructure, helping everything run more efficiently and quickly at that crucial time of the month. Providers of research computing infrastructure have been keen to take advantage of the flexibility and security OpenStack provides, and projects such as eMedLab and CLIMB (Cloud Infrastructure for Microbial Bioinformatics) are two very successful examples, showing that private cloud has an established use at many universities.

But what about public cloud? Usually, the groups who provide research computing infrastructure at universities lead the way in the uptake of new technology, but because the public cloud has been so widely available, everyone has been trying cloud bursting at the same time. Large enterprise organisations have moved vast swathes of their infrastructure out to the public cloud because it is much cheaper than holding on to a data centre and its staff, and it means they can grow and shrink that infrastructure as demand requires. At universities, core IT is leading the way with public cloud bursting, while research computing is on a par with it or slightly behind in its uptake.


Does it come at a price?
With cloud bursting there can be attractive initial rates to run the servers, but on top of that there are additional costs which, unless you are experienced at running IT or cloud infrastructure, might not be noticed at the outset. Data egress is a good example: most providers will say it is free to put your data into the cloud, but there will then be a monthly cost to store the data and a cost to access it. So, essentially, you are paying for the bandwidth when you pull your data back out of the cloud. A scientific experiment that involves huge amounts of data means continuously accessing terabytes of data to run analyses, and you are charged every time you retrieve that data. In some instances, even running a search through your file structure counts as data egress, because you are still accessing the data. Unless all the potential scenarios have been considered, these sorts of costs can sneak up on you, and it can become very expensive, very quickly.
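To make those charges concrete, here is a minimal back-of-the-envelope sketch in Python; the per-gigabyte prices are hypothetical placeholders, not any provider's published rates.

    # Back-of-the-envelope cloud cost sketch; prices are hypothetical placeholders.
    STORAGE_PER_GB_MONTH = 0.02   # assumed monthly storage price per GB
    EGRESS_PER_GB = 0.09          # assumed price per GB pulled back out of the cloud

    def monthly_cost(stored_tb, retrieved_tb_per_run, runs_per_month):
        """Storage cost plus egress cost for repeated analysis runs in one month."""
        stored_gb = stored_tb * 1024
        egress_gb = retrieved_tb_per_run * 1024 * runs_per_month
        return stored_gb * STORAGE_PER_GB_MONTH + egress_gb * EGRESS_PER_GB

    # Example: 50 TB held in the cloud, 10 TB retrieved for each of 20 analysis runs.
    print(f"Estimated monthly bill: ${monthly_cost(50, 10, 20):,.0f}")

Even with modest per-gigabyte prices, the egress charges in this toy example dwarf the storage charge, which is exactly the pattern that catches people out.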


This has been recognised by both public cloud providers and the UK's provider of digital solutions for education and research, JISC. There are ongoing efforts to provide special pricing agreements for universities, waivers for certain charges, and even large amounts of credit to help show the utility of using public cloud services to enhance and expand the research capabilities of universities.

This leads us to the real argument for the public cloud: you no longer have to build a system big enough for your largest workload. Black Friday is a good example; you don't want to have to build a server farm big enough for Black Friday. You build a server farm big enough for your 'normal Wednesday' and burst out to the public cloud to deal with the additional workload. If there is no such thing as a 'normal Wednesday' for your organisation (such as a consultancy with a very peaky workload), it makes even more sense for the whole workload to be in the cloud, so that you only pay for what you consume.
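A minimal sketch of that sizing rule, with made-up capacity figures: own enough cores for a typical day and treat anything above that as burst capacity rented from the public cloud.

    # Sketch of the 'normal Wednesday' rule; capacity figures are illustrative only.
    ON_PREM_CORES = 2000  # assumed on-premises capacity, sized for a typical weekday

    def split_workload(demand_cores):
        """Return (cores served on-premises, cores burst to the public cloud)."""
        on_prem = min(demand_cores, ON_PREM_CORES)
        burst = max(0, demand_cores - ON_PREM_CORES)
        return on_prem, burst

    for day, demand in [("normal Wednesday", 1800), ("Black Friday", 12000)]:
        local, cloud = split_workload(demand)
        print(f"{day}: {local} cores on-premises, {cloud} cores burst to the cloud")

On the quiet day nothing bursts; on the peak day the on-premises estate stays the same size and the extra capacity is rented only for as long as it is needed.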


Bursting HPC platforms
Typically, HPC services or systems are bought with research council funding, and the best HPC system available is bought at the time. But 18 months later its utilisation is close to 100 per cent, its compute capabilities are ageing, and there is no more money available for a new system. All is not lost, however, as it is becoming cost-effective for research computing departments at universities to access cutting-edge technologies, or to burst beyond their current capacity, by leveraging the public cloud. Researchers continue to use the on-premises cluster, and that cluster can burst out to the public cloud for additional resources or when priority jobs are required. Through that single point of entry, researchers can also take up the non-HPC aspects of the cloud, such as Hadoop clusters or AI workloads, without needing the research computing team to install a whole new set of software or hardware. Another key consideration is how to manage the billing aspect of bursting into the cloud, which could become very expensive if not monitored closely. There are specific toolsets that have been designed to help with billing control, and they are continuing to be developed to meet the needs of universities.
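That billing control reduces to a simple guard: before a job is released to the cloud, compare its estimated cost with what remains of the project's cloud budget. The sketch below is a hypothetical illustration of that check, not one of the toolsets referred to above.

    # Hypothetical budget guard for cloud bursting: a job is only released to the
    # public cloud if its estimated cost still fits within the project's budget.
    class CloudBudget:
        def __init__(self, limit):
            self.limit = limit   # total cloud spend allowed for the project
            self.spent = 0.0

        def try_burst(self, core_hours, price_per_core_hour):
            cost = core_hours * price_per_core_hour
            if self.spent + cost > self.limit:
                return f"held: ${cost:,.0f} would exceed the cloud budget, queue on-premises"
            self.spent += cost
            return f"burst to cloud: estimated cost ${cost:,.0f}"

    budget = CloudBudget(limit=5000.0)
    print(budget.try_burst(core_hours=10000, price_per_core_hour=0.05))   # released ($500)
    print(budget.try_burst(core_hours=200000, price_per_core_hour=0.05))  # held ($10,000)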


People power
Systems administrators who would previously have run the university's HPC service now have the ability to provide a complete research computing platform, and they gain greater visibility of all their researchers' needs, so they can continuously enhance the design of the research computing service and help researchers carry out even better work. This kind of service is very attractive for universities: it removes the need for researchers to become 'pseudo-IT systems administrators' learning how to run a server under their desk, and lets them focus entirely on their research.

As part of their 10- to 20-year roadmaps, some universities are considering whether they will still be buying an on-premises cluster in 10 years' time, or whether they might instead purchase a public cloud version of an HPC cluster. There will need to be a culture shift in both universities and funding bodies for HPC in the cloud to be fully accepted and, most importantly, the cost of using the public cloud will need to be driven down further for wider adoption, but the trends point to that not being too far off.

