efficient resource utilisation. The truth is that the two are in fact coupled in the traditional scientific computing model, which is constrained by finite resources: if computing resources (cloud or otherwise) are over-subscribed, science will always be delayed. It is important that computations can be packed and performed very efficiently to maximise resource utilisation, and therefore produce the most science within the same resource footprint – in less time, and with the added environmental and economic benefits.

Private cloud infrastructures, owing to their limited size, therefore have to be utilised efficiently. This can be achieved through carefully crafted use policies to which users must subscribe and respond, or through automation, using scheduling technologies that allocate VMs optimally under a traditional resource allocation model – in addition to ensuring that software is parallelised and optimised, so long as the effort required to do the latter is cost-effective and does not negatively impact time-to-science.
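To make the packing idea concrete, here is a toy sketch (not from the article) of the kind of first-fit-decreasing placement a scheduler might use to pack VM requests onto hosts; the core counts and host sizes are invented for illustration.

```python
# Toy first-fit-decreasing packer: place VM core requests onto hosts,
# largest request first. All figures are invented and stand in for a
# real scheduler's placement logic.

def place_vms(vm_cores, host_capacity, n_hosts):
    """Return a vm -> host mapping (None means no room: queue or burst)."""
    free = [host_capacity] * n_hosts          # free cores per host
    placement = {}
    # Consider the largest requests first; this tends to strand fewer cores.
    for vm, cores in sorted(enumerate(vm_cores), key=lambda x: -x[1]):
        for h in range(n_hosts):
            if free[h] >= cores:              # first host with room wins
                free[h] -= cores
                placement[vm] = h
                break
        else:
            placement[vm] = None              # would queue locally (or burst)
    return placement, free

# Eight requests packed onto three 16-core hosts: all fit, 4 cores spare.
placement, free = place_vms([8, 4, 4, 2, 12, 6, 3, 5], host_capacity=16, n_hosts=3)
print(placement, free)
```

Real schedulers juggle memory, network and data locality as well as cores, but the utilisation argument is the same: the tighter the packing, the more science per unit of infrastructure.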


The game-changer: cloud bursting

The emergence of public cloud and the ability to cloud-burst is the real game-changer. Because of its effectively ‘infinite’ pool of resources (in practice always under-utilised), public cloud allows a clear decoupling of time-to-science from efficiency. One can be somewhat less efficient in a controlled fashion (higher cost, slightly more waste) to minimise time-to-science when required – in burst, so to speak – by growing the computing estate beyond the fixed footprint of local infrastructure. This is often referred to as the hybrid cloud model: you get both the benefit of efficient infrastructure use, and the ability to go beyond it when strictly required.
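As a toy illustration of that trade-off (again invented, not from the article), a burst policy can be as simple as a threshold rule: absorb demand locally while the queue stays tolerable, and rent public-cloud capacity for the overflow when waiting would delay science.

```python
# Toy hybrid-cloud burst rule: overflow to public cloud only when the
# local queue would make jobs wait too long. All numbers are invented.

def plan_capacity(queued_core_hours, local_free_core_hours, max_wait_ratio=1.5):
    """Return a (local, burst) core-hour split for the current queue."""
    if queued_core_hours <= local_free_core_hours * max_wait_ratio:
        # The local estate can absorb the demand soon enough: stay efficient.
        return queued_core_hours, 0
    # Otherwise trade some cost for time-to-science and burst the overflow.
    return local_free_core_hours, queued_core_hours - local_free_core_hours

print(plan_capacity(1200, 1000))   # (1200, 0)    -> queue briefly, no burst
print(plan_capacity(5000, 1000))   # (1000, 4000) -> burst 4000 core-hours
```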


Another important aspect of the cloud model, and in particular of its use of virtualisation, is the ability to encapsulate workloads in a repeatable and portable manner, and to run them on multiple infrastructures. This increases the number of resources available to researchers to do their work, and also helps improve the reproducibility of scientific results – for example, by allowing the results published in a paper to be reproduced using a workflow installed in a standard virtual machine, made publicly available along with the original data set. This is strongly aligned with the EU’s Open Data policies for EU funding, and increasingly with those of national – as well as private – funding agencies, such as the Wellcome Trust with its data-sharing policy to maximise the public benefit of research.
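To sketch how that reproduction might look in practice – assuming an OpenStack-style cloud (discussed next) and the openstacksdk Python client – the published VM image and data set could simply be booted alongside each other. Every name below (cloud, image, flavour, volume) is invented, and the exact attach call varies between SDK releases.

```python
# Hedged sketch: boot a paper's published workflow VM next to its public
# data set on an OpenStack cloud. All identifiers are hypothetical.
import openstack

conn = openstack.connect(cloud="emedlab")   # credentials come from clouds.yaml

server = conn.compute.create_server(
    name="paper-repro",
    image_id=conn.compute.find_image("paper-2016-workflow").id,  # published VM image
    flavor_id=conn.compute.find_flavor("m1.large").id,
    networks=[{"uuid": conn.network.find_network("internal").id}],
)
conn.compute.wait_for_server(server)

# Attach the publicly shared data set volume (call signature is an
# assumption; it differs between openstacksdk releases).
volume = conn.block_storage.find_volume("paper-2016-dataset")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```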


OpenStack is the cloud operating system of choice at most biomedical research sites, as well as at CERN and for GridPP (particle physics), making it a natural choice for eMedLab in the context of UK and EU academia. It is also backed by one of the fastest-growing open-source communities, with contributors from across all industries and strong hardware vendor support. The OpenStack roadmap also includes new features to support interoperation with commercial public clouds and federation between multiple sites. These are strategically important for science, both because funding agencies are seeking collaboration between institutions, particularly in the EU, and because of the need to explore new research avenues through multi-disciplinary cross-breeding of ideas and access to data that would otherwise be locked behind technical or legal barriers.


eMedLab is an example of a successful research grant from the UK’s Medical Research Council, awarded precisely to promote this crossing between research domains and data sets using varied methodologies – with the goal of generating new insights that will lead to new clinical outcomes, and the aim of joining a federation of similar clouds.


Even the cloud model is in flux

To answer Scientific Computing World’s question about how profoundly the cloud will change scientific computing, it is worth noting that even the cloud model is now in flux. New technologies are coming along that could potentially up-end it, or revolutionise how it is consumed and, at the very least, how it provides computing resources to its consumers.


Technologies like containers, made popular by Docker, do away with the need to host an entire operating system in a virtual machine, with its additional (and resource-costly) hardware abstraction layers. Intel’s ‘clear containers’ go a step further, using hardware virtualisation technology to make containers even more efficient. Unikernels go yet another step further, reducing the footprint of traditional virtual machines while preserving all of their security features and keeping the cloud model intact.
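The practical difference shows up at launch time: a container starts a process under the host kernel rather than booting a guest OS. The hedged sketch below runs the same (hypothetical) analysis step natively and in a container; the image and script names are invented.

```python
# Run the same analysis step natively and inside a container (sketch).
# 'docker run' shares the host kernel, so no guest OS boots, unlike a VM.
import subprocess

cmd = ["python3", "align_reads.py", "--input", "sample.fastq"]  # hypothetical step

# Native run on the host.
subprocess.run(cmd, check=True)

# Same command in a container: start-up takes milliseconds because only
# namespaces and cgroups are set up, not a full virtual machine.
subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/data:/data",           # bind-mount the shared data set
     "biopipeline:1.0",             # hypothetical image encapsulating the workflow
     *cmd],
    check=True,
)
```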


The research infrastructure and scientific computing community is very keen to explore these avenues, and will no doubt contribute to their development, as has historically been the case.


Dr Bruno Silva is High Performance Computing Lead at The Francis Crick Institute in London





