
The fact that Immunity Bio can manage Ceph without relying on an outside vendor was also attractive. Even though the cloud is a popular choice for storage, Immunity Bio avoided that option, because it believes cloud pricing isn't scalable. Cloud vendor lock-in is also an issue, because it's notoriously difficult to move a petabyte of data between cloud vendors. With Ceph, Immunity Bio has achieved cost-effective storage, better performance and reliability, and eliminated vendor lock-in, allowing it to pursue its research unhindered.

Human Brain Project

Re-creating the intricate, complex processes of the human brain using technology is, by any definition, a massive undertaking. The Human Brain Project (HBP) is a ten-year European Union flagship research project, based on exascale supercomputers, that aims to advance knowledge in neuroscience, computing and brain-related medicine⁴. One of the goals of the HBP is to provide researchers worldwide with tools and mathematical models for sharing and analysing data to understand how the brain works, in order to emulate its computational capabilities⁵. The scale of this project is hard to fathom: the human brain is so complex that a normal computer can't simulate even a fraction of it – in fact, just one of the supercomputers used in the HBP is as powerful as 350,000 standard computers. A significant portion of the HBP uses massively parallel applications in neuro-simulation to interpret data. The advanced requirements of the HBP are far beyond current technological capabilities, and will undoubtedly drive innovation in the high-performance computing industry.

The HBP utilises a next-generation storage system based on Ceph, exploiting complex memory hierarchies and enabling next-generation mixed-workload execution⁶. With Ceph, the HBP eliminates vendor lock-in, while realising 90 per cent read efficiency and demonstrating outstanding scalability as object sizes increase.
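To make the object-storage model concrete: Ceph exposes its native object store through librados, and a few lines of Python are enough to write an object and time whole-object reads. The sketch below is an illustration only, not the HBP's actual benchmark – the pool name 'neuro-sim', the object size and the read count are all assumptions.

```python
import time

import rados  # librados Python bindings (python3-rados), shipped with Ceph

# Connect using a local ceph.conf; 'neuro-sim' is a hypothetical pool name.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('neuro-sim')

# Write a single 4 MiB object, then time repeated whole-object reads.
payload = b'\0' * (4 * 1024 * 1024)
ioctx.write_full('probe-0', payload)

reads = 100
start = time.time()
for _ in range(reads):
    ioctx.read('probe-0', length=len(payload))
elapsed = time.time() - start
print(f'{reads * len(payload) / elapsed / 1e6:.0f} MB/s effective read rate')

ioctx.close()
cluster.shutdown()
```

Varying the object size in a loop of this kind is one simple way to observe the scaling behaviour described above.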


MeerKat radio telescope

Imagine an array of telescopes, collecting vast amounts of information about outer space, located in some of the world's most remote and harshest locations. The MeerKat radio telescope is a 64-antenna array radio telescope, built on the Square Kilometre Array (SKA) site. The SKA project is an international effort to develop the world's largest radio telescope, with a square kilometre of collecting area. Ceph is used to store and retrieve huge volumes of data, including a 20-petabyte object-based storage system.

One of MeerKat's unique challenges is the isolated location of the telescope arrays: one is located deep in the South African desert and the other in the Australian outback. Cost is a key factor, because the project requires a massive amount of storage hardware. Ceph is an excellent choice because it doesn't require highly performant, expensive hardware for optimal performance. Resiliency is also critical, because the telescopes are located in remote environments, which makes it difficult to quickly procure new hardware if a component fails. If a node does fail, Ceph's self-healing functionality quickly rebuilds the failed node's data from secondary copies held on other nodes in the cluster, thereby ensuring data redundancy and higher data availability. As a result, MeerKat has a highly resilient, scalable storage solution that maximises efficiency while minimising costs.

Replicating data from each of MeerKat's far-flung locations to a centralised data store is also critical. Using Ceph, the data for each telescope array is replicated to a centralised data store in Cambridge, England, that's part of the Square Kilometre Array project. This allows all the MeerKat data to be analysed in its entirety, while ensuring availability.
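The replica count behind this self-healing is an ordinary administrative setting. As a rough sketch – the pool name 'telescope-data' is hypothetical, and commands are issued through the librados Python bindings rather than the ceph CLI – checking cluster health and setting the replication factor might look like this:

```python
import json

import rados  # librados Python bindings (python3-rados)

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Equivalent of `ceph health`: confirm the cluster is serving I/O.
ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'health', 'format': 'json'}), b'')
print(json.loads(out)['status'])  # e.g. 'HEALTH_OK'

# Keep three replicas of everything in the (hypothetical) pool
# 'telescope-data', so the data on a failed node can be rebuilt
# from the copies held on other nodes.
ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'osd pool set', 'pool': 'telescope-data',
                'var': 'size', 'val': '3'}), b'')
print(errs or 'replica count set to 3')

cluster.shutdown()
```

With three replicas, Ceph re-creates the missing copies automatically whenever a node drops out, which is exactly the behaviour that matters at sites where replacement hardware is weeks away.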


Ceph natively supports hybrid cloud environments, which makes it easy for remote researchers – who might be located anywhere in the world – to upload their data in different storage formats (object, block or file).
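For object data, the usual route into a Ceph cluster from anywhere in the world is the RADOS Gateway (RGW), which speaks the S3 protocol, so standard S3 tooling works unchanged. The snippet below is a generic illustration rather than MeerKat's actual ingest path: the endpoint, credentials, bucket and file names are all placeholders.

```python
import boto3

# Any Ceph cluster running the RADOS Gateway exposes an S3-compatible
# API; the endpoint and credentials here are hypothetical.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.org:7480',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='observations')
# Upload a capture file exactly as if RGW were Amazon S3.
s3.upload_file('scan-001.h5', 'observations', 'meerkat/scan-001.h5')
```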











Ceph’s inherent hardware neutrality One of the common issues for each of the research projects highlighted in this article is storage hardware. Ceph’s inherent hardware neutrality is a great benefit to researchers, as they aren’t limited to proprietary hardware solutions that are oſten expensive and inflexible. Ceph is so versatile that it can be run on


nearly anything: a server, a Raspberry Pi, even a toaster (assuming it runs Linux). For research purposes, scientists can choose to run Ceph on a black box server, or they could use HyperDrive, a purpose-built storage appliance, built by SoſtIron, that’s optimised specifically for Ceph. Research institutions, like the University of Minnesota’s Supercomputing Institute and Immunity Bio, are realising the added benefits of using a custom-designed, optimised Ceph storage appliance such as HyperDrive, to power some of the most exciting research projects on Earth. To learn more about SoſtIron, Ceph, or HyperDrive, visit www.soſtiron.com. n


References

1. https://www.slideshare.net/noggin143/openstack-at-cern-a-5-year-perspective
2. https://www.datacenterdynamics.com/analysis/cern-secret-sauce/
3. https://superuser.openstack.org/articles/at-cern-storage-is-the-key-to-the-universe/
4. https://www.humanbrainproject.eu/documents/10180/538356/FPA++Annex+1+Part+B/41c4da2e-0e69-4295-8e98-3484677d661f
5. https://www.stackhpc.com/ceph-on-the-brain-a-year-with-the-human-brain-project.html
6. https://insidehpc.com/2018/04/ceph-brain-storage-data-movement-supporting-human-brain-project/



