HIGH PERFORMANCE COMPUTING


Computing tools help unlock new medicine


HPC AND AI COMPUTING ARE HELPING TO DRIVE RESEARCH FOR NEW MEDICINE, LEADING TO BETTER PATIENT OUTCOMES, WRITES ROBERT ROE


Effective utilisation of computing tools is critical to both AI and HPC research, with recent projects in both fields highlighting the scientific benefit of carefully managing resources. HPC and AI researchers are still finding surprising ways to innovate on existing research tools and practices through the use of computing technology. Research computing is making efficient use of HPC cycles to accelerate research into Covid-19, while new approaches to artificial intelligence (AI) are enabling emerging fields of research that would not otherwise be possible.

Recently, a distributed computing project, Folding@home (F@h), has been gaining a huge amount of support and publicity for its research into Covid-19. A tweet from the project in April highlighted that the combined computing power of the project totalled more than one exaflop – larger than the most powerful supercomputer in the world.

Chris Coates, technological innovation lead at OCF, has been working with his colleagues to develop a set of instructions for the Slurm Workload Manager. These instructions allow HPC centre operators to donate unused computing power to the F@h project. Coates explained the thought process behind setting up the F@h Slurm instructions for OCF customers: 'This started out as a standard stress test to commission a system. At the time we were doing a performance run for HPL. Once we do our performance runs you would normally leave that running at 100 per cent utilisation for 72 hours.' While these tests must be done to verify the stability of a newly installed system, there is a wasteful element to using these computing cycles for little scientific benefit. Coates and his colleagues were exploring ways to use these resources more effectively: 'We took that action upon ourselves to use this time and resources for good. Even if it is not part of a real project, what can we use those cycles for? F@h came up because a lot of us have been involved with the project in one way or another.'


F@h is a distributed computing project that simulates protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. The project enables people to contribute time on their own personal computers to help run simulations. Insights from this data are helping scientists to better understand biology, and providing new opportunities for developing therapeutics.

The timing was fortunate, as the team behind F@h began switching their focus to Covid-19 around the same time that OCF was looking at how best to use these computing cycles. 'It was an easy choice at that point,' noted Coates. OCF set out to write instructions for its customers, or anyone with an x86 Slurm cluster – although OCF was also using a Kubernetes instance. The instructions allow these research centres to donate any spare capacity to assist the F@h project in its efforts to simulate the protein dynamics of Covid-19.
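The article does not reproduce OCF's instructions, but a donation job of this kind can be sketched as a low-priority Slurm batch script that runs the Folding@home client on an otherwise-idle node until the scheduler preempts it for real work. The partition name, install path, and user/team settings below are illustrative assumptions, not OCF's actual configuration.

```shell
#!/bin/bash
# Hypothetical sketch of a Slurm job donating idle cycles to Folding@home.
# Partition name, client path, and user/team values are placeholders.
#SBATCH --job-name=fah-donate
#SBATCH --partition=idle      # low-priority partition that real jobs can preempt
#SBATCH --nodes=1
#SBATCH --exclusive           # claim the whole otherwise-idle node
#SBATCH --time=72:00:00
#SBATCH --requeue             # return to the queue if preempted

# Run the F@h client in the foreground; work units are checkpointed,
# so preemption only loses progress since the last checkpoint.
/opt/fah/FAHClient --user=anonymous --team=0 --smp=true --power=full
```

Pairing the donation partition with Slurm's preemption settings (so that any real job immediately displaces the F@h work) is what makes this safe to leave running between scheduled workloads.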


“This started out as a standard stress test to commission a system. At the time we were doing a performance run for HPL”


Summer 2020 Scientific Computing World 7




Image credit: everything possible/Shutterstock.com

