Modular HPC goes mainstream


William Payne investigates the growing trend of using modular HPC, built on industry-standard hardware and software, to support users across a range of both existing and emerging application areas


The advent of more scalable, industry-standard and lower-cost high-performance computing (HPC) is bringing high-performance computing within the scope of new users across a range of industries and smaller organisations. Instead of being restricted to a handful of universities and government-funded research centres, modular HPC is increasingly feasible for commercial organisations. This is true for small to medium commercial entities, as well as a more diverse range of public organisations such as hospitals, colleges and research institutes. Manufacturing, engineering, architecture,


drug design, personalised medicine, finance, insurance, video analytics, data-intensive computing and market simulations are some of the applications that could drive adoption of modular HPC among new users. Industrial IoT, smart cities, smart transportation, smart buildings, and engineering and structural visualisation are some of the commercial trends that require high levels of computing performance. Over time, multiple smaller-scale commercial HPC implementations could greatly outweigh very large research installations in total value.


HPC is getting smaller

Since its inception, HPC has been about purpose-building the fastest, largest, biggest number-crunching machines. The industry's scoresheet is the TOP500, the twice-yearly
4 SCIENTIFIC COMPUTING WORLD


list of the world's fastest non-distributed supercomputers. These huge specialist machines are typically the preserve of national laboratories, which use them for high-energy physics and other scientific and engineering computing. This emphasis on the fastest, biggest


machines is increasingly misplaced. For the last few years, the opposite trend has been under way: it has become easy and affordable to implement smaller-scale high-performance systems that can be used at the SME level, or even by start-ups. This trend is taking high-performance computing out of the public national-laboratory and university environments and implanting it in the commercial world alongside enterprise systems. The proliferation of new applications means that, in many cases, these HPC systems are not even being used for scientific computing. Increasingly it is financial services firms, advertising, media and consumer goods companies using high-performance computing for complex financial transactions, video and behavioural analytics, accelerated HD animation, natural language processing, speech and media recognition, data science and deep learning. Ultimately, it even raises the prospect of


smaller on-site HPC systems becoming production systems, working full-time on the same jobs in a fault-tolerant setting aimed at full availability in mission-critical situations, such as hospitals, or edge



