HIGH PERFORMANCE COMPUTING


 Chassis of the Graphcore IPU


even though double-precision calculation performance is 20 times greater than the K computer.’ The Fugaku technology co-developed


by RIKEN and Fujitsu has already been used in a prototype system that currently sits at the number one position in the latest Green500 list, which measures the energy efficiency of HPC systems. Shimizu commented that the technology underpinning this system includes optimised macros and logic design, and the use of SRAM for low power consumption: ‘The silicon technology, which uses a 7nm FinFET and direct water cooling, is also very effective.’ Shimizu also discussed the integration of the interconnect on the chip as another innovation developed by Fujitsu and RIKEN: ‘High-density integration is not only for the performance but also for power efficiency. High calculation efficiency and performance scalability in the execution of HPL, the benchmark program for TOP500/Green500, also contributed to the result. They are delivered by our optimised mathematics libraries and MPI for the application performance.’
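The Green500 ranking mentioned above is based on a simple ratio: sustained HPL performance (Rmax) divided by the power drawn while running the benchmark. A minimal sketch of that calculation (the figures used below are illustrative round numbers, not the prototype's measured values):

```python
def green500_efficiency(rmax_gflops: float, power_watts: float) -> float:
    """Green500-style energy efficiency in GFLOPS per watt.

    rmax_gflops: sustained HPL performance (Rmax), in GFLOPS
    power_watts: average system power during the HPL run, in watts
    """
    return rmax_gflops / power_watts


# Illustrative figures only (hypothetical, not a real submission):
# a 2 PFLOPS Rmax system drawing 120 kW during the run.
eff = green500_efficiency(rmax_gflops=2_000_000, power_watts=120_000)
print(f"{eff:.2f} GFLOPS/W")  # prints "16.67 GFLOPS/W"
```

The same ratio explains why the design choices Shimizu lists (SRAM usage, 7nm FinFET, direct water cooling, on-chip interconnect) all feed into the ranking: they either raise the numerator or lower the denominator.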


Open Compute
Semiconductor company Tachyum recently announced its participation as a Community Member in the Open Compute Project (OCP), an open consortium




‘The key point was that interconnect technology such as PCI-Express had become fast enough that you could offload work from a CPU to another processor without it being such a huge problem’


aiming to design and enable the delivery of the most efficient server, storage and datacentre hardware for scalable computing. Tachyum’s Prodigy Universal Processor chips aim to unlock performance, power efficiency and cost advantages to solve the most complex problems in big data analytics, deep learning, mobile and large-scale computing. By joining OCP, Tachyum is able to contribute its expertise to helping redesign existing technology to efficiently support the growing demands of the modern datacentre. ‘Consortiums like OCP are an ideal


venue for pursuing common-ground technical guidelines that ensure interoperability and the advancement of IT infrastructure components,’ said Dr Radoslav Danilak, Tachyum founder and CEO. ‘OCP is being adopted not


only by hyperscale companies, but now also by telecommunication companies, high-performance computing (HPC) customers, companies operating large-scale artificial intelligence (AI) systems, and enterprise companies operating a private cloud.’


The Open Compute Project Foundation (OCP) was initiated in 2011 with a mission to apply the benefits of open source and collaboration to hardware that can rapidly increase the pace of innovation in the datacentre’s networking equipment, general-purpose and GPU servers, storage devices and appliances, and scalable rack designs.


AI-driven designs
As noted in the energy efficiency feature in this magazine, the growth in AI and ML applications is driving new workloads and applications into the HPC space. Whether replacing traditional HPC workflows with AI or enabling new sets of users with non-traditional HPC applications, HPC users are increasingly looking to AI and ML to enhance their computing capabilities. One company previously known in the


HPC space for offloading and acceleration of HPC applications, Graphcore, has been developing an AI-focused offload engine for several years and is now at the point of pushing out the technology for real application workloads. The company has been partnered with Microsoft since 2017. In that


February/March 2020 Scientific Computing World





