HPC YEARBOOK 2021/22


Comparison with a GPU showed that the IPU provides two times higher throughput, making it possible to use BERT in an interactive setting where scalability is a priority. There are also several examples with EfficientNet-B0 and Markov Chain Monte Carlo (MCMC) simulations. Matt Fyles, SVP for software at


Graphcore, explains that while the first generation proved the use case for the technology and got it into the hands of developers, the second generation provides a boost in performance and scalability. ‘The first generation of IPU products was really to prove the IPU as a computing technology that could be applied to problems in the machine learning (ML) space primarily, but also as a general programming platform,’ said Fyles. ‘We started out with a view that


we are making a general-purpose compute accelerator. Our initial focus is on ML and that is where a lot of the new capabilities that we have added can be deployed. But IPU is also equally applicable to problems in scientific computing and HPC. And while that was secondary to our plans to begin with, it was more about building out the ecosystem to support both aspects.’ Fyles noted that getting IPUs


into data centres and the hands of scientists was the primary goal for the first generation. This then allowed Graphcore to begin engaging and developing an ecosystem of developers and applications. ‘A lot of it was about bringing IPU


to the world and getting people to understand how it worked and how we could move forward with it,’ stated Fyles.
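Among the workloads mentioned above are Markov Chain Monte Carlo simulations. As a rough illustration of what such a workload looks like, here is a generic random-walk Metropolis sampler in plain Python; this is not Graphcore's implementation, and the function names and parameters are purely illustrative:

```python
import math
import random

def metropolis_hastings(log_prob, n_samples, x0=0.0, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler (generic sketch)."""
    rng = random.Random(seed)
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)   # propose a random-walk step
        lp_cand = log_prob(cand)
        # Accept with probability min(1, p(cand)/p(x))
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Sample from a standard normal: log p(x) = -x^2/2 (up to a constant)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 20_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The appeal of hardware like the IPU for this class of workload is that many independent chains of exactly this loop can run in parallel.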


The second generation of IPU


technology drives performance and allows users to scale to much higher levels than was previously possible. Scalability is something that Graphcore has spent a lot of time on to ensure that both software and hardware can make use of the additional resources. ‘We have now got this scalable


system that is built around our M2000 platform, which is a 1U server blade containing four IPUs, supported by the Poplar software stack. We can build a POD16, as we call it, which is four of our M2000s connected together – but we can also scale it up a lot further to have 64 IPUs or 128 IPUs and allow the software stack to program it in a similar way,’ added Fyles. But driving adoption is not just about hardware.
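The scaling arithmetic Fyles describes is simple: each M2000 blade carries four IPUs, so a POD16 is four blades, and larger pods follow the same multiple. A minimal sketch of that arithmetic (the function names are illustrative, not a Graphcore API):

```python
IPUS_PER_M2000 = 4  # each 1U M2000 blade contains four IPUs

def ipus_in_pod(num_m2000: int) -> int:
    """Total IPU count for a pod built from M2000 blades."""
    return num_m2000 * IPUS_PER_M2000

def m2000s_needed(num_ipus: int) -> int:
    """Blades required for a target IPU count (must be a multiple of four)."""
    if num_ipus % IPUS_PER_M2000 != 0:
        raise ValueError("IPU count must be a multiple of 4")
    return num_ipus // IPUS_PER_M2000

pod16 = ipus_in_pod(4)  # a POD16 is four M2000s connected together
```

The 64- and 128-IPU configurations Fyles mentions correspond to 16 and 32 blades respectively under this arithmetic.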




Fyles stressed that the software and hardware have been designed from the ground up to ensure ease of use, scalability and performance: ‘The common theme is the software development kit and all the great work that the team has done on it to deliver a mature product has continued across both generations and will continue in the next generation that follows.’ ‘We had the opportunity to redesign


the software stack alongside the hardware from the beginning, and to steer both beyond where they originated in the HPC space,’ added Fyles. ‘We have had the opportunity for a clean-slate approach to both software and hardware, which means we can potentially find performance where others cannot, and we have much less legacy software to maintain over time. We have built the software and hardware together in a way that allows us to very quickly realise performance.’ Fyles stressed that huge amounts of time and resources have been spent on the Graphcore developer portal, populating it with code samples, videos and worked examples of how to use and get the most out of IPU products. ‘We are trying to build an ecosystem


of IPU developers that starts small and grows over time. We do that in a number of ways; one is relatively organic: interesting people come to us with interesting problems for the IPU to solve, and that is more of a traditional sales engagement,’ said Fyles. ‘Then there is the academic programme that we have introduced recently, which starts to work with key academic institutes around the world. The IPU is a platform for research and development that is different from anything they have now, but it is also easy to use, well supported and documented. People can take applications now and they can do things without asking for our help. There are a number of great projects I have seen, for example, at the University of Bristol.’

The University of Bristol’s High-Performance Computing group, led by Professor Simon McIntosh-Smith, has been investigating how Graphcore’s IPUs can be used at the convergence of AI and HPC compute for scientific applications in computational fluid dynamics, electromechanics and particle simulations.




‘Traditionally, that seems like


something that we would have had to give a lot of help with and that is the good thing. People are starting to solve challenges and use IPU themselves. We see a lot of great feedback, which helps to push the software forward,’ explains Fyles.


There are many different


technologies available to AI and ML researchers including GPUs, IPUs and FPGAs to name a few. However, Graphcore’s Fyles believes that only technologies that have been designed around ease of use for software engineers can succeed. As such, technologies such as


FPGAs will most likely never see wide adoption in the AI and ML space, as they require skills that most HPC and AI developers do not have. ‘People have spent a lot of time


trying to turn it [FPGAs] into a platform that software engineers can use, but the best platform for a software engineer is a processor that uses a standard software stack with software libraries that they are familiar with,’ stated Fyles. ‘You can abstract a lot of it away, but that is why it is hard to get wide adoption on such a platform, because the people using the product are software engineers. Now it is all about Python and high-level libraries, and ultimately people do not always care about the hardware underneath; they want their application to run and they want it to go fast,’ added Fyles. ‘There are still the low-level, close-to-bare-metal developers that have always existed in HPC, but the wider audience is using Python and high-level machine learning frameworks. That is where a lot of the new developers that come out of university have been trained. They do not expect to go down to the hardware level.’



