ultra-high density infrastructure and high-performance computing platforms, so an ideal environment for systems like BC4. 'The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.'


'We're now in our 10th year of using HPC in our facility. We've endeavoured to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,' said Caroline Gardiner, academic research facilitator. BC4 will replace the now defunct BC2 system, but the BC3 system will be kept running to allow researchers to finish jobs currently planned for the older HPC cluster.

Burbidge explained that the University has a policy of having two systems running in parallel, so they can more easily 'transition the users from one system to another and maximise the value of the older system until it is retired'.


'We will keep the older system for the next two to three years, at which point we will be doing a procurement for BlueCrystal5, which will come in and replace BlueCrystal3. The current big system, BlueCrystal4, will become the backseat system,' Burbidge added.

He was keen to stress the importance of this model, as it allows researchers to continue their research while the new system is set up, configured and finally introduced: 'It can be quite disruptive for them to change machines, and their time and effort is valuable, so we do not want to force them to change from one system to another in the middle of a project. We would rather they completed ongoing projects on the older system, then started new projects on the new one, to minimise the effort they need to make to transition.'


Getting software up to speed

These upgrades allow researchers to make large gains in application performance, but this requires code to be re-compiled and optimised. 'If you have a code that runs on the older system, then it will run on the new one, but it is probably not going to have the performance. We would like users to re-compile the code as a minimum, and to look at taking advantage of the features of the system,' said Burbidge. 'Depending on the kind of application you are running, you would want to tweak the memory that each core was using. That could take a bit of work, but equally it enables bigger runs and much more detailed simulations, or users could run on more cores and get the results quicker.'


However, it is not just increased core counts and memory that the university has provided for the new system; it has also adopted some more exotic HPC technologies, such as Nvidia Pascal GPUs and DDN's Infinite Memory Engine (IME), to bolster the performance of specific applications. BC4 includes the latest Pascal GPUs on 32 nodes and also includes DDN's IME to accelerate specific applications: a scale-out, flash-native, software-defined storage cache that streamlines the data path for application input/output (I/O) operations. IME interfaces directly to applications and secures I/O via a data path that eliminates file system bottlenecks.

The previous system, BC3, also included a small number of Nvidia GPUs, but Burbidge and his colleagues decided to increase this number, as more applications used at the university are beginning to take advantage of GPU technology. While GPUs can offer a significant performance increase for applications that have been written specifically for CUDA, Burbidge stated that this requires significant investment in re-writing code, which is another reason to support both GPU and non-GPU users across both the BC3 and BC4 systems.


'The thing about programming in CUDA is that you have to program according to the particular hardware that you are running. As a programmer, you need to be aware of the amount of memory you have got and the hardware threads. If you are porting to a new GPU, then you need to tweak the code to take advantage of the new GPUs. That is a non-trivial exercise,' said Burbidge.
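To make that point concrete, the sketch below shows the kind of hardware awareness Burbidge describes: querying the device that is actually present and sizing the data and launch configuration from what it reports. It is a minimal, hypothetical example assuming a single CUDA-capable GPU; the kernel, the fraction of memory used and the block size are arbitrary illustrative choices, not code from the Bristol systems.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial placeholder kernel: scales an array in place.
__global__ void scale(float *x, float alpha, size_t n)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] *= alpha;
    }
}

int main()
{
    // Ask the runtime what hardware is really there (a Pascal card on BC4,
    // an older generation on BC3) rather than hard-coding assumptions.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    std::printf("%s: %zu MB global memory, %d multiprocessors\n",
                prop.name, prop.totalGlobalMem >> 20, prop.multiProcessorCount);

    // Size the working set from the memory this device actually has
    // (roughly a quarter of it, an arbitrary choice for illustration).
    size_t n = prop.totalGlobalMem / sizeof(float) / 4;
    float *x = nullptr;
    cudaMalloc(&x, n * sizeof(float));

    // Derive the launch configuration from the problem size, not from
    // values tuned for a previous generation of GPU.
    int block = 256;
    unsigned int grid = (unsigned int)((n + block - 1) / block);
    scale<<<grid, block>>>(x, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(x);
    return 0;
}
```

A real port goes much further than this, retuning block sizes, shared memory use and data layout for each new generation of card, which is why Burbidge calls it a non-trivial exercise.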


IME could also enable significant performance advantages, particularly for applications with demanding I/O characteristics, as Burbidge explains: 'We have used DDN storage before, and it is very fast. What the IME might allow us to do is to pick up particular applications that have unpleasant I/O characteristics.'

'For example, if the application is using a lot of small I/O operations, that can put a big demand on the disk hardware. We hope that IME will help us to reduce this bottleneck, and we are hoping we will see a speed-up of four or five times on certain applications that display this kind of I/O characteristic.'
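To illustrate what 'a lot of small I/O operations' looks like in practice, the sketch below contrasts writing a result set one record at a time with writing it as a single large request. It is a generic, hypothetical example with invented file names, record layout and sizes, not code from any application running at Bristol, and it is host-only code with no GPU involvement.

```cuda
#include <fcntl.h>
#include <unistd.h>
#include <vector>

// Hypothetical output record, for illustration only.
struct Record { double time; double value; };

int main()
{
    std::vector<Record> results(1 << 20);  // stand-in for simulation output

    // The pattern described above: one small, unbuffered write per record
    // puts on the order of a million 16-byte requests on the file system.
    int fd = open("results_per_record.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (const Record &r : results) {
        write(fd, &r, sizeof(Record));
    }
    close(fd);

    // The same data written as one large sequential request is far gentler
    // on a parallel file system, with or without a flash cache in front of it.
    int fd2 = open("results_batched.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd2, results.data(), results.size() * sizeof(Record));
    close(fd2);
    return 0;
}
```

A cache such as IME is intended to absorb the first pattern when rewriting the application is not practical.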


Whether introducing potentially radical new technologies or iterative upgrades of CPU or memory technology, each new HPC system requires significant work to re-compile and optimise software for the new system. While the technicalities may differ, the process of migrating existing software to new HPC technologies is something that the Bristol HPC team have become accustomed to over the last decade of HPC research.

'HPC is about squeezing as much as you can from every element of the hardware, and as a result of doing that you are constantly looking at all of the performance parameters and optimising those to get the best performance,' said Burbidge. 'We look at the most challenging applications and target those for a particular system.'

