HIGH PERFORMANCE COMPUTING


accomplish their goals, not just to sell them the latest kit. There is no reason to add technologies just because they are the flavour of the month – but, used in the right place, they can provide huge performance benefits. One example of the use of disruptive technologies in HPC can be found in the recent contract from the Faculty of Mathematics at the University of Cambridge, which selected Hewlett Packard Enterprise to supply its latest HPC system.

The Cambridge Faculty of Mathematics will leverage this new system, in partnership with the Stephen Hawking Centre for Theoretical Cosmology (COSMOS), to understand the origins and structure of the universe. The new supercomputer combines HPE's Superdome Flex system with an HPE Apollo supercomputer and Intel Xeon Phi systems, enabling COSMOS to tackle cosmological theory with data from the known universe – and to incorporate data from new sources, such as gravitational waves, the cosmic microwave background, and the distribution of stars and galaxies.

'The in-memory computing capability of HPE Superdome Flex is uniquely suited to meet the needs of the COSMOS research group,' said Randy Meyer, vice president and general manager for Synergy and mission-critical servers at Hewlett Packard Enterprise. 'The platform will enable the research team to analyse huge data sets in real time. This means they will be able to find answers faster.'

'High performance computing has become the third pillar of research, and we look forward to new developments across the mathematical sciences in areas as diverse as ocean modelling, medical imaging and the physics of soft matter,' said Professor Nigel Peake, head of the Cambridge Department of Applied Mathematics and Theoretical Physics.

Beyond the use of Xeon Phi, OCF's Fielden also noted several other potentially disruptive technologies that are now finding their way into mainstream use in HPC: 'GPUs have already established themselves as vital to obtaining a step change in performance, taking a lead in artificial intelligence and machine learning. One of the challenges HPC systems are going to face going forward is power consumption, and ARM is looking to secure a footprint in the HPC environment with its low-power processors, which could prove disruptive.'

Fielden also noted two relatively new processing technologies that could make waves in HPC over the next few years, as IBM and AMD both try to establish a foothold in the HPC processor market: 'From the wilderness, AMD has come back to announce its EPYC processor, which could challenge Intel – currently holding 90 per cent of the processor market. IBM is coming back with its Power9 processor, which was selected for two of the CORAL research centre procurements in the US. Its OpenPOWER initiative has led to interesting collaborative arrangements with Nvidia and Mellanox.'

'Some people believe that the cloud will be disruptive; certainly AWS and Microsoft Azure are clearly taking market share. However, for more complex workloads, and for people whose business is research, it is unlikely that cloud will take the place of on-premise systems, because this would prove too costly. The hybrid cloud model will continue to evolve, where organisations will have their own on-premise systems and will burst out into the cloud when needed,' added Fielden.

6 Scientific Computing World December 2017/January 2018

“The HPC system needs to be ‘designed’ not just sold for purpose. Without the services of a good integrator you risk getting a bag of bits, instead of a balanced system”


Adding value

While HPC integrators can certainly help users to select the right technology, in the opinion of Fielden, hardware is no longer the most important piece of an HPC system. 'We've come so far and so fast in the past few years that services, support and consultancy have become equally, if not more, important.' 'The negotiation of the best deal with


the various vendor partners is another vital part of an integrator's added value,' Fielden added.


Much of the added value from


integrators comes from support and maintenance, which is particularly attractive to academic or enterprise users that do not have large-scale in-house expertise in this area. In these situations an integrator can help to manage and support the system, to ensure that it is kept operational and working as efficiently as possible.

'Post-sales support is a key role in high-performance computing – for every system supplied, customers have a support requirement, and that will vary. Some customers need far more support than others, so integrator-led project-development workshops add great value in working out the best level of support for a customer. It is, in the end, this ability to get close to, and understand, the customer that is the secret of success for integrators,' said Fielden. 'The value of the integrator sits above


the hardware and software – it's the skills we have in designing the solution, pulling it together, and supporting it. We have a library of thousands of scripts, written over the years, that help the systems to run better,' Fielden concluded.


@scwmagazine | www.scientific-computing.com

