high-performance computing space and partly in the numerical software engineering space. Deemed 'research software engineers' (RSEs) in the UK, these people are usually forsaken by the career-measurement systems of the research environments they operate in; they are neither pure researchers nor traditional support staff. In the HPC world, RSEs need to
understand HPC technology as well as software and research. RSEs are probably better funded and acknowledged in HPC than elsewhere in research, thanks to the science application support teams deployed at most supercomputer centres. However, the recognition is still well short of what is needed, and they usually still suffer the same career challenges. Good progress has been made in the
UK recently, with the EPSRC funding RSE Fellowships and the Software Sustainability Institute (funded by EPSRC, BBSRC and ESRC). This year should see more people funded or recruited on their ability as research software engineers, rather than as pseudo-researchers or computer support staff, as previously. It should also see greater international debate and co-ordination in this area.
Software: the issue in the shadows
One important reason for recognising the role of RSEs, and making HPC research software a more attractive career, is the bottled-up disruption that has been hiding in plain sight for years. The technology of HPC is changing substantially, towards many-core processors that require lots of fine-grained concurrency to deliver useful performance. Complexity in the data hierarchy is burgeoning; then there are the issues of energy efficiency, system topologies, scalability, security, and reproducibility. A disconcertingly large proportion of the software used in computational science and engineering today was written for friendlier and less complex technology. An explosion of attention is needed to drag software into a state where it can effectively deliver science using future HPC platforms. In many cases, the issue is to address scalability, concurrency and robustness ('software modernisation'). However, there will be a sea of applications for which the need is not only better implementation but reconsideration of the algorithms and the scientific approach, if we are to exploit the new HPC technologies effectively for science and engineering. To achieve this, we need a legion of happy and capable HPC RSEs, including the hoped-for influx when the diversity failures get addressed. Access to a sufficient number and quality of HPC software engineers is already a competitive differentiator for commercial and academic users of HPC, and this will become more acute in the coming years.
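To give a flavour of what 'fine-grained concurrency' means in practice, here is a minimal, generic sketch (illustrative only, not drawn from any particular application): a simple loop in C annotated with OpenMP so that a many-core processor can spread the work across threads and vector lanes.

/*
 * Illustrative sketch only: a daxpy-style loop exposed to thread-level
 * and SIMD parallelism via OpenMP. Names and sizes are hypothetical.
 * Compile with, for example: gcc -O2 -fopenmp example.c
 */
#include <stdio.h>
#include <stdlib.h>

/* y = a*x + y over n elements, work shared across threads and vector lanes */
void daxpy(int n, double a, const double *x, double *y)
{
#pragma omp parallel for simd
    for (int i = 0; i < n; i++) {
        y[i] = a * x[i] + y[i];
    }
}

int main(void)
{
    int n = 1 << 20;
    double *x = malloc(n * sizeof(double));
    double *y = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    daxpy(n, 3.0, x, y);

    printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
    free(x);
    free(y);
    return 0;
}

Real modernisation efforts go far beyond a single annotated loop, of course; data layouts, communication patterns and often the algorithms themselves have to change. But this is the granularity of concurrency that many-core processors demand.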
OpenHPC vs. OpenPower
Many commentators point to the competition between Intel's KNL and Nvidia's Tesla as the key technology battle of 2016. However, I think the interesting one will be Intel vs. IBM. Each is doing its best to dress this up as having a community behind it, with IBM's OpenPower initiative in the blue corner and the Intel-led OpenHPC initiative in the other blue corner.
IBM's Power architecture combined with Nvidia GPUs, as in the US Department of Energy's forthcoming Coral supercomputers, has several attractive characteristics and is set to compete with supercomputers based around Intel's Xeon and Xeon Phi products. Intel, meanwhile, is positioning itself to take on more of the ecosystem – the OmniPath interconnect, software stack definition, and more – maybe even towards providing whole solutions. There are benefits to buyers and developers, sureness of compatibility for instance. But the risk to Intel is that it awakens the inherent distrust of monopoly suppliers within the HPC community, and triggers a reaction against 'Intel everywhere' (whether backed by OpenHPC or not).
IBM has announced ambitions towards a much greater market share of HPC. The Nvidia tie-up will be key, as will IBM's ability to convince buyers and developers that, in spite of having cut a lot of its HPC business, it is truly committed to HPC and is investing in a sustainable ecosystem. And, while the two giants, IBM and Intel, adjust their positions, ARM is quietly but surely building up its HPC focus, with active ecosystem plans and market share ambitions.
The old ways are changing
'Good old-fashioned HPC' is often thought, narrowly, to consist of running applications written with Fortran and MPI on supercomputers accessed either by owning the hardware or through direct allocations of time. In 2016, we are likely to see alternative models grow. The marketing for running HPC jobs in the cloud has developed some of the same characteristics as early GPU marketing; comparisons to traditional methods are often unfairly constructed to favour the authors' desired conclusions. GPUs matured, both in terms of marketing claims and delivered performance, and cloud will go the same way, probably quite rapidly over the next year or so. Then we will see compute capacity delivered via cloud or cloud-like models being used to run real (even large) HPC workloads in companies. Applications requiring inter-process communication at scale (hundreds of nodes) and significant I/O will continue to struggle in cloud environments, however.
Away from the compute capacity itself, there are stories emerging of Python, Spark, and C++ gaining traction for HPC application software. It will be interesting to see over the course of 2016 how much of this is just 'let's do something different' and how much turns out to be well-considered choices that deliver clear benefits. In the core scientific computing software arena, Fortran (plus MPI or OpenMP) will dominate for a long time yet. But new choices will be used for specific gains – for example, we already have experience of cases where Spark seems to make more sense.
An interesting year ahead
With all this bubbling away, 2016 promises to be an interesting year for HPC. SC15 in Austin was, in my view, one of the most vibrant SC conferences of recent years, and I trust that is reflective of an active and positive year ahead for HPC. But, to me, the main observation I'd call out as valuable advice for 2016 is this: access to a sufficient number and quality of HPC software engineers will be a key competitive differentiator for commercial and academic users of HPC.
Andrew Jones provides impartial advice on HPC technology, strategy, and operation for companies around the world as leader of NAG's HPC Consulting & Services business. He is also active on Twitter as @hpcnotes.