as it looks to increase adoption of other technologies – as has been the case with the release of its Xeon Phi coprocessors. ‘From there it can try to build its market share back up again. Intel is in a position where it could really still win, even if it loses a lot of market share initially,’ Snell concluded.
In a volatile, fast-paced industry such as HPC, it is hard to predict today exactly how the market will be carved up among the various technologies. The increased competition from the likes of ARM and IBM, alongside the already established presence of Intel,
plus other technology providers such as Nvidia and Mellanox, will drive technological development across the industry.
Snell explained that, ultimately, there are compelling arguments on either side of the Intel-IBM debate: ‘Intel will argue that there is a lot of benefit to having all of these technologies integrated into a single architecture, while IBM is selecting best-of-breed technologies at every step in the value chain.’ People will choose the architectural mix that suits their workflows on an ongoing basis, he said.
Steve Conway explained that although there is some healthy competition, the lines of battle are not quite as clearly drawn as we would like to think. He said: ‘If you look at OpenPOWER, they position themselves, along with their chief partners Nvidia and Mellanox, as the alternative to x86. That is not really a clean positioning, because x86 processors are often used in conjunction with Nvidia and Mellanox technology. Nevertheless, what you see emerging is a battle of ecosystems.’
Architectures everywhere, but not a programmer to code

The main difficulty with the increasing number of architectures available to the HPC community is that they are becoming more specialised in order to differentiate the technology in such a competitive market. Snell said: ‘You have to have specialised tools for the various architectures. Intel is going to have tools based around the Xeon and Xeon Phi roadmap. CUDA has really been a grass-roots effort started by Nvidia around its own GPUs, but even if you go to some of the outliers like AMD, with its own line of accelerated products, they are trying to do a more open, community-based initiative with OpenCL. The problem is they do not have nearly the adoption that Intel or Nvidia has, and AMD has been losing that race.’
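The split Snell describes is visible even in trivial source code. The sketch below is a minimal, hypothetical CUDA vector-add program (the vecAdd name, block size and launch parameters are illustrative, not taken from any vendor example); the comments mark the points where an OpenCL port for AMD hardware would have to diverge.

#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: one thread per array element.
// In OpenCL the same loop body would live in a separate kernel
// source, compiled at runtime through a different host API.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Managed (unified) memory keeps the example short; it is a
    // CUDA-specific convenience with no direct OpenCL 1.x equivalent.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;                         // a common block size
    const int blocks = (n + threads - 1) / threads;  // cover all n elements
    vecAdd<<<blocks, threads>>>(a, b, c, n);         // CUDA-only launch syntax
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Even for arithmetic this simple, the memory management, launch syntax and tuning advice are all ecosystem-specific, which is why developers tend to commit to a single vendor’s toolchain once their codes are optimised.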
Ultimately, adoption of a new architecture for HPC relies on widespread adoption of its programming models by the user base. Without a sufficiently large user base generating code libraries and developing expertise around a given programming model, an architecture is very unlikely to achieve sustainable success in the HPC market.
The unfortunate knock-on effect of these competing technologies is that they effectively split the total user base. As the tools become more specialised, users must choose which architecture they will spend their time learning and optimising for their specific workloads. Snell said: ‘AMD hasn’t lost based on processor technology. Where they have lost is on the ecosystem around the processor.’
Steve Conway drew a parallel with the Chinese HPC market, which has some of the largest supercomputers in the world. While China has the systems, in some areas it lacks the programming expertise to utilise the hardware fully. Conway said: ‘The issue in China is more to do with the size of the user base. China has really increased its capacity dramatically over the last few years in supercomputing, but the user base is still not very large, so a lot of those machines are under-utilised.’
This is a conundrum that faces the entire HPC industry, as there are limited numbers of programmers with HPC expertise. Splitting them by architectural platform and programming model splinters that skilled base still further.
In 2014, Intersect360 Research surveyed HPC centres for a report written for the US Council on Competitiveness.
The results showed that software scalability was viewed as the largest barrier to a tenfold increase in the scalability of their applications. Conway said: ‘Software scalability will be a bigger and bigger issue, because what has happened over the last 15 years is that compute architectures have gotten extremely compute-centric. It was almost always the case that the CPU was the fastest part of the system, but that has become very exaggerated. Now CPUs are not being utilised effectively.’
Conway concluded: ‘So much – more now than ever – is based on software. Leadership will be determined far more by software advances than hardware over the next decade.’