
2015 – A year of disruption?

For high-performance computing in 2015, the watchwords will be 'adapt, disrupt, and be nimble', according to Julian Fielden

The overall HPC technical server market will grow at a healthy 7.4 per cent yearly rate, with revenues reaching $14.7 billion by 2018, according to

the market research company IDC. Many of those sales will be driven by national science and research projects and by direct bolstering of HPC infrastructure, which governments around the world seem increasingly keen to support.

In the UK, for example, government funding has traditionally been used to enable people to acquire technology rather than to develop it; the UK has been a follower rather than a leader, relative to other countries. However, the government has shifted its view recently and begun to invest in research and development, adding to the intellectual capacity of the nation and creating new, high-value employment opportunities.

The UK government says it is determined to make Britain 'the best place to do science and research'. In 2011, it injected £145m into HPC; it committed £270m to quantum computing in December 2013 and £73m to big data in February 2014; and it changed the national curriculum in September 2014 to introduce software coding at a much younger age. In the past month, the government has launched a £42m Alan Turing Institute to research big data, committed £113m to expand the excellent work done at the Hartree Centre, and announced funding for a £200m science institute for the north of England concentrating on materials science.

Sadly, the UK government's efforts don't yet match the funding efforts of other countries such as the US and China, or even France and Germany, but it's certainly a notable change of stance and I can see this funding continuing in 2015, particularly with next year being an election year.

The move of HPC out of science and research into more commercial environments is a natural evolution, achievable both in the UK and in many other countries. As problems become bigger in the business world, managers will seek solutions and will continue to adopt techniques used in the technical world at an accelerating rate. Right now, the HPC market can be divided into two – high-performance technical computing and high-performance business computing – currently split 75 per cent technical and 25 per cent business. After data is created, people want to store it, manage it and gain insight from it, so I think we will see the 75/25 split even out more in 2015.
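The growth projection quoted above can be sanity-checked with a quick compound-growth calculation. This sketch assumes a 2013 base year and a five-year horizon (IDC forecasts typically run over five years); those assumptions, and the implied base figure, are mine rather than the article's.

```python
# Sanity-check of the quoted IDC projection: $14.7bn by 2018 at
# 7.4 per cent compound annual growth.
# Assumption: 2013 base year, five-year forecast horizon.
CAGR = 0.074
TARGET_2018 = 14.7  # $bn

# Implied base-year revenue: target / (1 + CAGR)^years
implied_2013 = TARGET_2018 / (1 + CAGR) ** 5
print(f"Implied 2013 market size: ${implied_2013:.1f}bn")  # ≈ $10.3bn

# Year-by-year path from that implied base
for year in range(2013, 2019):
    revenue = implied_2013 * (1 + CAGR) ** (year - 2013)
    print(year, f"${revenue:.1f}bn")
```

Working backwards like this gives a base of roughly $10.3bn, which is in the right range for the HPC technical server market at the time, so the headline figures are internally consistent.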

Technology perspective

The push toward Exascale – the industry utopia for power and performance – is exciting, but sadly it isn't going any faster today than it did 12 months ago, because it depends on having millions upon millions of compute cores, and the cost and heat output are prohibitive. Right now, realistically, we only have x86 servers and GPU and Phi accelerator technology; that's not enough to get us across the line.

But there have been two notable global developments on the Exascale front in the past few months, which could change this rapidly. The US Government has set a date of 2023 for its first Exascale supercomputer – although a lot could change politically before that date is reached. Notably, the system is planned to use the IBM Power chip. Japan also recently completed its Exascale feasibility study, started in 2012, which provides design specifications to support the creation of a high-end computing system by 2018, although some commentators suggest 2020 is more likely.

At SC14 this year, Nvidia had a strong presence, with a lot of noise being made about the K80 next-generation GPU. We also spotted the introduction of the GPU-accelerated C4130 PowerEdge from Dell, a dense rack server designed to accelerate a range of demanding workloads, including HPC. I'm not sure the hype is fully justified at the moment. In the Top500 supercomputers, which need accelerators to reach such massive processing speeds, there were just 62 deployments in the June 2014 list [44 GPUs, 17 using Phi and two using ATI Radeon].

From my own perspective, for the UK, co-processors and accelerator technology, such as GPUs and Intel's Phi, are still being tested, with people trying to find out how they can best use them. They really aren't being as well utilised as one might expect. We haven't really seen our customers – academics, manufacturers, engineers and so on – turning to co-processors in any great quantity. That said, applications do run faster – so sooner or later there will be more of an uptake of accelerator technology.

One other technology worthy of mention is HPC virtualisation. Virtualisation does have a negative impact on server performance, but it has a positive impact on manageability. Virtualisation technology now has far fewer overheads, so in the next 12 to 18 months people will be considering a virtualisation layer on their HPC infrastructure because it simplifies the management. This will help drive HPC systems to become more of a private cloud, serving the needs of different users.

The future is not a rail track, fixed and straight; we follow a big wide road where we need to adapt, disrupt, and be nimble. Organisations need to keep adopting the latest technological solutions to face their challenges. Vendors and integrators need to ensure they can deliver and support these new solutions, to offer not just HPC but HPC on demand, big data storage and management, plus predictive analytics. It's the way of the HPC market.

Julian Fielden is managing director of OCF, the UK-based HPC, big data and analytics integrator
