The high-performance computing industry is in danger of splitting between those working with Intel (a founder member of the OpenHPC Collaborative Project) and those favouring the IBM Power processor, given that IBM has already created the OpenPower network to develop its ecosystem. Although some companies (Penguin, for example) are members of both, Nvidia, which is partnering closely with IBM, was conspicuous by its absence from the list of founder members of the OpenHPC project. But in an interview with Scientific
Computing World at SC15, Barry Bolding, senior vice president and chief strategy officer at Cray, warned that ‘OpenHPC has to be vendor-neutral and architecture agnostic: you have to do something that is broad to succeed.’ He said that if it became ‘architecture-centric’ then it would be going down the same path as the OpenPower foundation, which Bolding would regard as a retrograde step. Bolding sees the most valuable role of
the OpenHPC collaboration as working on problems that are common to many systems and many vendors – future operating systems, for example – while a company such as Cray would continue to develop its own technology in-house. An example would be Cray's DataWarp storage accelerator, where work on this technology line would act as a differentiator for the company in the HPC marketplace.
[Image: A view of the skyline in Austin, Texas, which hosted this year's SC15 conference]
Addressing software challenges
The OpenHPC Collaborative Project is aimed at software, which remains possibly the biggest challenge for HPC. It will provide a new, open source framework consisting of upstream project components, tools, and interconnections to foster development of the software stack. The members of the project will provide an integrated and validated collection of HPC components that can be used to deliver a full-featured reference HPC software stack to developers, system administrators, and users. Jim Zemlin, executive director of the
Linux Foundation, pointed out that: ‘The use of open source software is central to HPC, but lack of a unified community across key stakeholders – academic institutions, workload management companies, software vendors, computing leaders – has caused
duplication of effort and has increased the barrier to entry. OpenHPC will provide a neutral forum to develop an open source framework that satisfies a diverse set of cluster environment use-cases.’ Reducing costs is one of the primary
goals of the project. By providing an open source framework, it is hoped that the overall expense of implementing and operating HPC installations will be reduced. This will be achieved by creating a stable environment for testing and validation, which will feature a build environment and source control; bug tracking; user and developer forums; collaboration tools; and validation.
Helping the developers
The result should be a robust and diverse open source software stack across a wide range of use cases. The OpenHPC stack will consist of a group of stable and compatible software components that are tested for optimal performance. Developers and end users will be able to use any or all of these components, depending on their performance needs, and may substitute their own preferred components to fit their own use cases.
Developers were also being catered
for on the IBM side of the fence, with an announcement that IBM and fellow OpenPower members have extended their global network of physical centres and cloud-based services for no-charge access to Power-based infrastructure that also uses accelerators. During the week of SC15, a Power8-based accelerated computing cluster developed by IBM and the Texas Advanced Computing Center (TACC) began accepting requests for access from academic researchers and developers. Meanwhile, SuperVessel, a global cloud-based OpenPower resource launched in June, now provides GPU-accelerated computing as-a-service capabilities, giving users access to high-performance Nvidia Tesla GPUs to enable the Caffe, Torch and Theano deep-learning frameworks to launch from the SuperVessel cloud.
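To give a sense of the kind of workload that service is aimed at, the short sketch below shows a minimal Theano computation of the sort a developer might run on a GPU-backed SuperVessel instance: a single dense layer with a sigmoid activation, plus a check of which device Theano has been configured to use. The environment flag and the specific layer shown are illustrative assumptions, not details from the announcement.

# A minimal sketch, assuming a Python environment with Theano and NumPy installed,
# and THEANO_FLAGS='device=gpu,floatX=float32' (or the provider's equivalent) set.
import numpy
import theano
import theano.tensor as T

x = T.matrix('x')                                               # symbolic input batch
w = theano.shared(numpy.random.randn(1024, 1024).astype('float32'), name='w')
y = T.nnet.sigmoid(T.dot(x, w))                                 # one dense layer with sigmoid activation
f = theano.function([x], y)                                     # compiled for whatever device Theano is using

print('Theano device:', theano.config.device)                   # reports a GPU device when a Tesla card is in use
batch = numpy.random.randn(256, 1024).astype('float32')
print('output shape:', f(batch).shape)                          # (256, 1024)

The same script runs unchanged on a CPU-only machine, since the compiled Theano function targets whichever device the configuration specifies, which is part of what makes such frameworks convenient to launch from a hosted cloud environment.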
FPGAs spark interest
But it was the announcement about IBM's interest in FPGAs that attracted attention. It has concluded a multi-year strategic collaboration with Xilinx to develop FPGA-enabled workload acceleration on Power-based systems. Through a private commercial agreement, as well as collaboration through the OpenPower Foundation, the two companies are teaming up to develop open acceleration infrastructures, software, and middleware. They have also developed a new FPGA accelerator service on SuperVessel that makes reconfigurable accelerators available to developers via the cloud. IBM's Gupta placed these developments firmly in the context, as he put it, that ‘accelerators are a well-accepted fact of computing. Xilinx are walking into a situation where there is support for acceleration. At this point, Xilinx is stepping up to build the ecosystem for their FPGAs. It's a huge market opportunity.' The tie-up comes just five months after
Intel bought FPGA-maker Altera for $16.7 billion. Just as IBM sees its Power processor technology as a way of serving both high-performance computing and data centres for commercial computing, so the Altera acquisition was seen as helping Intel defend and extend what is now its most profitable business: supplying the processors used in data centres. Penguin Computing is a member of both
the OpenPower and the OpenHPC networks because, according to Phil Pokorny, the company’s CTO: ‘We want to have the best of hardware from across the spectrum.’ He welcomed IBM’s decision to open up the Power architecture, reporting that in the past