high-performance computing


within the HPC industry, but in the mobile world, where the likes of Nvidia (with the Tegra K1, programmed with OpenACC) and Texas Instruments (with the Keystone 2, programmed with OpenMP) target a market with the potential for sales of billions of devices during 2014.

One of the issues relating to standardisation is that both Intel and Nvidia think that they are the incumbent. In a way, this is true for both companies. Intel is the incumbent in that a large number of parallel applications targeted at Xeon use OpenMP, while for Nvidia, the majority of users targeting accelerators today use CUDA. So, while in the long term both companies may support a standard programming model for accelerators, in the short term they both have to be pragmatic and consider next quarter’s numbers. The next step for OpenMP will be a refinement of the current support for accelerators, probably two years away, by which time there is a danger that OpenACC will have diverged so much that it will be difficult to reconcile the two. In the meantime, shared memory between mainstream processors and accelerators will make the OpenMP approach easier to work with. OpenACC needs to demonstrate that it is working to advance the accelerator programming capabilities of future versions of OpenMP if it is to avoid being marginalised.
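To make the comparison concrete, here is a minimal sketch of the same loop offloaded to an accelerator with each directive set. The saxpy routine and its data-movement clauses are invented for illustration; a real application would tune both rather more carefully.

/* The same kernel offloaded two ways; only the directives differ. */

/* OpenMP 4.0 target construct */
void saxpy_omp(int n, float a, const float *x, float *y)
{
    #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* OpenACC equivalent */
void saxpy_acc(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

The two directive sets still map closely onto one another; the concern above is that further accelerator-specific extensions could pull them apart.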


CUDA and OpenCL work closer to the metal, with CUDA being Nvidia’s proprietary development environment and OpenCL supporting a wide range of platforms (including Nvidia GPUs). Compiler company PGI produced a CUDA compiler for x86 systems. The company has since been acquired by Nvidia and continues to support the x86 version, so although CUDA is proprietary, it does offer a level of code portability.


Changes in 2014


2014 will mostly be a year of consolidation in the HPC industry. Changes that can be expected in HPC markets in 2014 are:
● A significant number of key HPC ISVs will offer flexible licensing for HPC applications on a range of HPC platforms and in the cloud;
● OpenMP 4.0 compilers will become generally available;
● The number of firms training their staff in HPC techniques will increase;
● Nvidia will ship its next-generation Maxwell GPU and ARM64 devices will come to market; both will be deployed in HPC systems; and
● Many HPC-in-the-cloud deployments will move from prototyping to production.






The low-level options (CUDA and OpenCL) often deliver better performance, but it can be hard work for a programmer to get it right. The high-level options (OpenMP and OpenACC) are much easier to use, but may not deliver the same level of performance out of the box.
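To give a flavour of what ‘closer to the metal’ means in practice, the sketch below writes the same saxpy against the OpenCL host API. It is illustrative only: error handling is omitted and device selection is hard-wired to the first GPU found.

/* A sketch of the same saxpy written against the OpenCL host API.
   Error handling is omitted and the first GPU found is used. */
#include <CL/cl.h>

static const char *saxpy_src =
    "__kernel void saxpy(const int n, const float a,   \n"
    "                    __global const float *x,      \n"
    "                    __global float *y)            \n"
    "{                                                 \n"
    "    int i = get_global_id(0);                     \n"
    "    if (i < n) y[i] = a * x[i] + y[i];            \n"
    "}                                                 \n";

void saxpy_opencl(int n, float a, const float *x, float *y)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Explicit data movement: the programmer owns every transfer. */
    cl_mem dx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), (void *)x, NULL);
    cl_mem dy = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), y, NULL);

    /* The kernel is compiled from source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &saxpy_src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    cl_int kn = n;
    cl_float ka = a;
    clSetKernelArg(k, 0, sizeof(cl_int), &kn);
    clSetKernelArg(k, 1, sizeof(cl_float), &ka);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dx);
    clSetKernelArg(k, 3, sizeof(cl_mem), &dy);

    /* Launch the kernel and copy the result back to the host. */
    size_t global = (size_t)n;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dy, CL_TRUE, 0, n * sizeof(float), y, 0, NULL, NULL);

    /* ...plus the clean-up that the directive models never ask for. */
    clReleaseMemObject(dx);
    clReleaseMemObject(dy);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
}

Compare this with the single directive needed in the OpenACC version earlier; the pay-off for the extra code is finer control over data movement and kernel launch parameters.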


Back in the 1980s, I worked for a software company that provided compilers and development tools for the HPC industry, with a strong focus on array processors (the accelerators of the day). Writing in the company magazine about the hardware platforms emerging around 30 years ago, I noted: ‘More sophisticated software tools will be needed to drive the machines efficiently, but software technology does seem finally to be catching up with the array processor hardware technology.’ In retrospect, I was being rather optimistic. Accelerator hardware today may cost a fraction of what it did way back when, and be the size of a book instead of the size of a fridge, but the promise of easy, automated programmability remains just out of reach. Maybe next year.


HPC skills

Benchmarking of HPC applications is important for two reasons. First, it aids understanding of performance and can help focus new ‘tuning’ work in the right place. Secondly, it can identify the best architecture for a specific application.
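As a minimal illustration of the first point, even a coarse wall-clock measurement around a suspected hotspot shows where tuning effort is likely to pay off. The stencil_sweep routine below is hypothetical, standing in for real application work, and omp_get_wtime is used simply as a convenient portable timer.

/* Time one candidate hotspot; stencil_sweep() stands in for real work. */
#include <omp.h>
#include <stdio.h>

void stencil_sweep(double *grid, int n);   /* hypothetical application kernel */

void time_hotspot(double *grid, int n, int reps)
{
    double t0 = omp_get_wtime();
    for (int r = 0; r < reps; r++)
        stencil_sweep(grid, n);
    double elapsed = omp_get_wtime() - t0;

    /* Report time per sweep so runs of different lengths are comparable. */
    printf("stencil_sweep: %.3f s per sweep over %d repetitions\n",
           elapsed / reps, reps);
}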


But to deliver the best performance for a given application, and to understand how to write meaningful benchmarks, the user often has to become an HPC expert in addition to being a domain expert – something that many scientists and engineers either don’t want to do, or don’t have the time to do. HPC systems are becoming more complex, exploiting multi-core, multi-socket systems in clusters that are supported by heterogeneous accelerators. While some universities are doing an excellent job of promoting HPC skills to both their students and industry, parallel programming is still a black art for many programmers. There is a significant skills gap, with many organisations training their own HPC staff as they cannot find skilled HPC programmers. More HPC training is required in industry, academia, and even in schools, to excite the next generation of programmers to consider HPC as a career that can be both fun and rewarding.


Processors and accelerators

One change expected for 2014 is the arrival of 64-bit ARM processors, which will drive the interest in ARM for both enterprise and HPC users. While x86 is a great processor family, it is relatively expensive and does not offer the same opportunities for collaboration in an open ecosystem that ARM does. Licensees of the 64-bit ARM core designs include AMD, Broadcom, Samsung, and STMicroelectronics. Although ARM is already being used for HPC by the Mont Blanc project, which is funded by the European Commission, some people in the industry think it more likely that ARM





