Compilers keep up with HPC systems evolution


Doug Miles considers the challenges compilers will face in the new world of high-performance computing


Twenty years ago, the 'Attack of the killer microprocessors' was in full swing. Vector and custom processors for high-performance computing (HPC) systems, which up until then comprised almost 90 per cent of the Top500 list, were in decline. Because they used expensive and relatively exclusive processor technologies, vector systems were out of reach for many computational researchers. Change came in the form of new and powerful microprocessor-based RISC/UNIX systems available from multiple vendors.

PGI was the de facto compiler supplier for the Intel i860 CPU, one of those killer micros. We honed auto-vectorisation technology for i860 CPUs and Fujitsu µVP co-processors by comparing our code generation to Cray C90 compilers. Unfortunately, the i860 was already scheduled for end-of-life after a brief run as a technical computing chip purpose-designed to complement Intel's general-purpose x86 PC processors. The Fujitsu µVP showed great promise as a co-processor technology, and was the first heterogeneous target for our HPC compilers, but in the end never flourished due to low adoption.

This posed a challenge, as we had to deliver differentiated parallel compilers across a wide array of CPUs, including MIPS, Alpha, Power, Sparc and PA-RISC. PGI chose a translator strategy to develop parallel Fortran for scalable MPP systems. Targeting Fortran 77 as an intermediate language, we delivered parallel compilers for distributed-memory machines that leveraged the huge ongoing investments by the CPU suppliers and HPC system builders in their proprietary optimising Fortran compilers. This strategy enabled PGI to thrive as an HPC compiler supplier in the 90s, a period when RISC/UNIX systems would eventually comprise almost 90 per cent of the Top500.


Era of consolidation

However, this diversity of CPUs would not continue. With the launch of the Intel Pentium Pro CPU in 1995, momentum in the technical workstation market shifted almost overnight from RISC/UNIX to Wintel-based systems. When Sandia National Labs built the ASCI Red supercomputer in 1997, based on Pentium Pro processors, the world's first TFLOPS-capable HPC system was born.

The confluence of ASCI Red, the rapid growth of Linux, and the relentless pace of Intel's process technology development fostered the Linux cluster revolution, enabling new levels of computational performance and capability for scientific research. PGI was the compiler supplier on ASCI Red, and adapted the resulting compilers for the rapidly growing Linux/x86 market. ASCI Red Storm, Sandia's follow-on system in 2005, was based on the AMD Opteron x86-64 architecture. By the end of 2009, Linux/x86-64 processor-based systems had almost completely displaced RISC/UNIX HPC systems. Aside from x86, only IBM Power CPUs survived in HPC, as part of the long, successful run of Blue Gene systems.


Processor architecture

While this era of homogeneity in HPC platforms made system configuration and programmability more uniform for scientists and researchers, it came at a cost. Innovation slowed, promising new architectural alternatives and compiler technologies failed to flourish, and ultimately less competition led to rising prices and fewer choices.

Yet, after a 10-year period of consolidation, the HPC market has shown once again that it will never stagnate. Three new technologies have emerged as positive disruptions in the HPC hardware market.

The first is the rise of graphics processing units (GPUs) for use in HPC systems. Based on the same technology that processes computationally intensive graphics in video games and professional design applications, these GPU accelerators provide massive compute processing power to accelerate the complex algorithms used in scientific applications.

Second is the emergence of CPUs based on the ARM processor architecture, gradually climbing the food chain from embedded systems, to mobile phones, to the wide array of tablets and portable computing devices that are now pervasive.

Third, in just the past year, IBM announced a server and HPC roadmap based around commoditisation of its Power CPU architecture. By opening up Power to HPC technology partners through


Figure: Composition of the TOP500 by processor architecture

@scwmagazine | www.scientific-computing.com

