HPC 2015-16 | Processors


Designed specifically to handle massive amounts of data, IBM Power Systems Linux servers are marketed as an alternative to x86 servers


HPC evolves from the commodity cluster


Robert Roe surveys the processor market that underlies the next generation of HPC systems


High performance computing is on the brink of its biggest transition in two decades. Ever since Thomas Sterling and Donald Becker built the first Beowulf cluster at NASA in 1994, mainstream high-performance computing has consisted largely of clusters of server nodes networked together, with libraries and programs installed to allow processing to be shared among them. This represented a major shift away from the proprietary, vector-based processors and architectures that had previously all but defined the term 'supercomputer', because, initially at least (and certainly in the classical Beowulf configuration), the servers in the clusters were normally identical, commodity-grade hardware.

Although the hardware may be more specialised today, the clusters are still dominated by the general-purpose x86 processor, in marked contrast to the vector machines that appeared in the early 1970s and dominated supercomputer design until the 1990s, most notably those developed by Seymour Cray. The rapid fall in the price-to-performance ratio of conventional microprocessor designs led to the demise of the vector supercomputer by the end of the 1990s. So the scene today is one of massively parallel supercomputers with tens of thousands of 'off-the-shelf' processors.

But now the landscape is going to change again. This time the shift will be driven not only by 'technology-push', in the shape of new processors, but also by 'demand-pull', as the nature of HPC changes to more data-centric computing. And there is an external constraint as well: power consumption.
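The cluster model described above can be sketched in miniature: a coordinating process divides a problem into chunks, identical workers compute their chunks in parallel, and the partial results are gathered and combined. This is only an illustration of the idea; a real Beowulf cluster distributes the chunks across networked nodes with a message-passing library such as MPI, whereas Python's standard-library process pool is used here as a stand-in, and the function names are hypothetical.

```python
# Illustrative sketch of cluster-style work sharing (not real MPI):
# a coordinator scatters chunks of a domain to identical workers,
# each worker runs the same kernel, and results are gathered/reduced.
from multiprocessing import Pool

def simulate_chunk(cell_range):
    # Stand-in compute kernel, e.g. updating one block of grid cells.
    start, end = cell_range
    return sum(x * x for x in range(start, end))

def run_on_cluster(n_cells, n_workers):
    # Divide the domain into equal chunks, one per "node".
    step = n_cells // n_workers
    chunks = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(simulate_chunk, chunks)  # scatter + compute
    return sum(partials)  # gather and reduce

if __name__ == "__main__":
    print(run_on_cluster(1_000_000, 4))
```

The same scatter/compute/gather pattern is what MPI's collective operations provide across physically separate, commodity-grade machines.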


Data-centric computing

The change in the nature of computing was recognised in 2014 by the report of a Task Force on High Performance Computing, set up by the Advisory Board to the US Department of Energy (DOE), which stated that data-centric computing will play an increasingly large role in HPC. The report recommended that 'Exascale machines



