HPC predictions


John Shalf, group lead for the NERSC Advanced Technologies Group at Berkeley Labs


I think it's going to be a landmark year, with the commercial release of Intel's first MIC system, Knights Corner. Intel has been touting MIC as an alternative to GPUs that is easier to program and, being more like a regular CPU, able to deliver similar benefits in performance and energy efficiency. However, as access to Knights Corner has so far been very restricted, there is no independent verification of that. Many research groups will finally get first-hand experience with MIC, and there will likely be a flood of benchmarking and papers this year that will give us a broader perspective on its competitiveness in the HPC market. There are two major experimental clusters


in this area: 'Titan', which will be installed at Oak Ridge National Laboratory and composed of Nvidia GPUs, and a Knights Corner-based cluster, which is being built at the Texas Advanced Computing Center. The preliminary benchmarking comparing those two approaches will be interesting and will certainly determine the future of accelerator-based computing – whether it falls along the GPU or the CPU track. As Knights Corner hasn't emerged from an existing market, Intel will need to prove that the HPC market is


willing to pick it up. This will be a big deal for MIC because it must establish itself in the HPC market, whereas GPUs still have the graphics market to support them (HPC is just an additional opportunity for GPUs). This means that not only will MIC have to be competitive relative to GPUs; it will also have to be self-sustaining. Another alternative to GPU vs. MIC


that has been emerging is embedded processor technology applied to servers and HPC. Embedded design processes for consumer electronics have always focused on cost-efficiency and on delivering more performance per Watt – the two primary concerns for the advancement of HPC systems. However, only recently has the floating-point performance of this technology made it feasible to play in the HPC/server space. It offers an alternative to the GPU vs. MIC 'accelerator' competition, and tailored system-on-chip solutions offer better integration. We expect numerous products based on ARM and Atom processors to emerge in this space from mainstream vendors this year. We're all sitting on the fence right now as to where the future lies – is it GPU, CUDA, OpenCL, or OpenMP with ornamentation, for example (a rough sketch of the directive-based option appears below)? Everyone recognises that these developments mean they will have to rewrite their code, but what's holding everything back is uncertainty over which solution is most likely to succeed. People often deride scientists or engineers for being intransigent or unwilling to rewrite their code, but everyone I speak to is more than willing to do so – they just only want to do it once!

Another big issue is that people are searching for what it means to have data-intensive computing. Everyone has a different answer, and if it continues to be that way for another year then it is in danger of collapsing like a house of cards. I think 2012 will be an important year to solidify what it is to have data-intensive computing and what requirements we need to target it. This will be a make-or-break year for non-macho-flop computing – will it become the 'next big thing', or continue to be a catch-phrase?
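As a rough, purely illustrative sketch (not code from any of the vendors or systems mentioned above), the 'OpenMP with ornamentation' option amounts to annotating an ordinary loop with compiler directives rather than rewriting it in a new language. The hypothetical kernel below uses OpenMP-style target-offload directives; the exact syntax shown assumes an OpenMP compiler with offload support, and the same source still compiles and runs on a plain CPU if no accelerator is present.

    /* Hypothetical example: a SAXPY loop "ornamented" with OpenMP
     * target-offload directives.  The directive asks the compiler to
     * run the loop on an accelerator if one is available; otherwise
     * the loop runs on the host CPU, and without OpenMP the pragma
     * is ignored entirely. */
    #include <stdio.h>
    #include <stdlib.h>

    static void saxpy(int n, float a, const float *x, float *y)
    {
        /* Copy x to the device, copy y both ways, and parallelise
         * the loop across the device's threads. */
        #pragma omp target teams distribute parallel for \
                map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        if (!x || !y)
            return 1;
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);
        printf("y[0] = %f (expect 5.0)\n", y[0]);

        free(x);
        free(y);
        return 0;
    }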


Brent Gorda, CEO of Whamcloud

For the first time ever, the November 2011 Top500 list of the world's most powerful supercomputers listed no new systems. Does that suggest some level of stability in high-performance computing? On the contrary, it is merely a lull before the storm. 2012 will see dramatic change, with new systems coming online at unprecedented scale and performance. Even without much of a crystal ball, it is


clear the competition to position and develop exascale technologies will intensify. The competition with China, in particular, will influence funding levels, and Big Data, already important outside of high-performance computing, will be a dominant trend. In the US alone, there are three major


systems being prepared. I am personally thrilled to see the results of my involvement with Sequoia, a 20 Pflops (peak) system based




on IBM BlueGene/Q technology at Lawrence Livermore National Laboratory (LLNL). At Oak Ridge National Laboratory (ORNL), they are upgrading Jaguar with GPGPUs to nine times its current performance and to the same 20 Pflops. And Blue Waters at the University of Illinois was big in the news last year as the contract shifted from IBM to Cray. It will come online with a performance of more than 11 Pflops (peak). What do these powerful new systems


indicate? Big Data. As these systems grow, they create an explosion in the quantity of data that they produce and consume at ever-increasing rates. For Sequoia, LLNL is standing up a 55 Pbyte (formatted) file system. Their performance goal for this Lustre file system is 1 TB/s sustained. The Lustre file system at ORNL is expected to reach 10-20 Pbytes in size and run at 700 Gbytes per second. I was at the XLDB meeting at SLAC


where Google described their daily I/O feed for their worldwide infrastructure running at just over 1 TB/s. The US labs are striving to


do that in a single POSIX file system. Clearly the big brains in HPC know how to wrangle Big Data.
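To put those targets in perspective, here is a back-of-the-envelope calculation using only the figures quoted above, and treating the quoted rates as sustained:

    55 Pbytes at 1 TB/s        = 55,000 s, roughly 15 hours to write the Sequoia file system once
    20 Pbytes at 700 Gbytes/s  ≈ 28,600 s, roughly 8 hours for the ORNL system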


I think, however, that the truly unanticipated trend is cloud. HPC in the cloud is real. There are many in the HPC community who see 'cloud' as a loose, undefined marketing term. But I predict we will see more and more cloud providers offering HPC technology as services like Penguin Computing's 'POD' (Penguin on Demand) HPC cloud service gain traction. Cloud has the potential to make HPC more available and cheaper than was ever thought possible. Imagine the ability to contract for HPC


jobs temporarily, as needed, virtually ‘from home’. HPC in the cloud can provide this – especially if it addresses the I/O problem. It will open up not only commercial HPC markets, but markets where HPC has not even been considered a possibility before. One of the reasons HPC is an exciting place to be is the rapid pace of change. So don’t be fooled by the lack of change in the November 2011 Top500. It’s the calm before the storm.





