SC12 report


Getting the science done, faster


Now that the adrenalin rush of SC12 is behind us, Tom Wilkie offers a personal view of the themes and emerging trends

Something was lacking in Salt Lake City this November. Despite ill-informed anxieties about the capital city of the Latter Day Saints, it was neither coffee nor beer – both were in plentiful supply. Shockingly, the US Government was missing from the biggest supercomputing event of the year. Both the US Department of Energy (DoE) and the Department of Defense – whose procurement policies over the past few decades have essentially created and sustained the US supercomputing industry – had abruptly cancelled travel for their employees to attend SC12, the supercomputing conference and trade show. More worryingly, they had also cancelled the exhibition stands from which they promote their work.

Jeff Hollingsworth, chair of the 2012 conference, who had therefore spent the best part of the past two years organising the event, found some positives: SC12 had the largest ever number of commercial exhibitors, as well as the largest floor area taken by commercial companies, he said. In the short term, he continued, SC is healthy.

Some DoE employees, predominantly those who had been invited to present papers at the conference, had obtained permission and funding to travel. (Others were so dedicated to the technology, and the event, that they used up their own holiday allocation and, in some cases, travelled at their own expense to be there.) However, with his background as Professor of Computer Science at the University of Maryland, Hollingsworth stressed the importance of conference attendance to science and engineering, warning the cost-cutters that the papers presented at a conference ‘are yesterday’s results. It’s the hallway conversations that are the wellspring of new ideas that generate tomorrow’s results. You don’t get that from Skype conversations.’ He pointed with pleasure to the large number of student papers nominated for the prizes that SC awards each year: ‘It speaks very well for our future. We need to expand the high-tech workforce,’ he said.

[Pictured: Tate Cantrell, chief technology officer at Verne Global]

It has been a mainstay of the US supercomputing shows that the major national laboratories (particularly those funded by the DoE) took a massive amount of floor space and built suitably impressive stands. Although seldom discussed explicitly, this presence (albeit renting floor space at a reduced rate) has served as a closet US Government subsidy to the conference, and thus to the supercomputing industry itself – in addition to the overt purpose of enhancing the profile of the laboratories and the work they do.

Although some attendees felt that this year’s ‘no-show’ was more a reaction to financial scandals elsewhere in the US Government, the symbolism was unsettling. Professor Hollingsworth himself admitted to ‘more than a minor concern’ that, in the current tight fiscal climate, ‘there are questions about whether there will be sufficient [US Government] funding for a full-scale exascale programme.’

Currently, the most powerful machine in the world is a petascale computer: Titan, a Cray XK7 system installed at Oak Ridge National Laboratory in the US. The Top500 rankings are published twice a year, to coincide with the European ISC and the US SC conferences, and in November’s listings Titan achieved 17.59 petaflops (a petaflop is 10^15 floating point calculations per second) on the Linpack benchmark. The system uses AMD’s Opteron 6274 16C 2.2GHz processors and Cray’s Gemini interconnect, and its 560,640 processor cores include 261,632 Nvidia K20x accelerator cores.
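The arithmetic behind those Top500 figures can be sanity-checked in a few lines; this is an illustrative sketch using only the numbers quoted in the text (17.59 petaflops Rmax, 560,640 total cores, 261,632 accelerator cores):

```python
# Sanity-check of the published November 2012 Top500 figures for Titan.

PETA = 10**15  # 1 petaflop/s = 10^15 floating point operations per second

rmax_petaflops = 17.59              # Titan's Linpack (Rmax) result
rmax_flops = rmax_petaflops * PETA  # the same result in FLOP/s

total_cores = 560_640               # total cores, as listed by the Top500
gpu_cores = 261_632                 # Nvidia K20x accelerator cores

cpu_cores = total_cores - gpu_cores # remaining Opteron CPU cores
gpu_share = gpu_cores / total_cores # fraction of cores that are accelerators

print(f"Rmax: {rmax_flops:.3e} FLOP/s")
print(f"CPU cores: {cpu_cores}, accelerator share of cores: {gpu_share:.1%}")
```

Nearly half of Titan's cores are GPU accelerator cores, which is why the machine is often cited as the point at which accelerators moved from the fringe of the Top500 to its summit.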


www.scientific-computing.com


DECEMBER 2012/JANUARY 2013



