HPC advisory council workshop

HPCAC workshop: High info bandwidth, low cost

Paul Schreier reports back from the HPC Advisory Council's Switzerland workshop 2011


If you're looking for a cost-effective way to get information about HPC, you'd be well advised to consider attending one of the workshops conducted by the HPC Advisory Council. The attendance fee is quite modest, yet the amount of information you'll walk away with is considerable, and you'll have extended direct access to most of the leading companies in this sector. These statements are based on Scientific Computing World's attendance at the three-day workshop held from 21 to 23 March 2011 in Lugano, Switzerland. (Disclaimer: Scientific Computing World is one of the workshop's media sponsors.) Although the HPCAC performs the strategic

organisation by selecting the dates and locations, setting up the programme and selecting speakers, tactical organisation is often handled by a local body, in this case CSCS (the Swiss National Supercomputing Centre in nearby Manno). And although Lugano is located in Ticino, the Italian-speaking section of the country, the official workshop language was, as always, English. There were 150 registered attendees and in

fact, say workshop organisers, others had to be turned away owing to the size of the meeting facilities. People came from as far away as India and the Middle East. The fee for the three-day workshop was 80 Swiss francs (about 50 euros); most of the cost is borne by sponsoring companies. The fee is intended primarily to filter out curiosity seekers who register but don't actually attend, as well as to cover the costs of the off-site banquet held one night. The format consisted of a series of

presentations, followed at the end of the day by a live demonstration of some aspect of setting up and running HPC clusters. The PowerPoints for most talks are available on the HPCAC website (go to the agenda page at switzerland_workshop/agenda.php, then click on a given speaker's name); in addition, many are also posted on YouTube. Speakers come from industry and academia. Further, interspersed among the technology talks are presentations from the event's sponsors addressing their specific solutions or some aspect of HPC in which they are involved. At this workshop, there were several areas

of particular focus: the Message Passing Interface (MPI), high-speed networking, HPC storage and parallel file systems, and GPUs. The first area, MPI, was covered primarily by D.K. Panda and S. Sur from The Ohio State University; Panda leads the research group that developed the MVAPICH/MVAPICH2 (MPI-1 and MPI-2 on InfiniBand, iWARP and RoCE) open-source software packages, making him an extremely well-qualified presenter. In three sessions, he covered the basic principles of MPI; MPI performance measurements and optimisation techniques; and future developments including GPUDirect support, advanced collectives (offload, topology-aware, power-aware, etc.) and compatibility with PGAS (partitioned global address space) models. As for HPC storage, an introductory session

started with a discussion by Hussein Harake of CSCS about experiences with SSDs (solid-state discs) in HPC systems; he was followed by Onur Celebioglu of Dell, who reviewed various HPC storage architectures, and by Torben Kling Petersen of Xyratex, who recounted some amusing real-world lessons he has learned during many years of building HPC storage systems. The third storage session focused on parallel file systems, and the audience was particularly interested in a presentation by Andreas Dilger of Whamcloud, one of the original developers of the Lustre file system. Lustre originally came out of Sun Microsystems, which was then purchased by Oracle, which subsequently discontinued development of this parallel file system. However, says Dilger, seven of the top 10 systems in the Top500 and 70 per cent of the top 100 use Lustre. Thus, much of the Lustre team split off and formed this venture-backed startup to continue the development and support of what is now an open-source parallel file system. Note also that there was a presentation by storage company Xyratex, which recently acquired ClusterStor, a file-system start-up led by Peter Braam, the founder of Lustre.

GPUs were also a large part of the discussion. Andre Heidekrueger of AMD said: 'A data center that could achieve an exaflop in 2018 using only x86 processors would consume over three TW.' Clearly there's a need to get more flops for each watt, so the audience was quite interested to hear what AMD (concerning its upcoming Bulldozer devices) and Nvidia (concerning its Echelon research system) had to say.

There were also descriptions of supercomputing activities in Switzerland (CSCS) and at KAUST (the King Abdullah University of Science and Technology in Saudi Arabia), as well as of SuperMUC, the petaflop system IBM is installing at the Leibniz Supercomputing Centre with the goal of becoming the most energy-efficient large-scale supercomputer, with a target power usage effectiveness (PUE) of 1.1. The subject of energy came up often, and a sponsor presentation by Eurotech reviewed Aurora, a DC-powered, water-cooled cluster with a claimed PUE of less than 1.
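To put the energy figures mentioned above in perspective, here is a quick back-of-the-envelope check. It uses only numbers quoted in the talks (SuperMUC's 1.1 PUE target and the exaflop-at-3-TW claim); the 11 MW/10 MW facility figures are purely illustrative, not from any presentation.

```python
# Sanity-checking the energy numbers from the workshop talks.
# PUE (power usage effectiveness) = total facility power / IT equipment power,
# so a PUE of 1.1 means only ~10% overhead for cooling and power delivery
# on top of the compute load itself.

def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_watts / it_equipment_watts

# Hypothetical example: a 10 MW IT load in a facility drawing 11 MW overall
# hits SuperMUC's stated target.
print(pue(11e6, 10e6))  # 1.1

# Heidekrueger's claim: an all-x86 exaflop machine in 2018 would draw over 3 TW.
# The efficiency that claim implies:
exaflop = 1e18          # floating-point operations per second
power_watts = 3e12      # 3 TW
print(int(exaflop / power_watts))  # 333333 flops per watt
```

Seen this way, the quote makes the case for accelerators starkly: roughly 0.3 Mflops per watt is far below what an exascale power budget could tolerate, which is exactly why the AMD and Nvidia roadmap talks drew so much attention.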

There was plenty more information, far too much to describe in this short summary. The key thing is that virtually all the major players were represented, and during the coffee breaks attendees had the opportunity to get into in-depth discussions about the matters of particular concern to them. Note that the Lugano workshop is the only

three-day event planned for 2011; one-day workshops in 2011 are planned for Germany (Hamburg, 19 June, the day before the actual start of the ISC’11 conference), China (October, details to follow) and the USA (December, details to follow). Preliminary plans are to hold a three-day workshop in Lugano again in the spring of 2012.
