HPC 2013-14 | Education


the hardware until they actually arrived in Leipzig, but during the competition, she said, ‘I learnt a lot about different ways to work with power management’.

Alejandra Bolanos-Murillo, who is in her final year studying computer engineering, is no stranger to high technology. ‘My mother studied computer science and engineering, and encouraged me,’ she said. She wants to go on to study for a master’s degree. Although the Costa Rican team was one of three winners in the competition, winning or losing did not really matter, she said. Attending the International Supercomputing Conference (ISC’13) ‘is a different experience. We do not have a lot of supercomputing in Costa Rica and you don’t get a chance to see a big conference and exhibition’. For both women, it was the first time they had travelled to Europe – indeed, for Moya-Ramirez it was the first time she had flown in an aeroplane.


The US Supercomputing Conference and Exhibition, held in Denver, Colorado, in 2013, also offers a variety of programmes for undergraduate and graduate students. Student volunteers help with the administration of the conference, while also getting time to attend sessions in the technical programme.

“Competitions in the USA, Asia, and Europe exist to stimulate interest among the younger generation in the challenges and rewards of a career in high-performance computing”

The US event originated the idea of the student cluster challenge and, in 2013, the first-ever team from Australia – travelling nearly 9,000 miles from Perth – is competing. In all, eight student teams, from universities in the United States, Germany, China, and Australia, were selected to compete: IVEC, a joint venture between CSIRO, Curtin University, Edith Cowan University, Murdoch University, and the University of Western Australia (Australia); Massachusetts Green High Performance Computing Center (USA); National University of Defense Technology (China); The University of Colorado, Boulder (USA); The University of the Pacific (USA); The University of Tennessee, Knoxville (USA); The University of Texas, Austin (USA); and the Friedrich-Alexander University of Erlangen-Nuremberg (Germany).


The US competition is rather harsher in that it is a real-time, non-stop(!), 48-hour challenge to build and operate a small cluster on the SC13 exhibit floor, using commercially available components that do not exceed a 26-amp limit, and to work with application experts to tune and run the competition codes. SC13 sees a new addition to the competition – the Commodity Cluster track – in which teams bring commercially available hardware costing less than $2,500 and with a 15-amp limit.

As Alejandra Bolanos-Murillo said, ultimately who wins the challenge is not the important thing – the competitions in the USA, Asia, and Europe exist to stimulate interest among the younger generation in the challenges and rewards of a career in high-performance computing. But securing a cohort of experts in sufficient numbers to tackle the problems of HPC will require more than a few competitions. In Europe at least, this issue has caught the attention of major players in the industry, who have come together as the European Technology Platform for HPC (ETP4HPC), and high on their agenda is ‘Embracing parallelism – high performance computer science education for the future’, as Malcolm Muggeridge explains in the following article.


Embracing parallelism

Malcolm Muggeridge offers his view of HPC science education


I doubt there is anyone in the world today who doesn’t agree that education and training are fundamental needs of any society that wishes to prosper and grow. The competitive world of HPC and supercomputers is no different; in fact it is, by definition, always going to be the ‘new ground’ which, one day, more general computing and ICT will be using. In these times of constantly accelerating technological growth, we can see how the niche tools and techniques required to make and to run today’s and tomorrow’s supercomputers will move quickly to reach the wider markets. The lines between ‘cloud’, ‘big data’, ‘HPC’, ‘SaaS’ and many other variations of powerful computing are quickly becoming blurred.

In the future, the possibility that innovations in hardware will make exascale-level HPC performance possible around 2020 has a ‘ring of certainty’ about it – although precisely what these machines will be like and, most importantly, how we will develop them and make sure they can be effectively used, is not so clear.


What will this mean to the rest of the computing world? With such scaling of power and performance, we could have a teraflop of processing capability – today’s definition of HPC – in a laptop. But how can such power be used? The common theme between these worlds is parallelism in all aspects: be it in hardware, in software, in communications, or from algorithmic methods to distributed storage. The world must embrace doing everything in parallel.

So how do we ‘embrace parallelism’? Firstly, we need to create the technology: not just the processors, but the compilers, middleware, debug tools, communication protocols and networks, file systems, storage, and the application code itself, all working together to deliver results at exascale speeds, while not wasting the huge electrical power demanded by such hardware through inefficient operation. Then we need to use it effectively to progress and grow our society through solutions to our industrial, scientific, and societal challenges.

The ETP4HPC (www.etp4hpc.eu) is an

