INSIDE VIEW


Supercomputing on the cusp of change (again)


Barry V. Hess, general chair of the SC10 conference in New Orleans, assesses the issues driving the next evolution of supercomputing and supercomputers


This year New Orleans will host the 23rd annual Supercomputing Conference, and it’s a great match. New Orleans is a city in the midst of reinventing itself, even as it continues to celebrate the traditions that have made it famous. And, today, HPC finds itself inventing the present in the era of trans-petaflops computing – even as we move towards an exascale future.

Reinvention and change have always been at the core of HPC. As new technologies enter the computing landscape, our community adopts and adapts what it finds on the quest to expand the capabilities of the tools of discovery and exploration that we provide to enable the ‘makers’ – the scientists, engineers, and thinkers – to make the world a better, healthier place.


Science in the driver’s seat
No single scientific issue is more on the minds of the public, policy-makers, and scientists than climate change. The key to a sustainable future for our planet rests in developing a common understanding of the current state of our global climate, and in establishing agreement on policies to change the way we interact with the planet. The HPC community has a central role in addressing these challenges: because of its global scale and centuries-long time scales, climate science is of necessity a largely computational pursuit, and one that requires the largest computers we can build. SC10 acknowledges the role of our community in this challenge by focusing on climate science as one of three special thrust areas at the conference. We will look at all aspects of climate simulation applications and techniques – including applied mathematics, algorithms, verification and validation, statistics, computer science, and atmospheric chemistry – as well as environmental and societal impacts.




A data explosion
The size and scale of the climate science we need to pursue to address our current challenges is enormous, and these simulations generate extremely large-scale data sets that must be analysed. However, this observation is not unique to climate science, and the dramatic growth in data from many disciplines led us to highlight data-intensive computing as the second of our thrust areas. In some cases the data sets are so large, and the processing requirements so demanding, that processing them has become a specialised computing activity in its own right.

The stakeholders in large data communities are faced with significant obstacles in managing, processing, and acting on the data they collect. In some cases there are common technologies – specialised data-streaming architectures or lightning-fast large-scale storage, for example – that will serve across these communities. In other cases, particularly with respect to data sharing, team collaboration, and data integrity, the solutions will be domain-dependent. Big data has its roots in the supercomputing community, and SC10 will bring many of the large data communities together to talk about their challenges and brainstorm new solutions.


Everything old is new again
As I reflect on my career in supercomputing, I see a definite periodicity to the evolution of technologies: ideas fade from relevance as new technologies come on the scene, only to find themselves repurposed a decade later in response to opportunities presented by even newer ideas. Today the idea of heterogeneous computing is once again shaping the present and future of HPC, and so we’ve selected this area as our third conference thrust. While adding GPUs, FPGAs, and other accelerators into supercomputing systems can significantly improve performance, it comes at the expense of added complexity: the resulting systems are more difficult to design, build, and manage. Even more importantly, heterogeneous systems are more difficult to program. However, specialisation through heterogeneity can dramatically lower the energy expended per unit of computation, and this trait alone means that some form of acceleration is likely to be central in the design of the very high-end supercomputers deployed over the next decade.

The HPC community is no stranger to change but, at this point in time, HPC faces a level of disruption like nothing we have ever faced before. Big science and problems with global impact are pushing tremendous growth in the systems we deploy, even as we are increasingly constrained by energy and usability limits and the need for new kinds of computing to deal with the data we are collecting. I hope it is true that great leaders are born during times of great change: we are going to need them.



