HPC DIRECTOR


Onwards to exascale


Dona Crawford, associate director for computation at Lawrence Livermore National Laboratory, is responsible for the laboratory’s computing infrastructure and the HPC systems that serve research programmes from national security to materials science, energy, and climate change


Computing is in Lawrence Livermore National Laboratory's DNA. Ever since the laboratory's founding by Edward Teller and Ernest Lawrence, computing has been central to the research and development that put Lawrence Livermore on the global map. Even before the laboratory officially opened its doors in September 1952, Teller and Lawrence ordered a Remington-Rand Universal Automatic Computer, or Univac-1, which, with its 1,000-word memory and 5,600 vacuum tubes, was the HPC heavyweight of its time. Teller and Lawrence understood that computing was the key to transforming the lab from an obscure second nuclear weapons lab, still in the shadow of the better-known Lawrence Berkeley Lab, into the world-class research and development institution it is today.

Scientific computing has historically been a principal integrating element of the team science that defines Lawrence Livermore's modus operandi. The laboratory's leadership has always taken the long view of HPC, anticipating the need for the ever more powerful computing systems required to fulfil missions in national security as well as the basic science underpinning all lab research and development. Research scientists' needs for computational capability to solve intractable problems have always exceeded the computing power available. Consequently, developing new computing resources has long been a part of the national labs' charter.

Recognising this fact of research and development, the US Department of Energy (DOE) has historically played a role in developing new computing capabilities by working in partnership with industry (a role played today by DOE's quasi-autonomous National Nuclear Security Administration). In my time at Sandia and Livermore, the need for these capabilities has been driven by the ban on testing nuclear weapons, something we all agree is a good idea. In the absence of testing, the computational muscle required to address the technical and scientific challenges of ensuring the safety and reliability of the nation's shrinking nuclear arsenal grows as weapons age decades beyond their intended service life.


HPC has played a vital role in making it possible to reduce the US nuclear arsenal to its smallest size since the administration of President Dwight Eisenhower. The process of reducing the stockpile further, and eventually eliminating nuclear weapons as President Barack Obama envisions, will require increasingly powerful HPC systems – capabilities that can also be leveraged to address other national challenges, such as energy and climate change.

We now know that petaflop/s (quadrillions of floating point operations per second) systems will not be sufficient to meet the needs of the weapons programme or the other global challenges the lab has taken on in security, energy, environment, and climate change. For example, exascale computing will be required to better understand nuclear boost and combustion (how to reduce emissions), and to advance fission energy, fusion energy, power grid modelling, carbon sequestration, and climate modelling.

That is why, even though we have yet to install our first petaflop/s system on the floor of Livermore's simulation facility, we are already starting to plan for exascale computing. Overcoming the many technical hurdles to exascale – including power requirements, system reliability, and writing applications for billion-way parallelism – and developing the needed tools and technologies will require years. Current forecasts put the deployment of the first exascale systems in the 2018-20 timeframe. But that can only happen if we start now.
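To put 'billion-way parallelism' in perspective, here is a minimal back-of-the-envelope sketch in Python. The clock rate and flops-per-cycle figures are illustrative assumptions, not the specifications of any planned machine, but they show why sustaining 10^18 floating point operations per second means keeping hundreds of millions to a billion operations in flight at once:

```python
# Rough arithmetic behind "billion-way parallelism" at exascale.
# The per-core figures below are illustrative assumptions.

EXAFLOPS = 1e18       # target: 10^18 floating point operations per second
PETAFLOPS = 1e15      # 10^15 flop/s, for comparison

clock_hz = 1e9        # assume ~1 GHz per core; power limits rule out much higher
flops_per_cycle = 2   # assume two floating point operations per core per cycle

per_core_flops = clock_hz * flops_per_cycle    # ~2 gigaflop/s per core
concurrency = EXAFLOPS / per_core_flops        # concurrent operations required

print(f"Concurrent operations needed: {concurrency:.1e}")               # ~5.0e+08
print(f"Factor over a petaflop/s system: {EXAFLOPS / PETAFLOPS:,.0f}x") # 1,000x
```

Because power constraints pin per-core clock rates, that factor of a thousand over petaflop/s systems has to come almost entirely from concurrency, which is why application parallelism sits alongside power and reliability on the list of hurdles.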


DOE has launched a planning effort for an exascale initiative to lay the groundwork for next-generation HPC. Leaders in HPC from our lab, along with representatives from other labs, including Sandia, Los Alamos, Argonne, Oak Ridge, and Lawrence Berkeley, meet on a regular basis to discuss the path


Things have moved on a long way from the Univac-1





