Climate simulations


in the UK. In an unprecedented allocation, we were awarded everything we asked for, which is the largest number of core hours ever granted in a single supercomputer allocation worldwide. What this means is that we are now able to do work that will most likely not be possible in the UK for another five years. Hence we can also preview the future scientific possibilities afforded by improved modelling capability and larger computational resources.

I believe our bid was successful for a number of reasons – a fundamental one is that there aren't many groups that can effectively exploit these supercomputers. Our application is complete in that it makes use of all aspects of the machine, rather than just the computational power, interconnect speed or visualisation capabilities; typical climate models use all of these resources very intensely. The difficulty we had was that we needed to optimise our application in a very short amount of time. When we moved to Japan, it took nearly two years to adapt the UK climate model to work on that system, and there were many technical hurdles that we had to overcome. The Upscale Project has a duration of one year and, to use up all the allocated time, we had to port our application within one month. With much help from our technical support teams we did it, but it was an incredibly challenging thing to do – especially given that we were deploying in another country, with technicians and specialists we had never worked with before. Fortunately, our model has developed since our time in Japan, which makes such work more manageable.


WE WERE AWARDED EVERYTHING WE ASKED FOR, WHICH IS THE LARGEST NUMBER OF CORE HOURS EVER GRANTED IN A SINGLE SUPERCOMPUTER ALLOCATION WORLDWIDE

The project is now underway and we are running studies not only of how the climate system works, but also of hypothetical future climates in which emissions are very high and the atmospheric composition has been radically altered. In order to get the level of detail we want, we will have to increase resolution even beyond what has been possible in Upscale. We use parameterisations, which are ways to bridge the scale gap between the phenomena we are trying to represent and the computational grid that we're able to apply to the problem, and if in the future we can remove the need for some of these parameterisations, the fidelity of the models will be higher. This would require the use of hundreds of thousands or even millions of cores concurrently, which presents a vast technical challenge.
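To make the idea concrete, the snippet below sketches one classic, textbook-style parameterisation: a Sundqvist-type diagnostic cloud-fraction scheme, which estimates how cloudy a grid cell is from its mean relative humidity, even though the individual clouds are far smaller than the cell. It is purely illustrative and is not the scheme used in the model described above; the critical humidity threshold is an assumed value chosen for the example.

import numpy as np

def sundqvist_cloud_fraction(rh, rh_crit=0.8):
    """Diagnose sub-grid cloud fraction from a grid cell's mean relative humidity.

    A Sundqvist-style scheme: cloud is diagnosed once the cell-mean relative
    humidity rh exceeds a critical value rh_crit, because parts of the cell
    are moister than the mean. rh_crit is exactly the kind of tunable
    parameter that climate modellers have to calibrate.
    """
    rh = np.clip(np.asarray(rh, dtype=float), 0.0, 1.0)
    fraction = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(fraction, 0.0, 1.0)

# A cell with 90 per cent mean relative humidity is diagnosed as roughly
# 29 per cent cloudy, even though the cell as a whole is not saturated.
print(sundqvist_cloud_fraction(0.90))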


For some, it will require a recoding of all the models, which, while feasible, is a significant effort. A further issue is that it's hard to predict what the computing architectures will look like in the future. In many ways, everyone is currently trying to develop codes that will work on general architectures. The code may not be as efficient, but at least our model development and operation will be more sustainable.


Robert Jacob, Argonne computational climate scientist, US


Led by Dr Warren Washington, a senior scientist at the US National Center for Atmospheric Research (NCAR), the Climate Science Computational End Station (CCES) project is a collaborative effort between the US Department of Energy (DOE), NASA, NCAR and a selection of universities. With the goal of improving climate models on several different fronts, such as increasing their accuracy and fidelity, the CCES project has requested a large block of time on the DOE's computing facilities in order to accomplish this work. Here at Argonne National Laboratory, our specific focus for the project is on the issues involved in running climate models at a higher resolution.

Improving the resolution is a very important area of research, because if we are to get predictions on a finer scale – at county level, for example – then we have to increase the number of points used to cover the globe. There are, however, many factors that need to be addressed in getting to that stage: models become far more expensive to run, greater computing power is needed, and software and performance issues have to be dealt with when simulations are run on tens of thousands of processors. Many climate models also have parameters that must be tuned in order to make the resulting simulation look like the observed climate, and these tuning exercises are very difficult to do at current resolutions because they take up a considerable amount of computer and researcher time.


We are solving these problems – albeit very slowly. The easiest starting point is to look at the raw performance of the model. The one figure that all climate modellers care about is the number of simulated years that can be done in a day, and there is a long-standing rule of thumb in the climate community that says if you can get five years per day, you can progress the research at a comfortable pace. Of course, it's great if it can be done faster, but we are limited in the amount of science we can do if the model is slower than five years per day.
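To put that rule of thumb in concrete terms (the timings below are invented for illustration, not measurements from CCES), the arithmetic is simply how many simulated days fit into one wallclock day.

def simulated_years_per_day(wallclock_seconds_per_model_day):
    """Convert the wallclock cost of one simulated day into simulated years per wallclock day."""
    return (86400.0 / wallclock_seconds_per_model_day) / 365.0

# Assumed example: if one simulated day costs 45 wallclock seconds, throughput is
# 86400 / 45 / 365, or about 5.3 simulated years per day, just above the
# five-years-per-day threshold; a 100-year experiment then occupies about 19 days.
throughput = simulated_years_per_day(45.0)
print(round(throughput, 1), "simulated years per wallclock day")
print(round(100.0 / throughput), "wallclock days for a 100-year run")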


OUR SPECIFIC FOCUS FOR THE PROJECT IS ON THE ISSUES INVOLVED IN RUNNING CLIMATE MODELS AT A HIGHER RESOLUTION


When attempting to increase the performance of the models, we can judge how well we're doing by running very short simulations which, using the statistics and information gained from the computer, let us know where there may be room for improvement. By putting timers around sections of code that we think may be slowing the models down, we can figure out where exactly the problem might be and then possibly turn our attention to developing a faster algorithm. That is a relatively straightforward process, but it still takes a large amount of time, given that it needs to be done in many short runs. Once we're satisfied with the raw performance of the model, we can begin to think about the parameters.
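The timing step itself can be very simple; the sketch below is a generic illustration in Python of accumulating wallclock time per named code section over a short run. The real model is instrumented in its own source code, and the section names here are invented for the example.

import time
from collections import defaultdict
from contextlib import contextmanager

# Accumulated wallclock time for each named section of the code.
section_totals = defaultdict(float)

@contextmanager
def timer(section_name):
    """Add the elapsed wallclock time of the enclosed block to its section's total."""
    start = time.perf_counter()
    try:
        yield
    finally:
        section_totals[section_name] += time.perf_counter() - start

# Hypothetical short test run: wrap the suspect sections, then rank them by cost
# to see where a faster algorithm would pay off.
for step in range(100):
    with timer("dynamics"):
        pass  # placeholder for advancing the resolved flow
    with timer("physics"):
        pass  # placeholder for the parameterised processes

for name, seconds in sorted(section_totals.items(), key=lambda item: -item[1]):
    print(name, round(seconds, 4), "s")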


Things are going quite slowly at the moment because, while Intrepid, the BlueGene/P supercomputer at Argonne, has 200,000 cores, we need almost all of them in a single run at high resolution. Tuning the parameters has become a process of trial and error, but with Mira, the new IBM BlueGene/Q supercomputer, coming online, we can do several runs at the same time, which means we'll arrive at a good solution much faster.
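The benefit of running several candidate settings at once can be sketched as a simple perturbed-parameter sweep. Everything in the snippet below is assumed for illustration: the parameter names, the candidate values and the submit_run helper, which stands in for whatever batch-submission mechanism the facility actually provides.

from itertools import product

def submit_run(run_id, params):
    """Hypothetical stand-in for submitting one model configuration as a batch job."""
    print("submitting", run_id, params)

# Invented tuning parameters and candidate values.
candidates = {
    "entrainment_rate": [0.5, 1.0, 1.5],
    "rh_crit": [0.75, 0.80, 0.85],
}

# On a machine with room for several high-resolution runs at once, all nine
# combinations go into the queue together rather than one after another.
for i, values in enumerate(product(*candidates.values())):
    submit_run("tune%02d" % i, dict(zip(candidates.keys(), values)))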



