category-four with a good degree of accuracy. The new scheme dissipates less energy, enabling the model to retain more vertical motion and more of the extreme winds, and to simulate category-four hurricanes and a few of the category-five storms. In terms of robustness, one advantage that the


UK has is that the same underlying model, the ‘Unified Model’, is used for weather prediction, seasonal prediction and for climate. Vidale explains that, because the code is shared, the climate model is tested several times a day, every day, against real data. ‘Weather forecasts are assessed against real observations every six hours, which enables errors in the code to be picked up quickly. Other centres around the world take a different approach where the climate and weather models differ but are compared in terms of climatology and statistics. These are not verified on a daily basis, however.’
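As a rough illustration of what that six-hourly assessment involves, the Python sketch below scores a forecast against matched observations using two basic measures, bias and root-mean-square error. The data are synthetic stand-ins and the scores deliberately simple; an operational verification suite applies far more elaborate quality control and scoring than this.

```python
import numpy as np

def verification_scores(forecast, observations):
    """Return simple verification statistics (bias and RMSE) for one cycle."""
    error = forecast - observations
    return {"bias": float(error.mean()),
            "rmse": float(np.sqrt((error ** 2).mean()))}

# Illustrative 6-hourly cycle: score each new forecast as matched
# observations arrive. Both arrays are synthetic stand-ins, not real data.
rng = np.random.default_rng(0)
observations = 280.0 + 5.0 * rng.standard_normal(500)   # e.g. 2 m temperature (K)
for cycle_hour in (0, 6, 12, 18):
    forecast = observations + rng.normal(0.2, 1.0, observations.size)
    scores = verification_scores(forecast, observations)
    print(f"{cycle_hour:02d} UTC  bias={scores['bias']:+.2f} K  rmse={scores['rmse']:.2f} K")
```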


Aerosol particles

Elsewhere in the UK, at the Institute for Climate and Atmospheric Science at the University of Leeds, Kirsty Pringle is working in a research group that has developed a global model of aerosol particles. Called GLOMAP, it aims to treat the processes that control and shape aerosol distribution. She explained that, by using GLOMAP, they can understand which processes are important and need to be included in climate models. GLOMAP has also been used to develop a complex aerosol scheme included in the UK Chemistry and Aerosol (UKCA) model, a community climate model that is run within the Met Office’s own system. Pringle said that, unlike long-lived greenhouse


gases, which are distributed quite uniformly around the globe, the distribution of aerosol particles is heterogeneous, with the highest concentrations close to natural source regions, such as deserts or oceans that experience high wind speeds – or close to emission sources such as power plants or road transport. Aerosol particles also undergo


Keeping things in house


‘The dynamical cores that we use are all developed in the public service, worldwide. We could think of using off-the-shelf solutions, but most of the commercial packages on the market are for fluid dynamics – simulating the flow of air around cars, for example – and while it would be possible to use them, the full weather and climate models may become incredibly expensive once we couple our physics, unless proper investment is made in joint development. Most modern climate models have very interconnected physics and dynamics and that line




between the two becomes finer every day. If we were to take a dynamical core from a commercial vendor we would need to make some extensive changes, which is why there is still a tendency to focus on in-house development. This may yet change in the future, depending on the success or failure of next-generation dynamical cores.’


Pier L. Vidale, Professor at the University of Reading, UK, and a National Centre for Atmospheric Science (NCAS) senior scientist


different processes in the atmosphere, which can change their properties and the way they interact with solar radiation. ‘Including all these different processes in climate models is extremely challenging, both because it is computationally expensive, and because many of the processes are still not understood well enough to be parameterised in models,’ Pringle remarked. Another area of her work, she says, has been


‘model uncertainty’, with a focus on atmospheric aerosols: ‘We know model simulations are an approximation of the real system so, even if a model is able to simulate realistic results, there will be an uncertainty associated with it. In some situations the uncertainty may not be important,


but in others it may be significant. We can only know for sure by identifying and quantifying it.’ According to Pringle, the climate modelling


community has started to address this issue. One approach is to invite every modelling group in the world to perform a fixed set of experiments with their own model. These model inter-comparison projects (MIPs) allow researchers to examine the extent to which different models produce similar results; where the models diverge, the uncertainty is considerable, and those divergences help researchers target future research.
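To make the idea concrete, the short Python sketch below summarises a hypothetical MIP experiment by its multi-model mean and inter-model spread. The model names and values are invented for illustration and are not taken from any real inter-comparison.

```python
import numpy as np

# Hypothetical results of one fixed MIP experiment (e.g. a global-mean
# temperature response in K) from four modelling groups. The names and
# numbers are invented purely to illustrate how spread is summarised.
results = {
    "model_A": 2.1,
    "model_B": 2.9,
    "model_C": 2.4,
    "model_D": 3.3,
}

values = np.array(list(results.values()))
print(f"multi-model mean        : {values.mean():.2f} K")
print(f"inter-model spread (std): {values.std(ddof=1):.2f} K")
# A spread that is large relative to the mean marks a diagnostic whose
# controlling processes are poorly constrained, and so a target for research.
```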


Uncertainty analysis can, added Pringle, also be done with a single model; a well-known example is the ClimatePrediction.net project run by the University of Oxford. The project invited members of the public to run a climate model on home PCs. This allowed the researchers to gather


results from thousands of simulations, each of which used a slightly different set-up of the same climate model. Researchers can then examine how sensitive the results are to the setting of individual uncertain parameters in the model.
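A perturbed-parameter ensemble of this kind starts from a designed sample of the uncertain parameters. The sketch below builds a simple Latin hypercube design in Python over three hypothetical parameter ranges; real ensembles such as ClimatePrediction.net’s perturb far more parameters, and the names and ranges here are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical uncertain parameters and plausible ranges (multipliers on
# default values); real perturbed-parameter ensembles vary many more.
param_ranges = {
    "sea_spray_scale":       (0.5, 2.0),
    "nucleation_rate_scale": (0.1, 10.0),
    "dry_deposition_scale":  (0.5, 2.0),
}

def latin_hypercube(n_runs, ranges, seed=0):
    """Build a simple Latin hypercube design over the given parameter ranges."""
    rng = np.random.default_rng(seed)
    columns = []
    for _ in ranges:
        strata = rng.permutation(n_runs)            # one stratum per run
        columns.append((strata + rng.random(n_runs)) / n_runs)
    unit = np.column_stack(columns)                  # values in [0, 1)
    lows = np.array([lo for lo, _ in ranges.values()])
    highs = np.array([hi for _, hi in ranges.values()])
    return lows + unit * (highs - lows)

design = latin_hypercube(200, param_ranges)
for run_id, row in enumerate(design[:3]):            # first few of 200 runs
    print(run_id, dict(zip(param_ranges, np.round(row, 3))))
```

Each row of the design would then configure one model run, so the ensemble covers the plausible parameter space as evenly as the run budget allows.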


‘This increased focus on understanding uncertainty comes simply from the fact that the climate is a very complex system, so simplifications must be made when designing climate models that are computationally efficient,’ Pringle commented. ‘Models must be efficient if we are to perform hundreds of years of simulations. Quantifying model uncertainty is also important if we are to communicate our findings to policy-makers and the public.’ Multiple simulations were run with


GLOMAP, each with a slightly different setting of uncertain parameters. ‘A statistically robust estimate of model uncertainty requires many thousands of simulations and, unless you use a distributed computing approach (like climateprediction.net does), most groups simply don’t have the computer resources to complete all the simulations,’ said Pringle. To avoid this, the group ran a few hundred simulations (rather than thousands) and used a statistical emulator to interpret the results. The emulator is a statistical package that


‘learns’ from the output of the computer model and can be used to interpolate from the hundreds of runs performed to the thousands of runs needed. ‘The use of statistical emulation in climate modelling is quite new, but is a really useful tool for interpolating model results,’ Pringle explained. ‘In addition to the model simulations used to “train” the emulator we do extra model simulations and use these to check that the emulator is working well.’ Using this approach, the group identified


which of the uncertain parameters within the model had the greatest effect on the aerosol distribution so that, in the future, they can work on trying to constrain the values of these parameters. Pringle noted that a number of technical difficulties were encountered: ‘Although we routinely perform complex model simulations, we had never previously performed so many simulations at once. The emulator cuts down on the number of simulations required, but we still needed more than 200 simulations.’ GLOMAP runs on 16 CPUs and takes about a day to run a full model year.
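Pringle’s workflow, training an emulator on a couple of hundred runs, validating it on extra runs, then using it in place of thousands of further simulations, can be sketched with an off-the-shelf Gaussian-process regressor. The example below uses scikit-learn as a stand-in for whatever statistical package the Leeds group actually used; the training data are synthetic, and the one-at-a-time sensitivity sweep at the end is a crude proxy for the variance-based analyses used in published studies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

# Stand-in training set: ~200 parameter settings (scaled to [0, 1]) and the
# model output of interest for each run (e.g. a regional-mean aerosol
# concentration). In practice X comes from the designed ensemble and y from
# the GLOMAP simulations themselves.
X_train = rng.random((200, 3))
y_train = 2.0 * X_train[:, 0] + 0.3 * X_train[:, 1] + rng.normal(0.0, 0.05, 200)

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emulator.fit(X_train, y_train)

# Extra held-out runs check that the emulator is working well, mirroring the
# validation simulations Pringle describes.
X_check = rng.random((20, 3))
y_check = 2.0 * X_check[:, 0] + 0.3 * X_check[:, 1]
rmse = np.sqrt(np.mean((emulator.predict(X_check) - y_check) ** 2))
print(f"validation RMSE: {rmse:.3f}")

# The cheap emulator can now stand in for thousands of further model runs.
# A crude one-at-a-time sweep shows which parameter moves the output most.
base = np.full(3, 0.5)
for i in range(3):
    sweep = np.tile(base, (50, 1))
    sweep[:, i] = np.linspace(0.0, 1.0, 50)
    response = emulator.predict(sweep)
    print(f"parameter {i}: response range {response.max() - response.min():.2f}")
```

The pay-off is that each emulator prediction costs a fraction of a second, whereas each real GLOMAP run ties up 16 CPUs for about a day per model year.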


The role of HPC

According to Per Nyberg, director of business development at Cray, high-performance computing (HPC) has become a fundamental part of numerical weather prediction and ‘in the age of commodity microprocessors, these models need to be able to use an increasing number of processors in parallel.’ Nyberg



