climate modelling
56 estimates that are closer to what actually happened; the mean of these is our best estimate of what was happening at that time. We arrived at 56 for two reasons: firstly, it needed to be a multiple of eight as we were running on the IBM system at the US National Energy Research Scientific Computing Center (NERSC) that had eight-core nodes; and secondly, our tests show that more than 50 guesses are needed, so 56 was the next step up. When we moved onto the dual- and quad-core Crays it remained a convenient number.

Producing global estimates of the weather wouldn't have been possible without supercomputing, as it requires thousands of processors to combine the observations with high-resolution numerical weather prediction model guesses. By using Jaguar, a Cray XT5 supercomputer at the US National Center for Computational Sciences at Oak Ridge National Laboratory, and NERSC's Franklin, a Cray XT4, we were able to produce the entire dataset in a year and marry the algorithmic advances with the computing advances and with the advances made in the recovery of historical observations.

The numerical weather prediction model itself was made by the US National Centers for Environmental Prediction (NCEP), which is part of the National Oceanic and Atmospheric Administration (NOAA), and is called the Global Forecast System. It is an implementation of the equations of the complete weather variability around the globe: essentially motion, heat, radiation and water. We use it at a lower horizontal and vertical resolution than they would use it every day. At the time we used an experimental version of their code because it was the first one that would allow us to incorporate the radiative effects of time-varying carbon dioxide concentrations, volcanic aerosols and time-varying solar variability. Our algorithm was developed by NOAA's Jeff Whitaker and Tom Hamill. It is a particular form of the ensemble Kalman filter, and we worked to improve it specifically for this historical weather reconstruction.

We have two different goals. The first is to go further back in time – the 20th Century Reanalysis Project went back to 1871, and in the next version we're hoping to go to 1850. This effort is part of a much larger international project called Atmospheric Circulation Reconstructions over the Earth (ACRE). The project manager is Rob Allan at the UK Met Office Hadley Centre, and both he and I have a dream that by using this technique we can get back to the 18th century. We haven't shown that this is possible yet, but colleagues involved in the project are finding more meteorological observations than we had originally envisioned, so that suggests we may be able to do it in the future.

42 SCIENTIFIC COMPUTING WORLD

Michael Wehner, senior climate scientist at the Lawrence Berkeley National Laboratory, USA

The main focuses of my work are extreme weather and changing climates. The move to higher resolutions is having an interesting impact on the simulation of tropical cyclones and hurricanes, and horizontal resolutions are approaching 25 kilometres. Resolutions of this size take a lot of time on the grid, but there are a number of things they do better than the normal production models, mostly related to precipitation. Once you start reaching 50 kilometres, the storms look realistic to the extent that you could be looking at model output and looking at observations and not know which is which. The same cannot be said for the standard 300km resolutions. There's a much higher level of realism in these latest models and, in my view, the most important direction our science is heading in is higher resolutions that capture these details of the water cycle. In order to do that water cycle correctly, however, we need to replace our approximate treatments of clouds – no cloud is 25km across, so we need to push those models towards a high resolution of 1km.

Resolution is the key. From a software development point of view the question is how we discretise the globe; in other words, how we represent the equations of motion on a sphere. When looking at a globe you see the longitude and latitude lines and notice that the lines of longitude are closer at the poles than they are at the equator. When creating a model of the globe, we have to deal with the fact that cells get skinnier as they get closer to the poles. Around 40 years ago a technique was introduced whereby people would map the equations of motion onto spectral transform functions, eliminating that issue. There are scaling issues with that approach, but software engineers have been able to push these kinds of codes to pretty high resolutions. The group that currently runs the highest resolutions in global weather models is the European Centre for Medium-Range Weather Forecasts in Reading, England.

One question I am addressing is how the hurricane cycle is going to change as the climate warms. There's a strong consensus in the community that the most intense category four and five hurricanes will increase in number and severity. A number of factors influence hurricane formation, such as the temperature of the water and the presence of ideal ambient conditions like low wind shear. If it isn't very low, the hurricane will tilt and essentially eat itself up. We don't really know how the statistics of wind shear will change, but we do know these conditions will continue to present themselves. Of course, we don't agree on everything, and while a lot of the models suggest the number of category one and two hurricanes will decrease, the consensus on that is much weaker. The magnitudes involved are pretty uncertain, which is why we need high-performance computing models.

One thing we have learnt is that no single model gives the right answer. In fact, they all give wrong answers, but do so in different ways that allow you to distil what the right answer should be. Each model has its own unique set of defects and strengths. Models are especially good at telling us something we already have some theoretical conceptions about. Once in a while models tell you something you don't know, and that's difficult because it means you then have to explain it. The issue with climate models is that they are easy to abuse. You can push them into regimes they weren't designed for, and interpret results from them that they weren't really designed to produce. And if you don't have a good theoretical understanding of what you're trying to explain, it's difficult to know whether the model is accurate or not.
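The ensemble Kalman filter behind the reanalysis can be sketched in a few lines. This is a minimal, illustrative stochastic-EnKF update on a toy state vector – not the Whitaker–Hamill algorithm itself – and the dimensions, error values and variable names are invented for the example; only the 56-member ensemble size comes from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_obs, n_ens = 100, 10, 56   # 56 members, as in the reanalysis

# Toy prior ensemble: each column is one model "guess" of the state.
ensemble = rng.normal(size=(n_state, n_ens))

# Observation operator: here we simply observe the first n_obs elements.
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs)] = 1.0

obs = rng.normal(size=n_obs)          # synthetic observations
obs_err = 0.5                         # observation error std. dev.
R = (obs_err ** 2) * np.eye(n_obs)

# Ensemble statistics.
mean = ensemble.mean(axis=1, keepdims=True)
anomalies = ensemble - mean
prior_spread = (H @ anomalies).std()  # spread in observed space, before

Pb_Ht = anomalies @ (H @ anomalies).T / (n_ens - 1)          # cov(x, Hx)
S = (H @ anomalies) @ (H @ anomalies).T / (n_ens - 1) + R

K = Pb_Ht @ np.linalg.inv(S)          # Kalman gain

# Stochastic update: each member assimilates perturbed observations.
for i in range(n_ens):
    perturbed = obs + rng.normal(scale=obs_err, size=n_obs)
    ensemble[:, i] += K @ (perturbed - H @ ensemble[:, i])

analysis_mean = ensemble.mean(axis=1)      # the "most likely" state
post_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
post_spread = (H @ post_anom).std()   # spread in observed space, after
```

The update pulls every member toward the observations, so the ensemble spread shrinks where data exist and the ensemble mean becomes the best estimate – which is why more members (the "more than 50 guesses" above) give a more reliable mean.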
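The pole problem described above – latitude-longitude cells getting skinnier towards the poles – is easy to quantify: the east-west width of a cell shrinks with the cosine of latitude. A short illustration (the one-degree grid spacing and the mean Earth radius are just example values):

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius in km

def cell_width_km(lat_deg: float, dlon_deg: float = 1.0) -> float:
    """East-west width of a grid cell of dlon_deg degrees of longitude
    at a given latitude, on a spherical Earth."""
    dlon_rad = math.radians(dlon_deg)
    return R_EARTH_KM * math.cos(math.radians(lat_deg)) * dlon_rad

for lat in (0, 45, 80, 89):
    print(f"{lat:2d} deg: {cell_width_km(lat):7.1f} km")
```

A one-degree cell is about 111 km wide at the equator but under 2 km wide at 89 degrees latitude – the collapsing cell width that spectral transform methods were designed to avoid.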
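The multi-model point – every model is wrong in its own way, but together they let you distil an answer – is often made concrete as a multi-model mean with an inter-model spread. A toy sketch; the model names and values below are entirely invented for illustration, not real projections:

```python
import statistics

# Hypothetical projections of some climate quantity from several models.
projections = {
    "model_a": 2.1,
    "model_b": 3.4,
    "model_c": 2.8,
    "model_d": 2.5,
}

values = list(projections.values())
best_estimate = statistics.mean(values)   # multi-model mean
spread = statistics.stdev(values)         # inter-model spread

print(f"best estimate: {best_estimate:.2f} +/- {spread:.2f}")
```

The spread is as informative as the mean: a large spread flags exactly the "pretty uncertain magnitudes" the article mentions.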
www.scientific-computing.com