climate modelling
believes that the critical element here is scalability because, unless the system is able to scale to use tens of thousands of cores in parallel, complex models cannot be run within their operational windows. ‘All of the major modelling groups around the world have efforts underway to address scalability from an algorithmic perspective as they understand that the world is moving towards millions of cores, and that the algorithms need to keep pace,’ he said. So far the results are very encouraging and there have been demonstrations of models scaling up into hundreds of thousands of cores. In the UK, there is an experiment looking at
automated code generation that Pier Luigi Vidale believes will improve portability by enabling scientists to write their algorithms at a very high level. Essentially, the machine code would be generated by a tool, rather than by the scientist, and then submitted to the computational platform. This represents a fundamental change, as it would isolate the scientist from the underlying hardware. However, it would have the benefit of eliminating the need to readapt and rewrite code every time a new computer becomes available. In terms of the hardware, one challenge within
the climate community is the broad adoption of accelerators. ‘Most weather centres use regular x86 processors, but there is a lot of discussion surrounding GPGPU (general-purpose
computation on GPU) and Intel Xeon Phi,’ said Nyberg. The question, he explained, of what a processor will look like five years from now is an open one, and the modelling community must factor in this uncertainty when investing expertise in code development. He believes that the best option is to focus less on particular instantiations of technology – especially from a processor perspective – and more on general parallelisation at all levels, as that will provide a safeguard for algorithmic optimisations. ‘The relationship [between the HPC and
numerical weather prediction communities] is very much a symbiotic one,’ added Nyberg. ‘Weather centres push us in HPC to improve our technologies, while our advancements enable them to test their modelling and simulation limits.’ At the end of 2013, Archer – the next
generation of the UK’s national HPC facility – will become available, affecting the types of modelling that researchers like Pier Luigi Vidale and Kirsty Pringle are able to conduct. According to Vidale, Archer (Advanced Research Computing High End Resource) promises to be at least twice as fast as the Cray XE6 system, HECToR, currently available to the UK academic community – and equivalent to Hermit, installed at HLRS Stuttgart in Germany, which he used during the Upscale project in
2012. Archer, also a Cray machine, will enable Vidale’s group to set up a coupled model that includes both the atmosphere and ocean at very high resolutions. ‘The opportunity we had in Germany meant that we were using observed sea surface temperatures, because running a project like that would have taken longer than the single-year allocation would have allowed,’ he explained. ‘A coupled model is very important for long-term climate prediction and projection. Our access to Archer will allow us to run two coupled simulations concurrently and continuously for at least two years.’ Pringle agrees that, in the future, more
modelling groups will take the approach of either using multiple models to perform experiments, or performing experiments with multiple different setups of a single model: ‘In this way we can be sure that the results are not strongly sensitive to uncertainties within the model, and hopefully this will improve the robustness of our research. There continues to be much public concern about the reliance on climate models for climate science; I hope that, as we really probe and test our models, we can better understand how confident (or not) we are in the results, and this will help us communicate this to the public.’
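The automated code generation Vidale describes can be illustrated with a toy sketch: the scientist supplies only the numerical scheme, and a generator produces the loop code, which could in principle be retargeted to different backends without touching the science. Everything here – the `make_stencil` name, the one-dimensional stencil, the pure-Python backend – is an invented stand-in for illustration, not any real climate-modelling tool.

```python
# Toy sketch of high-level-specification-to-generated-code, as a stand-in
# for the code-generation approach described in the article.

def make_stencil(weights):
    """Generate and compile a 1-D stencil function from a list of weights.

    The scientist supplies only the numerical scheme (the weights); the
    loop code itself is produced here, so a different backend could be
    targeted without changing the scientific specification."""
    halo = len(weights) // 2
    # Build the inner expression, e.g. "0.25 * field[i-1] + 0.5 * field[i+0] + ..."
    terms = " + ".join(
        f"{w} * field[i{offset:+d}]"
        for offset, w in zip(range(-halo, halo + 1), weights)
    )
    src = (
        "def stencil(field):\n"
        f"    return [{terms}\n"
        f"            for i in range({halo}, len(field) - {halo})]\n"
    )
    namespace = {}
    exec(src, namespace)  # compile the generated source in a fresh namespace
    return namespace["stencil"]

# Usage: a three-point smoother expressed purely as weights.
smooth = make_stencil([0.25, 0.5, 0.25])
print(smooth([0.0, 4.0, 0.0, 4.0, 0.0]))  # → [2.0, 2.0, 2.0]
```

The point of the pattern is the separation of concerns: swapping the string template for, say, a C or GPU backend would leave the weights – the science – untouched.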
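The distinction Vidale draws – prescribing observed sea surface temperatures versus running a fully coupled atmosphere–ocean model – can be caricatured in a few lines. The two one-variable ‘components’ below are invented stand-ins chosen only to show the structural difference: in the forced run the SST never reacts, while in the coupled run the two components exchange state every timestep.

```python
# Toy caricature of a forced (atmosphere-only) run versus a coupled run.
# All numbers and component models are invented single-variable stand-ins.

def atmosphere_step(air_t, sea_t):
    # Air temperature relaxes toward the sea surface temperature it sees.
    return air_t + 0.1 * (sea_t - air_t)

def ocean_step(sea_t, air_t):
    # In a coupled run the ocean responds to the atmosphere in turn.
    return sea_t + 0.01 * (air_t - sea_t)

def forced_run(air_t, observed_sst, steps):
    """Atmosphere-only: SST is prescribed from observations and never reacts."""
    for _ in range(steps):
        air_t = atmosphere_step(air_t, observed_sst)
    return air_t

def coupled_run(air_t, sea_t, steps):
    """Coupled: atmosphere and ocean exchange state every timestep."""
    for _ in range(steps):
        air_t, sea_t = atmosphere_step(air_t, sea_t), ocean_step(sea_t, air_t)
    return air_t, sea_t

print(forced_run(10.0, 15.0, 100))   # air converges to the fixed SST
print(coupled_run(10.0, 15.0, 100))  # both components drift to a shared state
```

The coupled version is the more expensive and more realistic one: the ocean is an extra model that must be stepped and exchanged with at every iteration, which is why the single-year Hermit allocation ruled it out.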
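The multi-setup strategy Pringle describes – rerunning the same experiment under perturbed model configurations and checking whether the conclusion survives – can be sketched minimally. The `toy_model` here is an invented one-line stand-in, not a real aerosol or climate model; only the pattern (vary an uncertain parameter, compare the ensemble spread to the mean) reflects the approach in the text.

```python
# Toy perturbed-parameter ensemble: one experiment, several model setups.
import statistics

def toy_model(forcing, feedback):
    """Equilibrium response of a trivial energy-balance stand-in."""
    return forcing / (1.0 - feedback)

# Perturb the uncertain feedback parameter across plausible values.
setups = [0.25, 0.30, 0.35, 0.40]
results = [toy_model(1.0, fb) for fb in setups]

mean = statistics.mean(results)
spread = max(results) - min(results)
print(f"ensemble mean {mean:.2f}, spread {spread:.2f}")
# prints "ensemble mean 1.49, spread 0.33"
# A spread that is small relative to the mean suggests the result is robust
# to this parameter's uncertainty; a large one flags a sensitivity to probe.
```

Real perturbed-physics ensembles vary many parameters (and whole model components) at once, but the robustness argument is the same: a conclusion that holds across setups is one the group can defend publicly.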
PL Vidale holds the ‘Willis Chair of Climate System Science and Climate Hazards’