aerospace




each individual calculation is computationally expensive, optimisation is essential in ensuring that models are accurate. ‘Thanks to optimisation the benchmark for modelling has been improved, as there are constants that need to be corrected according to experimental variants,’ said Carlo Poloni, CEO of Esteco.

Poloni added that uncertainty quantification tools are under continuous development, which means he expects that, in the future, solutions will not only give a number, but will include a confidence interval. This will give the designer confidence that what they are computing is actually true. ‘There are techniques that work on uncertainty quantification and there are many European research projects that are focused on this environment,’ said Poloni. ‘One project, UMRIDA, will be launched in September 2013 and it will move this approach from a purely academic level to a technical-readiness level. This will increase the reliability of the computations, further reducing the number of tests needed in the design process.’ He went on to say that finding the right compromise between modelling complexity and computing time, according to what is available in the simulation environment, is critical.
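Poloni’s point about reporting a confidence interval alongside a single number can be illustrated with a minimal Monte Carlo sketch in Python. The surrogate model, input distributions and sample count below are invented for the example and are not any vendor’s tool; the idea is simply that sampling uncertain inputs produces a distribution of outputs from which an interval can be quoted.

import random
import statistics

def lift_coefficient(angle_of_attack, mach):
    # Hypothetical cheap surrogate standing in for an expensive CFD run.
    return 0.11 * angle_of_attack + 0.05 * mach

def monte_carlo_uq(n_samples=2000):
    # Sample the uncertain inputs (illustrative means and spreads) and
    # collect the corresponding outputs.
    outputs = []
    for _ in range(n_samples):
        aoa = random.gauss(4.0, 0.2)      # angle of attack, degrees
        mach = random.gauss(0.8, 0.01)    # Mach number
        outputs.append(lift_coefficient(aoa, mach))
    cuts = statistics.quantiles(outputs, n=40)
    # Report a single number plus an interval containing ~95% of outcomes.
    return statistics.mean(outputs), (cuts[0], cuts[-1])

mean, interval = monte_carlo_uq()
print(f"CL = {mean:.4f}, 95% interval {interval[0]:.4f} to {interval[1]:.4f}")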




Expanding the options One trend to emerge from the increase in technical capabilities contained within modern modelling and simulation solutions is that engineers are furthering their exploration of the design envelope and considering a wider range of scenarios. Mark Walker, principal engineer at MathWorks, commented that the less effort it takes to create a representative simulation, and the higher performance with which it can run, the broader the range of scenarios can be considered. ‘Engineers want to explore as many scenarios as they can, but there is a finite amount of development time in which to do this. Within that time they must consider the minimum amount as required by the legislation and then augment that analysis for their own confidence in the design,’ said Walker. Te extent of analysis that simulation enables
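As a loose illustration of the workflow Walker describes (and not of any MathWorks product), the Python sketch below runs a mandatory set of certification cases first and then spends whatever simulation budget remains on extra exploratory cases. The scenario values, the toy flight-load model and the budget are all assumptions made for the example.

import itertools

def run_simulation(scenario):
    # Stand-in for one simulation run of a hypothetical toy model.
    gust, payload = scenario
    load_factor = 1.0 + 0.08 * gust + 0.002 * payload
    return {"scenario": scenario, "load_factor": load_factor, "ok": load_factor < 2.5}

# Cases required by legislation/certification (illustrative values only).
mandatory = [(5, 500), (10, 500), (15, 800)]

# Exploratory cases the engineer adds for extra confidence in the design.
exploratory = list(itertools.product(range(0, 21, 5), (300, 500, 800, 1000)))

budget = 12                      # finite number of runs available
results = [run_simulation(s) for s in mandatory]
remaining = [s for s in exploratory if s not in mandatory][: budget - len(mandatory)]
results += [run_simulation(s) for s in remaining]

print(sum(1 for r in results if not r["ok"]), "scenarios exceeded the load limit")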


Taking turbulence to HPC


What role does high-performance computing have within turbulence modelling? Beth Harlen reports on one XSEDE project


‘We offer extended collaborative support services under the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) project, a federated consortium of service providers with many common infrastructure elements that scientists can use to share computing resources, data and expertise,’ explained Vince Betro, a computational scientist at the University of Tennessee’s National Institute for Computational Sciences (NICS) and a member of XSEDE’s Extended Collaborative Support Service (ECSS). ‘Working with principal investigators, we demonstrate the impact high-performance computing can have on their domain science, and help them to get their code running faster and more efficiently.’

One project was headed by Antonino Ferrante of the William E. Boeing Department of Aeronautics and Astronautics at the University of Washington, Seattle, and focused on the study of droplet-laden flows.




Ferrante and his research team accessed HPC resources and expert guidance at NICS and the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign, via the XSEDE project. Although the majority of the code work was done at the University of Illinois, the majority of the runs were done on Kraken, the large capability machine operated by the University of Tennessee. Housed in the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, Kraken is a Cray XT5 that is currently 30th on the Top500 list.

Ferrante’s simulations were particle-based, with each individual particle needing to be mapped and tracked in its interactions. To get even close to a realistic simulation, more than one billion particles had to be modelled, and the only way to do that was with the capabilities offered by Kraken. Betro’s role was not only to offer support in understanding the computational fluid dynamics and evaluating the results, but also in dealing with the problem of I/O at this massive scale. ‘The machine can handle that scale of calculation, the file system can handle the massive amount of files, but getting the files from the file server to the machine, that’s where we were getting huge bottlenecks,’ said Betro. ‘But we were able to work with Ferrante’s I/O pattern so that it was conducive to running on such a massive machine.’

Betro believes the use of HPC resources has become a necessity for the type of modelling and simulation undertaken by Ferrante and his team, not just because the problems have become larger, but because standard supercomputers don’t have the processing power or memory to handle them.
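The article does not say exactly how Ferrante’s I/O pattern was restructured, but one common way to relieve the kind of file-system bottleneck Betro describes is to replace many small writes with a few large, buffered requests per process. The Python sketch below shows that general idea only; the record layout, file name and buffer size are assumptions for the illustration, not the project’s actual scheme.

import struct

def write_particles_buffered(path, particles, buffer_records=100_000):
    # Write (x, y, z) particle records in large chunks instead of one write
    # each, so the file system sees a small number of big, contiguous requests.
    record = struct.Struct("<3d")          # assumed record layout: three doubles
    buffer = bytearray()
    with open(path, "wb") as f:
        for p in particles:
            buffer += record.pack(*p)
            if len(buffer) >= buffer_records * record.size:
                f.write(buffer)            # one large request per ~100k particles
                buffer.clear()
        if buffer:
            f.write(buffer)

# Usage: a generator avoids holding every particle in memory at once.
write_particles_buffered("particles.bin",
                         ((i * 0.1, i * 0.2, i * 0.3) for i in range(1_000_000)))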


Focusing on the visualisation aspects of the project was David Bock, a visualisation programmer at NCSA. Bock specialised in representing graphically the gigabytes of data being generated by Ferrante and his team. Using custom software written by Bock, rays were shot through the volume and data values were mapped to colour. Depending on the data values, different structures were visualised, which enabled the team to observe particle structures moving along the boundary layer. When the data was first viewed, the range of values was so narrow that nothing could be seen, and further work was done to visualise these elements.
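Bock’s renderer is custom software, so the sketch below is only a generic illustration of the technique described: marching a ray through a volume of values for every pixel and mapping what it accumulates to a colour. The grid size, the Gaussian test volume and the use of ASCII shades in place of real colours are all assumptions made to keep the example self-contained.

import numpy as np

def ray_cast(volume, levels=(" ", ".", ":", "*", "#")):
    # Shoot one ray per (x, y) pixel straight through the volume along z,
    # take the maximum value seen along each ray, and map it to a shade.
    intensity = volume.max(axis=2)                       # maximum-intensity projection
    scaled = (intensity - intensity.min()) / (np.ptp(intensity) + 1e-12)
    indices = (scaled * (len(levels) - 1)).round().astype(int)
    return ["".join(levels[i] for i in row) for row in indices]

# Hypothetical 32x32x32 volume with a blob of high values in the middle.
x, y, z = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
volume = np.exp(-8 * (x**2 + y**2 + z**2))
for line in ray_cast(volume):
    print(line)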




Further information
Ansys www.ansys.com
CD-adapco www.cd-adapco.com
Esteco www.esteco.com
MathWorks www.mathworks.com
MSC Software www.mscsoftware.com
XSEDE www.xsede.org


The extent of analysis that simulation enables is increasing. According to Walker, analysis models are being used through many more of the implementation stages, and engineers are running closed-loop simulations to assess how aircraft will behave in general turbulent conditions. Rather than focusing the analysis activity at the beginning of the process, engineers are recognising the value in using these techniques to validate further stages. Anyone running an analysis can use the block-diagram language contained in MathWorks’ Simulink, for example, to build reference models and run evaluations as needed.

Walker said there is a trade-off between the level of fidelity and the number of scenarios that can be run, given that high-fidelity models run very slowly. Because this type of assessment has historically been driven through desktop analysis and simulation, the fidelity and performance of simulations are tied to the capabilities of deskside supercomputers. As the processing power of these machines improves, so too will the possibilities afforded by the software tools.
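The closed-loop simulations Walker refers to are built in Simulink’s block-diagram language; the plain-Python sketch below is only meant to show the shape of such a loop at the lowest fidelity: a toy pitch model disturbed by random ‘turbulence’ and a simple feedback controller, stepped through time. The dynamics, gains and disturbance level are invented for the illustration.

import random

def simulate_closed_loop(steps=2000, dt=0.01, gust_std=0.02):
    # Toy pitch model with a proportional-integral feedback controller.
    pitch, pitch_rate, integral = 0.0, 0.0, 0.0
    target = 0.1                               # desired pitch angle (radians)
    kp, ki = 4.0, 1.5                          # controller gains (illustrative)
    max_error = 0.0
    for _ in range(steps):
        error = target - pitch
        integral += error * dt
        elevator = kp * error + ki * integral  # control command fed back to the plant
        gust = random.gauss(0.0, gust_std)     # crude turbulence disturbance
        pitch_rate += dt * (-1.2 * pitch_rate - 2.0 * pitch + 3.0 * elevator + gust)
        pitch += dt * pitch_rate
        max_error = max(max_error, abs(error))
    return max_error

print("worst-case pitch error:", round(simulate_closed_loop(), 4))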

