HPC APPLICATIONS: MULTIPHYSICS


Multiphysics is key to developing MEMS devices (see page 40). This image from Comsol shows the eigenmodes of a piezoelectric single-crystal silicon plate resonator. (Model and image courtesy of A. Jaakkola, VTT Microtechnologies and Sensors/Mikroteknologiat ja anturit, Finland)


bring all the applications into an integrated user environment’, which in this case is the Ansys Workbench. That interface makes it easy for users to couple physics from different solvers and pass data among them. When asked if the package can handle arbitrary couplings, she responds: ‘It would be naïve to pretend we have the entire universe covered, but we have quite good integration of basic physics such as mechanics, fluid flow, electromagnetics/circuits and heat. It’s a never-ending process to build in more physics and couple them.’
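To make the data-passing idea concrete, here is a minimal sketch in Python of the partitioned coupling pattern such an environment orchestrates: two single-physics solves exchange field data in a loop until the coupled answer stops changing. The toy thermal and structural models, and every coefficient in them, are invented for illustration and bear no relation to the Ansys Workbench API.

```python
"""Minimal sketch of two-way partitioned multiphysics coupling (not the
Ansys API). Two toy single-physics 'solvers' -- a thermal model whose answer
depends on deformation, and a structural model driven by thermal expansion --
exchange field data in a fixed-point loop until self-consistent."""

def thermal_solve(displacement):
    # Toy model: temperature that falls as the part deforms towards a heat
    # sink (better thermal contact -> cooler part). Coefficients invented.
    return 400.0 - 2000.0 * displacement  # kelvin

def structural_solve(temperature):
    # Toy model: free thermal expansion of a bar, u = alpha * L * (T - Tref).
    alpha, length, t_ref = 1.2e-5, 0.5, 293.0
    return alpha * length * (temperature - t_ref)  # metres

def solve_coupled(tol=1e-12, max_iters=50):
    u = 0.0  # initial guess: undeformed geometry
    for iteration in range(1, max_iters + 1):
        t = thermal_solve(u)          # hand displacement to the thermal solver
        u_new = structural_solve(t)   # hand temperature back to the structure
        if abs(u_new - u) < tol:      # coupled fields are self-consistent
            return t, u_new, iteration
        u = u_new
    raise RuntimeError("coupling loop did not converge")

temperature, displacement, iters = solve_coupled()
print(f"converged in {iters} iterations: "
      f"T = {temperature:.2f} K, u = {displacement * 1e3:.4f} mm")
```

Because each physics keeps its own solver, this partitioned pattern is what lets a workbench-style environment mix and match solvers, at the cost of iterating until the exchanged fields agree.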


Parallel meshing, too


Ansys provides parallelised versions of both its direct and iterative solvers. For parallelisation it employs a ‘divide and conquer’ domain decomposition strategy with some of its solvers; depending on the solver and problem size, the right number of cores varies from eight or 16 up to hundreds for large-scale CFD calculations.
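A toy illustration of the decomposition idea, written serially in Python: a 1D heat-conduction grid is cut into subdomains that each relax only their own cells, reading one neighbouring ‘halo’ value across each internal boundary. In a production solver each subdomain would sit on its own core and the halo reads would become messages; the grid size and sweep count here are arbitrary.

```python
"""Serial sketch of 'divide and conquer' domain decomposition (illustrative).

A 1D heat-conduction grid is split into equal subdomains. In every Jacobi
sweep, each subdomain updates only its own cells; the reads that cross a
subdomain edge stand in for the halo exchange that, on a real cluster,
would be a message between cores."""

N_CELLS, N_SUBDOMAINS, N_SWEEPS = 66, 4, 2000
CHUNK = (N_CELLS - 2) // N_SUBDOMAINS  # interior cells per subdomain

# Fixed temperatures at both ends; the interior starts cold.
grid = [100.0] + [0.0] * (N_CELLS - 2) + [0.0]

for _ in range(N_SWEEPS):
    new = grid[:]
    for s in range(N_SUBDOMAINS):          # each s could be its own core
        lo = 1 + s * CHUNK
        for i in range(lo, lo + CHUNK):
            # grid[lo - 1] and grid[lo + CHUNK] may belong to a neighbouring
            # subdomain: these are the 'halo' values that would be exchanged.
            new[i] = 0.5 * (grid[i - 1] + grid[i + 1])
    grid = new

# Relaxes towards a linear profile from 100 down to 0.
print("temperature samples:", [round(grid[i], 1) for i in (1, 16, 33, 49, 64)])
```

The trade-off the article alludes to is visible here: more subdomains mean more halo boundaries relative to interior work, which is why the right core count grows with problem size.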




This approach is becoming more important, says Hutchings, to enable high-fidelity simulations – working with bigger problems in greater geometric detail, or looking at an overall assembly rather than just individual components and using modelling to understand how the parts of a system interact.

‘Now we have software that makes it possible to solve a problem that might involve three, four, five or even more phenomena to get a truly accurate model’

She also feels that parametric sweeps and optimisation represent the next big wave in modelling. Here you can either run multiple instances of the Ansys code on a single node or spread them across several, or even hundreds of, nodes.
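Because each design point in a sweep is an independent run, the pattern parallelises trivially. A hedged sketch: on a single node a Python process pool dispatches one toy ‘solver’ call per parameter set; across a cluster the same fan-out becomes one batch job per design point. The function run_simulation and its plate-deflection formula are invented stand-ins, not a real solver invocation.

```python
"""Sketch of a parametric sweep: independent runs dispatched in parallel.

`run_simulation` stands in for launching one solver instance on one design
point (here, an invented surrogate for plate deflection). On a single node a
process pool suffices; on a cluster the identical pattern becomes one batch
job per design point."""
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    thickness, load = params                       # metres, newtons
    deflection = load * 1.0e-10 / thickness ** 3   # invented surrogate model
    return thickness, load, deflection

if __name__ == "__main__":
    # Full-factorial design space: every thickness/load combination.
    sweep = list(product([0.010, 0.015, 0.020], [100.0, 200.0, 400.0]))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, sweep))
    worst = max(results, key=lambda r: r[2])
    print(f"{len(results)} runs; largest deflection {worst[2]:.3g} m "
          f"at t = {worst[0]} m, F = {worst[1]} N")
```

Nothing in the loop couples one run to another, which is what makes sweeps and optimisation such a natural fit for clusters.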


For HPC achievements, Ansys has demonstrated the ability to solve models of a billion cells and more (requiring 3.5 hours for 500 iterations on 768 cores). It has also demonstrated linear scaling up to 1,024 cores, with only a slow drop-off beyond that: 78 per cent of the ideal linear gain at 2,048 cores (see the worked arithmetic below).

Hutchings adds that, from the standpoint of a software developer, Ansys must look at every stage of the modelling process and adapt those that are potential bottlenecks for cluster operation. This includes meshing, analysis and even support for parallel file I/O; at present, Ansys offers some parallel meshing and does support parallel file I/O. The latter, she notes, is useful for transient analysis, where results are stored at many points in time, which involves considerable movement of data to storage.
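As a back-of-the-envelope check of the scaling figures quoted above: parallel efficiency is measured speedup divided by ideal speedup, so 78 per cent of the ideal linear gain at 2,048 cores corresponds to a speedup of roughly 0.78 × 2,048 ≈ 1,600 over one core. The per-iteration timings in this sketch are hypothetical, chosen only so the arithmetic reproduces the quoted percentages.

```python
"""Worked scaling arithmetic: parallel efficiency = speedup / ideal speedup.

The timings below are invented, picked so the numbers come out at the
figures quoted in the text (linear at 1,024 cores, 78 per cent of the
ideal gain at 2,048)."""

def efficiency(t_ref, cores_ref, t_n, cores_n):
    speedup = t_ref / t_n        # measured gain over the reference run
    ideal = cores_n / cores_ref  # gain if scaling were perfectly linear
    return speedup / ideal

SECONDS_PER_ITERATION = {1024: 42.0, 2048: 26.9}  # hypothetical timings
for cores, t in SECONDS_PER_ITERATION.items():
    e = efficiency(SECONDS_PER_ITERATION[1024], 1024, t, cores)
    print(f"{cores:5d} cores: parallel efficiency {e:.0%}")
```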


Last year Ansys had a major revamp of its product packaging strategy, one designed to make HPC more accessible to companies

