HPC APPLICATIONS: MULTIPHYSICS





specifically for clusters. With direct solvers, when forming and assembling the stiffness matrix that describes the problem, they apply domain decomposition; but when factorising the matrix they use other load-balancing techniques, such as farming out matrix supernodes. For iterative solvers, which involve many matrix-vector multiplies, they also support clusters and parallel processing. And while the direct sparse solver supports parallel file I/O, they try to run most problems in the core memory of large systems to avoid I/O whenever possible.

As for scaling, the Abaqus direct sparse solver scales well to 256 cores or more, but 'not many customers are running problems in that range of rarefied air'; mainstream users typically work with 4 to 32 cores. To handle cluster pricing, Abaqus has a token-based licence structure in which the number of tokens required to run a simulation depends on the number of cores. The pricing formula is nonlinear, says Weybrant, and it flattens out nicely as the number of cores increases.
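The shape of such a sublinear token schedule is easy to sketch. The constants below are purely hypothetical, not SIMULIA's actual pricing; the point is simply that a power law with an exponent well below one grows quickly at first and then flattens out:

#include <cmath>
#include <cstdio>

// Hypothetical sublinear token schedule: tokens = floor(base * cores^exponent).
// The constants are illustrative only and are not SIMULIA's actual pricing.
int tokensRequired(int cores, double base = 5.0, double exponent = 0.45)
{
    return static_cast<int>(std::floor(base * std::pow(cores, exponent)));
}

int main()
{
    // Doubling the core count adds far fewer tokens each time.
    for (int cores : {1, 4, 8, 16, 32, 64, 128, 256})
        std::printf("%3d cores -> %3d tokens\n", cores, tokensRequired(cores));
    return 0;
}

With an exponent of 0.45, going from 4 to 32 cores increases the token count by a factor of roughly 2.5 rather than 8, which is the sort of flattening Weybrant describes.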


A multiphysics framework

'Coupling the best codes of each discipline will enable more flexibility and simulation quality to the end user'

Echoing sentiments already mentioned, 'there's not a code that spans the entire multiphysics field, and each physical situation has some special requirements,' according to Glen Hansen of Idaho National Laboratory's Multiphysics Methods Group. His group helps develop simulations for nuclear energy research and faces some unusual modelling challenges. To provide the flexibility of working with a variety of simulation codes, the group has come up with a framework called Moose (Multiphysics Object Oriented Simulation Environment): a set of the common code typically required to create an engineering analysis application for fully implicit, fully coupled multiphysics studies. It is essentially a finite-element methods library linked with advanced nonlinear solvers and preconditioners, parallel adaptive mesh refinement and file I/O; it has almost everything except the physics code, which plugs in at the top level. End users write their physics equations and boundary conditions in weak form and plug them into Moose to create a customised modelling code.
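To give a concrete feel for what plugging in the physics 'in weak form' means, here is a minimal sketch. It is not Moose's actual API; the struct and function names are invented for illustration. The framework supplies the solution gradient, the test-function gradients and the quadrature weight at each quadrature point, and the user-written kernel only has to evaluate the weak-form integrand, here for simple steady diffusion:

#include <array>
#include <cstdio>
#include <vector>

// Minimal sketch (not Moose's actual API) of a user-supplied physics kernel.
// The framework hands the kernel everything it needs at one quadrature point;
// the kernel only evaluates the integrand of the weak-form residual.
struct QuadraturePointData
{
    std::array<double, 3> grad_u;                 // gradient of the current solution
    std::vector<std::array<double, 3>> grad_test; // gradients of the test functions
    double JxW;                                   // quadrature weight times element Jacobian
};

// Weak form of -div(k grad u) = f: the volumetric term k * grad(u) . grad(test_i).
double diffusionResidual(const QuadraturePointData & qp, std::size_t i, double k)
{
    double r = 0.0;
    for (int d = 0; d < 3; ++d)
        r += k * qp.grad_u[d] * qp.grad_test[i][d];
    return r * qp.JxW;
}

int main()
{
    // One quadrature point with two test functions, purely for illustration.
    QuadraturePointData qp;
    qp.grad_u = {1.0, 0.0, 0.0};
    qp.grad_test.push_back({0.5, 0.0, 0.0});
    qp.grad_test.push_back({-0.5, 0.0, 0.0});
    qp.JxW = 0.25;

    for (std::size_t i = 0; i < qp.grad_test.size(); ++i)
        std::printf("test function %zu: residual contribution %g\n",
                    i, diffusionResidual(qp, i, 1.0));
    return 0;
}

In the real framework, the equivalent routine is evaluated for every test function at every quadrature point of every element, and the summed contributions form the global residual vector that the nonlinear solver drives to zero.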


Moose started in 2008 because INL has several nuclear applications and needed a more efficient way to perform the modelling tasks. The framework uses a considerable amount of public-domain and Department of Energy-developed software: the libMesh finite-element library comes from the University of Texas at Austin, the CUBIT meshing tool from Sandia National Laboratories, the MPI-based PETSc parallel solver from the Mathematics and Computer Science Division of Argonne National Laboratory, and the Trilinos solver framework, again from Sandia. Visualisation options include ParaView, an open-source code developed to analyse extremely large datasets using distributed-memory computing resources, and EnSight from CEI. To support domain decomposition and parallel processing, the framework uses the Nemesis mesh decomposition and distribution library, also developed by Sandia. Moose applications typically employ tens to hundreds of processors, depending on requirements.

An example of what can be done with Moose is Bison, a nuclear-fuel performance analysis application that solves coupled equations for thermomechanics, pellet/cladding interaction, species diffusion, fission-product behaviour, and cladding stress and degradation. A fully coupled, fully implicit solution strategy allows for long transient burn-up scenarios. Bison has shown near-perfect strong scaling on 1,440 processors in a fuel-performance calculation coupled to a mesoscale material-modelling code that incorporates the effect of irradiation on thermal conductivity.
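The decomposition step itself is conceptually straightforward, even though Nemesis and libMesh do the hard work for real unstructured 3-D meshes. The toy sketch below is not taken from Moose or Bison; it simply partitions a 1-D 'mesh' of cells into contiguous subdomains, one per MPI rank, and omits the ghost-cell exchange along subdomain boundaries that real codes also need:

#include <mpi.h>
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy domain decomposition: split a 1-D "mesh" of n_global cells into
// contiguous subdomains, one per MPI rank. Real frameworks do this for
// unstructured 3-D meshes and also manage the halo exchange between ranks.
int main(int argc, char ** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_global = 1000;                        // total number of cells
    const int base     = n_global / size;
    const int rem      = n_global % size;
    const int n_local  = base + (rank < rem ? 1 : 0); // this rank's share
    const int offset   = rank * base + std::min(rank, rem);

    std::vector<double> subdomain(n_local, 0.0);      // storage for this rank only
    std::printf("rank %d of %d owns cells [%d, %d)\n",
                rank, size, offset, offset + n_local);

    MPI_Finalize();
    return 0;
}

Compiled with an MPI C++ compiler and launched under mpirun, each rank reports its own slice of the problem; a scaling study such as Bison's 1,440-processor run is essentially this idea applied to a far more complicated mesh and physics.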


Domain decomposition – here illustrated with Bison studying the response of a dished fuel pellet to forces on the outer rim of the dish – breaks a large domain into many subdomains and sends each one to a different compute node to solve

