The study that MultiMechanics was involved in was preliminary and, as such, is ongoing. Its team is responsible for looking at various ways to improve the efficiency and resolution of the simulation, 'especially by increasing the resolution of the ice microstructure,' Castro commented. 'Future studies will apply the same multi-scale approach not just to the ice, but also to the structure, to explore how advanced materials, such as composites, respond to iceberg interaction, and the effect of microstructural design variables on structural performance,' Castro concluded.


One aspect that particularly pleased MultiMechanics was the performance gain derived from employing multi-scale technology. Castro said: 'There is a significant performance gain from modelling these microstructural details through a multi-scale approach, instead of a single-scale one.'
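The multi-scale idea can be illustrated with a toy calculation: homogenise the porous ice microstructure into an effective stiffness, then feed that stiffness into a coarse structural model. The sketch below is a minimal illustration only, assuming a crude rule-of-mixtures homogenisation and invented material numbers; MultiMech's actual formulation couples full finite element models at both scales rather than averaging the microstructure away.

```python
# Toy two-scale calculation: homogenise a porous microstructure, then use
# the effective stiffness in a coarse-scale model. Illustrative only; the
# figures below (solid-ice modulus, loads) are assumptions, not study data.

E_ICE = 9.0e9      # Pa, assumed Young's modulus of solid ice
POROSITY = 0.20    # 20 per cent porosity, as in the indentation figure

def effective_modulus(e_solid: float, porosity: float) -> float:
    """Crude rule-of-mixtures estimate: pores carry no load.
    A real micro-scale FE model would resolve pore geometry and
    crack growth instead of averaging them away."""
    return e_solid * (1.0 - porosity)

def bar_tip_displacement(e_eff: float, area: float, length: float,
                         force: float) -> float:
    """Macro scale: tip displacement of a uniform bar, u = FL / (EA),
    standing in for the global structural model."""
    return force * length / (e_eff * area)

if __name__ == "__main__":
    e_eff = effective_modulus(E_ICE, POROSITY)
    u = bar_tip_displacement(e_eff, area=0.01, length=2.0, force=1.0e5)
    print(f"Effective modulus: {e_eff / 1e9:.2f} GPa")   # 7.20 GPa
    print(f"Macro tip displacement: {u * 1e3:.2f} mm")   # 2.78 mm
```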


The study showed that single-scale modelling of large structures becomes less efficient as the complexity of the model increases. Castro said: 'For example, a model with 638,000 degrees of freedom, even without including crack initiation or propagation, still required a computational effort of approximately 14 hours for 200 solution steps on a 16-core, 2.6GHz workstation.'


The same experiment was modelled with a multi-scale approach, in which global and local finite element models were coupled and analysed simultaneously, while crack initiation and propagation were modelled at the micro scale. Castro said: 'This multi-scale simulation had better results, and a total of two million degrees of freedom, yet it took only 18 minutes to run using MultiMech on the same machine. Since the modelling time was reduced significantly, future studies will be able to look into even more complex models and structures.'
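The quoted figures imply a striking speed-up, which a few lines of arithmetic make explicit (the timings and degrees of freedom below are taken directly from Castro's quotes):

```python
# Compare the two quoted runs: single-scale (638,000 DOF, ~14 hours)
# versus multi-scale (2,000,000 DOF, 18 minutes) on the same workstation.

single_dof, single_minutes = 638_000, 14 * 60      # 840 minutes
multi_dof, multi_minutes = 2_000_000, 18

speedup = single_minutes / multi_minutes            # ~46.7x faster
dof_ratio = multi_dof / single_dof                  # ~3.1x more detail

print(f"Wall-clock speed-up: {speedup:.1f}x")
print(f"DOF ratio (multi/single): {dof_ratio:.1f}x")
# The multi-scale run resolves roughly 3x more degrees of freedom,
# including crack initiation and propagation, in about 1/47th of the time.
```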


Castro went on to explain that the software is parallelised effectively to scale with the number of CPU cores available, so an HPC infrastructure 'would provide proportionally faster responses, especially for larger models. However, HPC is not required, as demonstrated by the case study.'
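Castro's point about proportional scaling can be sanity-checked with Amdahl's law. The serial fraction in the sketch below is a hypothetical value chosen for illustration, not a measured property of MultiMech:

```python
# Estimate parallel speed-up versus core count using Amdahl's law.
# The 5% serial fraction is an illustrative assumption.

def amdahl_speedup(cores: int, serial_fraction: float = 0.05) -> float:
    """Ideal speed-up limited by the code's non-parallelisable portion."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (16, 64, 256):
    print(f"{cores:4d} cores -> {amdahl_speedup(cores):5.1f}x speed-up")
# 16 cores -> ~9.1x; 64 -> ~15.4x; 256 -> ~18.6x with a 5% serial part:
# near-proportional gains at modest core counts, diminishing beyond.
```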


Much of the simulation and modelling at NOV Elmar takes place on HPC resources, as the complexity of simulation is often beyond the scope of traditional workstations. Herdman explained that all resources are in-house at the NOV datacentre in Goodyear, Arizona. Servers consist of HP SL230 and SL250 nodes that range from 64GB to 128GB of memory and total over 600 cores.


[Figure: ice-structure interaction for an indentation experiment with 20 per cent porosity]


The current scheduler is Moab/Torque with a custom web submission portal for HyperWorks and Abaqus; Ansys jobs are submitted through RSM directly to the scheduler. There is also a Virtual Desktop Infrastructure (VDI) for engineering applications, powered by Citrix XenDesktop and Nvidia GPUs. Shared storage is provided to compute and VDI nodes via a NetApp HA filer pair.
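For readers unfamiliar with Moab/Torque, a solver job on such a cluster is typically described by a short PBS script and handed to the scheduler with qsub. The sketch below is illustrative only: the resource requests and the Abaqus command line are hypothetical examples, not NOV's actual portal configuration, which wraps this step behind a web front end.

```python
# Sketch of submitting a solver job to a Torque/Moab-scheduled cluster.
# Resource requests and the solver invocation are hypothetical examples.
import subprocess
import tempfile

PBS_SCRIPT = """#!/bin/bash
# Hypothetical resource requests: one node, 16 cores, 64 GB, 4 hours.
#PBS -N fea_job
#PBS -l nodes=1:ppn=16
#PBS -l mem=64gb
#PBS -l walltime=04:00:00
cd $PBS_O_WORKDIR
# Example solver invocation (Abaqus syntax; HyperWorks/Ansys differ).
abaqus job=model cpus=16 interactive
"""

def submit(script_text: str) -> str:
    """Write the job script to disk and hand it to qsub; returns the job ID."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(["qsub", path], capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted:", submit(PBS_SCRIPT))
```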


Herdman remarked that one of the benefits of using Altair software is the performance of its meshing software, HyperMesh, especially for hexahedral elements. Herdman said: 'HyperMesh is second to none regarding the development of efficient meshes.'


Herdman continued: 'This has three main benefits. Firstly, by investing at the pre-processing stage we have reduced lead times so that approximately 20 per cent of our analysis time is due to solving. Secondly, hex meshing removes the requirement for seeking mesh-independent solutions, which would be required for tetrahedral element meshes. Thirdly, our in-house modelling experience has also shown that hexahedral elements produce much more reliable and accurate results for pressure vessel design than even second-order tetrahedral elements of a similar size.'
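The 'mesh independence' Herdman refers to is the standard convergence check for tetrahedral meshes: refine until the quantity of interest stops changing. A minimal sketch of that loop follows, with a placeholder solve_with_mesh_size function standing in for a real finite element solve:

```python
# Sketch of a mesh-independence (convergence) study, the step that hex
# meshing lets analysts skip. solve_with_mesh_size is a hypothetical
# stand-in for an FE solve returning, say, peak von Mises stress.

def solve_with_mesh_size(h: float) -> float:
    """Placeholder: pretend discretisation error shrinks as h**2,
    converging towards a 'true' stress of 250 MPa."""
    return 250.0 + 40.0 * h**2

def mesh_independent_result(h0: float = 1.0, tol: float = 0.01) -> float:
    """Halve the element size until the result changes by < tol (1%)."""
    h, previous = h0, solve_with_mesh_size(h0)
    while True:
        h /= 2.0
        current = solve_with_mesh_size(h)
        if abs(current - previous) / abs(previous) < tol:
            return current  # converged: further refinement is wasted cost
        previous = current

print(f"Converged stress: {mesh_independent_result():.1f} MPa")
```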




Herdman also stressed that it seems unlikely that NOV Elmar will need to use a different software package in the foreseeable future. Herdman said: 'The use of Altair's other software, such as RADIOSS and HyperView, seems a very natural solution, as we would now only ever use HyperMesh to pre-process.'


Herdman also explained that NOV Elmar prefers Altair software, such as HyperMesh, over a more specialist platform tailored to oil and gas because of the support provided by Altair. Herdman said: 'To some extent, the jointly developed plug-in has given NOV Elmar a level of industry-specific customisation, but there are no business needs to use anything specialist.'


due to Altair’s willingness to provide free training in HyperMesh and extended use of trial soſtware that NOV in the UK began to use the HyperWorks. Tat was 18 months ago and now we have two UK sites using Altair products, including NOV Elmar. We have received regular training and Altair personnel are always on hand to support as required.’ Castro said: ‘MultiMechanics works closely


Castro said: 'MultiMechanics works closely with users to add requested features and optimise code performance. The software has quickly evolved and will continue to do so in the future. The fast computations presented in the study weren't possible just a few years ago, and more breakthroughs in performance will continue to be made in the years to come.'


Although simulation is already complex, there is still room for improvement. From specialised plug-ins for niche applications, to parallelising code so that it delivers results faster, to increasing the resolution of FEA to include microstructural effects, there is still much to be done to make the oil and gas industry as safe and efficient as possible.



