HPC: oil and gas

'A further challenge is that MPI transactions need to be very quick and very predictable, as well as feature low jitter, low latency and high bandwidth.

'The installation of the Gnodal fabric meant it could feed and draw from our data storage much quicker than ever before, so we've ended up upgrading our storage significantly.

'The only other HPC user that comes close to oil and gas in terms of compute intensity and data size is Nasa. We use so much memory and so much disk space that the cloud is simply not an option.'

Hardware specialist Supermicro, which has several customers in the oil and gas space, builds its HPC solutions from the ground up, from the motherboard to the server configuration. Dr Tau Leng, VP and general manager of HPC at Supermicro, says his company is well set up to deal with the demands of the oil and gas industry. 'We have a very broad range of products,' he says. 'And we specialise in providing application-optimised solutions, whether that be for seismic analysis or reservoir simulation. Each of these requires a very different solution.'
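The distinction Leng draws between seismic analysis and reservoir simulation comes down to communication patterns, and a toy sketch makes it concrete. Everything below (the synthetic data, the function names, the two-subdomain split) is illustrative, not taken from any vendor's code:

```python
# Seismic processing: each shot gather can be handled independently,
# so a scheduler can farm gathers out to thousands of nodes with no
# node-to-node communication -- "embarrassingly parallel".

def stack_gather(traces):
    """Stack the traces of one shot gather (a stand-in seismic step)."""
    n = len(traces)
    return [sum(col) / n for col in zip(*traces)]

# Five independent gathers of three 4-sample traces each (synthetic data).
gathers = [[[float(g + t + s) for s in range(4)] for t in range(3)]
           for g in range(5)]
stacked = [stack_gather(g) for g in gathers]  # a trivially parallel map

# Reservoir simulation, by contrast, advances one coupled grid: every
# timestep, each subdomain needs its neighbour's boundary ("halo") cell,
# which on a real cluster is an interconnect message per step.

def diffuse_step(left, right, alpha=0.1):
    """One explicit diffusion step over two subdomains with halo exchange."""
    halo_l, halo_r = right[0], left[-1]   # exchange boundary cells
    ext_l = left + [halo_l]               # append ghost cell
    ext_r = [halo_r] + right              # prepend ghost cell
    new_l = [ext_l[i] + alpha * (ext_l[i - 1] - 2 * ext_l[i] + ext_l[i + 1])
             for i in range(1, len(ext_l) - 1)]
    new_r = [ext_r[i] + alpha * (ext_r[i - 1] - 2 * ext_r[i] + ext_r[i + 1])
             for i in range(1, len(ext_r) - 1)]
    # Outer boundary values are held fixed in this sketch.
    return [left[0]] + new_l, new_r + [right[-1]]
```

The map in the first half scales by simply adding nodes; the halo exchange in the second half puts a network round-trip on the critical path of every timestep, which is why the latency and jitter of the fabric matter so much for reservoir codes.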


A Total solution
Marc Simon, technical director, SGI


Total has been a customer of SGI in France for more than 15 years. Marc Simon, technical director at SGI, says: ‘We supply them both the HPC and the storage – the two are very tightly connected. They need to manage both the processes and the big data that the processes create.


'Our approach with them has always been to get a full understanding of the customer's workflow, their approach to R&D, and how they use HPC. We have a team of people based at Total's operation in the south of France, who not only address HPC projects, but also other aspects of Total's requirements, such as data management or visualisation.

'Last year, we won the latest contract on offer from Total, which will enable them to take the next step in terms of HPC usage, using new algorithms, and also to allow it to cope with bigger data. We supplied them with an integrated cluster based on our Ice X architecture. It has 100,000 cores, more than 5,400 terabytes of memory, and 8 petabytes of disk space. Like any customer, they wanted the solution to demonstrate a high level of performance, but it also needed to be energy-efficient. We achieved this through the improved density of our products, and also through a cooling system that was able to accept warm-water cooling.

'For the most part, Total uses the system for seismic processing. The new algorithms, together with the power of HPC, enable them to see what is under the earth much more clearly, and therefore determine whether oil or gas is present.

'In the future, Total will also use the set-up for reservoir simulation, which is a technique that helps oil and gas companies determine the most efficient method of extraction.

'SGI is a partner in terms of the support we can offer Total, ensuring that what they want to do is able to run efficiently and reliably on our system. Around half of our dedicated team works on system administration, while the other half concentrates on R&D support and development. This means helping the process move from a pure engineer's algorithm to a smooth application of computer science on an HPC system.

'By working with Total at the R&D stage, we are aware of the algorithms they are working on, and can evaluate early on how to run these algorithms on emerging systems and processors. In turn, this helps Total select an appropriate configuration the next time they upgrade their system. We have teams looking at Nvidia GPUs and Intel's latest chips all the time to assess their suitability to these new algorithms as they emerge.


'Seismic analysis is all about processing power, with very little communication required between nodes, so it scales very nicely to thousands of nodes. Reservoir simulation, by contrast, requires very high-speed interconnects, so we develop systems that have InfiniBand built in.'

Reservoir simulation is the process by which existing wells whose oil production is slowing are assessed for ongoing productivity. The process enables the production company to take data from points underground, reconstruct the underground in a 3D picture to help establish where there is still oil, and run simulations that will inform future extraction plans.

One of Supermicro's recent projects has been with CGGVeritas, a geophysical company delivering technologies and services to the global oil and gas industry. Supermicro worked with Green Revolution Cooling to deliver an HPC solution at CGGVeritas' centre in Houston.

Supermicro's 1U dual-GPU SuperServer is being used in conjunction with GRC's CarnotJet fluid-submersion cooling system. Together, they create a high-density, high-capacity computing solution that has reduced data centre power needs by 40 per cent.

[Figure: Subsurface topology with strata and faults]

'We found that CGGVeritas is a company that is always open to new technology,' says Leng. 'We used a GPU-based solution for the computation. This is a high-density solution, and also draws a lot of power, which is why the submerged solution was particularly suitable. Some cost-benefit analysis has been done on this, which suggests that if the power is above 25kW, then a submerged solution is often better.'
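The 25kW figure Leng cites can be sanity-checked with a back-of-envelope model. Every number below (the PUE values, the electricity price, the annualised immersion premium) is an assumption chosen for illustration, not data from GRC, Supermicro, or CGGVeritas; with these particular guesses the break-even happens to land near 25kW:

```python
# Back-of-envelope comparison of air cooling vs fluid submersion.
# All figures are illustrative assumptions, not vendor data.

def annual_overhead_cost(it_load_kw, pue, price_per_kwh=0.10):
    """Yearly electricity cost of the non-IT overhead implied by a PUE.

    A PUE of 1.6 means 0.6 kW of cooling/distribution overhead per kW
    of IT load; that overhead runs 24 x 365 hours a year."""
    return it_load_kw * (pue - 1.0) * 24 * 365 * price_per_kwh

AIR_PUE = 1.6        # assumed conventional air-cooled facility
IMMERSION_PUE = 1.1  # assumed fluid-submersion system

def immersion_pays_off(it_load_kw, annualised_premium=11000.0):
    """True if the assumed extra yearly cost of immersion equipment is
    covered by the cooling-overhead saving at this IT load."""
    saving = (annual_overhead_cost(it_load_kw, AIR_PUE)
              - annual_overhead_cost(it_load_kw, IMMERSION_PUE))
    return saving > annualised_premium
```

Under these guesses the saving is about $438 per kW per year, so the crossover sits at roughly $11,000 / $438 ≈ 25kW; different prices or PUE assumptions would move it, which is why such analyses are done per site.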


Workload management

PBS Works is an HPC workload-management suite that is used extensively by the oil and gas industry. It includes a number of tools, among them PBS Professional, which optimises HPC resource usage, and PBS Analytics, a web-based portal that visualises historical usage data by jobs, applications, users, projects, and other metrics, so the user can capture trends for capacity planning and what-if scenarios.

'Oil and gas users generally have large clusters,' says Rick Watkins, account manager at Altair, which produces PBS Works. 'And clusters need management! Both seismic processing and reservoir simulation are compute-intensive, so oil and gas has always been an early adopter of HPC technology. It's also about ensuring that any task is using the right resources. We have alternative resources now, such as GPUs and other accelerators, which oil and gas codes are starting to utilise. So, it's just simple business sense to have cluster management software in place of a team of people allocating jobs to resources.

'The clusters used in the oil and gas industry are very large, and that size dictates that functions such as health monitoring are essential.

'HPC technology has improved research in oil and gas. Jobs that used to take two or three weeks to complete can now be done in two or three days. Processors are better, codes are better optimised, and interconnects are faster. All of these component parts of an HPC setup have had to improve to keep up with the size of cluster demanded by oil and gas.

'We have integrated PBS Works with a
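To give a flavour of what PBS Professional actually manages: work reaches the scheduler as a shell script carrying `#PBS` directives. The directive syntax below is standard PBS Professional, but the job name, queue, node counts, and the binary being launched are hypothetical examples, not anything from Altair or an oil and gas site:

```shell
#!/bin/bash
#PBS -N seismic_rtm                    # job name (hypothetical)
#PBS -l select=4:ncpus=16:mpiprocs=16  # 4 nodes x 16 cores (illustrative)
#PBS -l walltime=12:00:00              # hard runtime limit for the scheduler
#PBS -q workq                          # queue names are site-specific
#PBS -j oe                             # merge stdout and stderr

cd "$PBS_O_WORKDIR"                    # start where the job was submitted
mpirun ./rtm_kernel input.segy         # hypothetical seismic migration run
```

The script is submitted with `qsub`, and it is directives like the `select` line that let the scheduler match jobs to the right resources, GPUs included, rather than leaving that allocation to people.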


@scwmagazine l www.scientific-computing.com



