expanding their workflows into EnginFrame. Then, when they need more support or add Windows-based applications, they can have drop-in replacements of the open source components and get commercial support’. As an example of the support provided by a company like Nice, he pointed to the close links that Nice has with the application software vendors so that: ‘In our lab, we have dozens of ISV applications that we test, so we can ensure that when a new version hits the road it is performing optimally’.


Compute and visualise in the same machine

For Linux applications, Nice can also provide 3D graphics using Nvidia’s Tesla card rather than other graphics cards. Rodolico pointed out that this ‘is particularly intriguing for the HPC community: for on the same card you can run both graphics and HPC.’ Remote visualisation brings with it the ability to compute and visualise in the same place, he continued: ‘With the ability to put everything in the data centre, you have the ability to give every user the right-size machine, dynamically for their purposes. From project to project you can relocate the user on to a different virtual machine or on to a different physical machine depending on the actual purpose, and you use a standard scheduler to do that. The user does not see this’. To Nvidia’s Steve Parker, this is the most


exciting aspect of recent developments: ‘One of the things that I think is amazing is that it’s the first time we have a single device that is capable of both computation and visualisation. It used to be there was a computational system and then a visualisation system. Separate resources: buy a big SGI and a cluster’. But GPU computing has ended that, he said. He cited a demonstration that Nvidia ran during SC14 with an astrophysics simulation code called Bonsai. Although the results were being displayed in New Orleans, the code was being run at the Swiss Supercomputing Centre, CSCS: ‘on hundreds and hundreds to about a thousand nodes. It would use the GPUs for computation, and then use the graphics horsepower in those same devices to rasterise the star field of the galaxy simulation’. It was, he said, computing and visualising at the same time: ‘The data was never stored to a disk; although, if you were doing science with it, you would probably want to store some of it. The data was streamed from the computation, to the visualisation system, to the video codecs, to a display halfway across the planet.’ This sort of capability is unprecedented, he
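The in-situ pattern Parker describes can be sketched in a few lines: each simulation step is rendered immediately, in the same process and memory, and the frame is streamed onward without ever touching a disk. The sketch below is purely illustrative; the toy physics, rasteriser, and stream sink are stand-ins and bear no relation to Bonsai’s actual code.

```python
# Minimal sketch of in-situ visualisation: compute a step, rasterise it
# at once, and stream the frame onward with no intermediate disk stop.
# All functions here are toy stand-ins, not Bonsai's real pipeline.
import math

def simulate_step(t, n=500):
    """Toy 'galaxy': n particles on slowly precessing circular orbits."""
    return [(math.cos(0.1 * i + t), math.sin(0.13 * i + t)) for i in range(n)]

def rasterise(particles, w=32, h=32):
    """Bin particle positions in [-1, 1]^2 into a w x h brightness grid."""
    img = [[0] * w for _ in range(h)]
    for x, y in particles:
        px = min(w - 1, int((x + 1) / 2 * w))
        py = min(h - 1, int((y + 1) / 2 * h))
        img[py][px] += 1
    return img

def stream(frame, sink):
    """Stand-in for the codec/network hop: hand the frame to a consumer."""
    sink.append(frame)

display = []                          # stands in for the remote display
for step in range(10):
    particles = simulate_step(step * 0.05)
    frame = rasterise(particles)      # same 'device', same memory
    stream(frame, display)            # data flows on, never written to disk
```

The point of the sketch is structural: computation and rendering share one resource, and the only artefact that leaves the node is the finished frame, which is what makes streaming a display halfway across the planet feasible.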


believes, and the advantage of a unified resource is that it can be put together in many interesting ways. It gives a systems administrator, or even the end-user scientist or engineer, a lot more flexibility, for they can allocate a set of nodes as


24 SCIENTIFIC COMPUTING WORLD


ParaView and VTK, with Intel’s OSPRay library, integrated by TACC, display water flow in a South Florida ground core sample using LB3D prime


the visualisation resource rather than it having to be distinct. ‘As far as the future goes, having unified computation and visualisation is a key trend. Video streaming, rather than shipping large station wagons of tapes, is also going to be key. It will be the default way to work.’


Visualisation and graphics without GPUs?

But for Intel, an important consideration is that there are a significant number of large HPC centres that are not using GPUs. As Jeffers explained: ‘They see the need for visualisation, so they carve off a smaller subset of a system to have GPUs in it.’ It is, he said, almost the default method that people are using, ‘but an issue with that is that you have to move the data over to that arena, and then display it with an OpenGL rasterisation environment on the GPU – which continues to mean moving the data at each step of the way.’ In his view, that entails delays, sometimes of many days if not weeks, before the scientist can get to see the visualisation and get that extra insight. For some time now, Intel has been adding


parallel compute capabilities to its products and, according to Jim Jeffers: ‘That parallelism adds capabilities across many workloads, and graphics workloads, being parallel, can strongly benefit from it.’ So Intel is developing open source tools to allow those large-scale installations to do visualisation without having to have a peripheral, GPU-based system for the visualisation. ‘A peripheral GPU has a limited amount of memory, but if you apply these techniques to the standard HPC cluster, which has more memory for the compute applications, you can


take advantage of that and not be limited in how you craft your data set. Sometimes you have to decimate your data set to sit on the GPU. You don’t have to do that if you can spread it across a cluster,’ Jeffers said. ‘We’re building with industry partners, such as


TACC [the Texas Advanced Computing Center] and Kitware, a stack that will scale across large HPC datacentres that have Xeon and Xeon Phi products as compute resources, without the need for a GPU, and render those in software, with a focus on enabling the visualisation to match the scale of the data, and drive towards real-time remote visualisation of that data at reasonable frame rates of 10 to 20 frames per second. ‘That’s the direction we’re taking. Ultimately,
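Spreading a data set across a cluster and rendering it in software usually means sort-last parallel rendering: each node rasterises only its own slice of the data into a colour-plus-depth buffer, and the partial images are depth-composited into one frame. The real stack named above (ParaView/VTK with OSPRay) handles this internally over MPI; the following is only an illustrative sketch of the compositing idea, with plain Python loops standing in for the nodes.

```python
# Illustrative sketch of sort-last parallel rendering: each 'node'
# rasterises its own chunk of the data into colour and depth buffers,
# then the per-node images are merged by keeping the nearest sample.
W, H = 4, 4
FAR = float("inf")

def render_local(points, w=W, h=H):
    """One node's pass: nearest-depth rasterisation of its own points.
    Each point is (x, y, z, colour) with integer pixel coordinates."""
    colour = [[0] * w for _ in range(h)]
    depth = [[FAR] * w for _ in range(h)]
    for x, y, z, c in points:
        if z < depth[y][x]:
            depth[y][x] = z
            colour[y][x] = c
    return colour, depth

def composite(images):
    """Depth-composite the per-node buffers into one final frame."""
    colour = [[0] * W for _ in range(H)]
    depth = [[FAR] * W for _ in range(H)]
    for c_img, d_img in images:
        for y in range(H):
            for x in range(W):
                if d_img[y][x] < depth[y][x]:
                    depth[y][x] = d_img[y][x]
                    colour[y][x] = c_img[y][x]
    return colour

# Two nodes, each holding a disjoint chunk of the full data set, so no
# node ever needs the whole set in memory -- the point Jeffers makes.
node_a = [(0, 0, 5.0, 1), (1, 1, 2.0, 2)]
node_b = [(0, 0, 3.0, 3), (2, 2, 4.0, 4)]
frame = composite([render_local(node_a), render_local(node_b)])
```

Because each node only ever holds and renders its own chunk, the aggregate memory of the cluster, rather than a single GPU’s memory, bounds the data size, which is why no decimation is needed.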


it is providing a faster time to science because of the memory size and efficiency. When the data begins to exceed the size that a GPU peripheral can cope with, then our performance is higher as well,’ he claimed. In Jeffers’ opinion, ‘TACC has one of the


world’s finest visualisation teams that both supports end users and enables development work. They are a perfect partner for us to work with.’ He described how Intel worked with TACC and Florida International University to visualise an aquifer near Biscayne Bay in Florida, as part of a study to monitor salt water intrusion into the freshwater aquifer. If intrusion does occur, the questions were: how does it happen; how does it flow through; and how quickly can the aquifer recover? As important is to look at the model and see if there is any remedial action that can help repair the damage. The issue is important not just scientifically but also in social terms: many of the freshwater


@scwmagazine | www.scientific-computing.com


Data: Michael Sukop, Sadé Garcia, Florida International University; Kevin Cunningham, US Geological Survey
Visualization: Carson Brownlee, Aaron Knoll, TACC

