HIGH PERFORMANCE COMPUTING
…researcher trying to move terabytes of data from that instrument to their desktop so they can analyse it.’

In addition to removing partitions, there have also been some less conspicuous improvements that have allowed the Southampton facility to better serve a more varied user community, as it has jumped from the old system to a new one spanning four generations of Intel’s CPU microarchitecture. Each Iridis 5 node has 192 GB of memory, which Parchment noted is ‘quite a lot’ for the average user. Split among 40 processor cores, that works out at roughly 4.8 GB per core, and applications that do not scale particularly well now typically occupy only two to three nodes, or up to 120 cores. Ultimately, applications will decide the…
[Image: Iridis 5, at Southampton University]

‘In some ways, although the field might be traditional and might have a very long history with HPC, the people that turn up don’t necessarily have that. We have to do some quite basic introductions to Linux, cluster computing or supercomputing. These are mainly online training courses that we push out to those people, and sometimes face-to-face interactions if their challenges are particularly complicated,’ added Parchment. ‘We run the whole gamut of users, from novices to very experienced users that have quite complex challenges and problems. Being able to service that kind of diversity is incredibly difficult,’ Parchment concluded.
Simplifying the supercomputer

To help new and inexperienced users access the system, the node architecture has been simplified so that every node has the same memory and core count. The previous supercomputer had been configured with several partitions of different hardware and memory; the aim now is to provide a simpler system, as there had been issues with people placing jobs on hardware that was sub-optimal for their project.

‘It is difficult to provide an environment that engages not just at the novice level, but also at the advanced user level. You have got to ensure that on the one hand you try and provide a simple environment, which normally means you are restricting what people can do, whereas an advanced user will require a much more customisable environment. Providing the two of them is an issue of ongoing debate in the team,’ stated Parchment.
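To make the idea concrete, the sketch below shows how a homogeneous layout reduces a job request to simple arithmetic over cores and memory. It is a hypothetical Python helper, not the facility’s actual tooling; only the 40-core, 192 GB node figures are taken from the article.

```python
import math

# Node geometry quoted in the article: every Iridis 5 node is identical.
CORES_PER_NODE = 40
MEM_PER_NODE_GB = 192  # roughly 4.8 GB per core


def nodes_needed(cores: int, mem_gb: float) -> int:
    """Nodes required to satisfy both the core and the memory request.

    With homogeneous nodes the answer is pure arithmetic; there is no
    partition to choose and no way to land on unsuitable hardware.
    """
    by_cores = math.ceil(cores / CORES_PER_NODE)
    by_mem = math.ceil(mem_gb / MEM_PER_NODE_GB)
    return max(by_cores, by_mem)


# A modestly scaling application filling three nodes (120 cores), as in the article:
print(nodes_needed(cores=120, mem_gb=300))  # -> 3
# A memory-hungry job still fits without a special high-memory partition:
print(nodes_needed(cores=40, mem_gb=500))   # -> 3 (memory-bound, not core-bound)
```

Because every node is identical, both requests resolve without the user ever naming a hardware class.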
One of the things the team has worked on with the introduction of Iridis 5 is adding value for the user community, rather than just providing a supercomputer. This means thinking about the entire workflow, not just the computational elements. ‘An HPC system is a bit like a scientific instrument, in that you run your experiment on the instrument and you get data. And that is the end of the HPC component in some ways; certainly this was the case with our previous perspective,’ explained Parchment. ‘Visualisation has now been put into the system, data analytics is a component, cloud is a component, so we are thinking a little bit more about the workflow that people are using. It is not just about the scientific instrument generating the data, but how we can help the researcher to gain insight within the system,’ added Parchment.
‘Clearly, what we do not want is the need for various hardware partitions.’ Some HPC centres are focused specifically on testing new architectures, but the decision to standardise the node architecture for a ‘one size fits all’ approach could be useful to other centres, such as university clusters with a varied user community. ‘What we have been able to do with the march of technology is to ride that wave a little bit. Memory per node is not much of an issue, because it is actually quite large now by default. You can cope reasonably well with quite a wide spectrum of codes without having to do unnatural acts around partitioning up your system for specific application spaces, which makes it complicated for everybody,’ said Parchment.

The team has tried to keep the system as simple as possible, to give the user community a much clearer understanding of what they have to work with. ‘That determinism can be quite useful for a researcher to work with, without having to make too many decisions on what to use,’ added Parchment. ‘We really are trying to get to the point where we are adding value to what the researcher does, rather than just providing a facility, saying “here you go”, and coming back to you in three or four years’ time with a new one. We want to make our users more productive,’ Parchment concluded.
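For contrast, here is a minimal sketch of the routing decision that a partitioned system pushes onto the user, the kind of choice the uniform Iridis 5 layout removes. The partition names and limits here are invented for illustration and do not describe the retired Southampton configuration.

```python
# Invented partition table for illustration only; it does not describe the
# configuration of the previous Southampton machine.
PARTITIONS = {
    "standard": {"cores": 16, "mem_gb": 64},
    "highmem":  {"cores": 16, "mem_gb": 256},
    "manycore": {"cores": 64, "mem_gb": 128},
}


def pick_partition(cores_per_node: int, mem_gb_per_node: float) -> str:
    """Return the first partition whose per-node limits fit the request."""
    for name, limits in PARTITIONS.items():
        if cores_per_node <= limits["cores"] and mem_gb_per_node <= limits["mem_gb"]:
            return name
    raise ValueError("no partition fits this request")


# This is the decision a uniform node layout removes: a job needing 150 GB
# per node will not fit on "standard" and must be steered to "highmem" by a
# user who may not know the hardware well enough to make that call.
print(pick_partition(cores_per_node=8, mem_gb_per_node=150))  # -> highmem
```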