HIGH PERFORMANCE COMPUTING
that many large companies face. This solution provides a special-purpose dedicated cloud that is typical of the kind of user that wants to build a cloud resource while still maintaining their on-premise cluster. ‘There are lots of our
customers that want to take the peaks off those dedicated clusters and manage that more efficiently, not necessarily build their cluster to match peak usage but some normalised usage,’ said Lalonde. ‘Then, when they have additional workloads, put that in the cloud so you can optimise your financial models so you are spending the least amount in the cloud while still getting your work done and achieving your results.’ ‘That is where a lot of our
customers come at the cloud, saying “how do we leverage our on-premise investment and still go to the cloud to get more done when we need it, or to access specialised resources such as GPUs, because we do not have that on-premise to do a special project”,’ Lalonde added.
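Lalonde’s point about taking the peaks off dedicated clusters can be pictured as a simple placement rule: size the on-premise system for normalised demand and let anything beyond that burst to the cloud. The sketch below is purely illustrative; the node counts and job names are hypothetical, not Univa figures:

```python
# Illustrative "peak-shaving" sketch: the on-premise cluster is sized for
# normalised demand, and any overflow bursts to the cloud.
# All numbers and names here are hypothetical assumptions.

ONPREM_NODES = 100  # cluster sized for typical, not peak, usage

def place_jobs(pending_jobs, busy_nodes):
    """Return an (on_prem, cloud) split for a list of pending jobs."""
    free = max(ONPREM_NODES - busy_nodes, 0)
    on_prem = pending_jobs[:free]   # fill local capacity first
    cloud = pending_jobs[free:]     # overflow bursts to the cloud
    return on_prem, cloud

on_prem, cloud = place_jobs([f"job{i}" for i in range(130)], busy_nodes=20)
print(len(on_prem), "jobs stay on-premise,", len(cloud), "burst to the cloud")
```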
Building momentum
Andy Dean, HPC business development manager at OCF, noted that in OCF’s higher education and research customer base the use of cloud is gathering momentum. While the technology is not being used in every installation, Dean stressed that the use of cloud is increasingly coming up in conversations. ‘It is an interesting time. The
HPC community is relatively new to adopting public cloud, at least in our customer base, but it is definitely picking up. A large proportion of our customer base is in higher education and research. In a lot of the conversations that we are having, they are discussing how they can adopt these technologies.’ One thing Dean noted was that it is not security or other concerns that dominate these discussions, but the pricing of cloud technologies. This is because the cloud allows HPC users to spin up specific architectures for each application, which means that many workloads will be running on different hardware with different costs. This can make comparing the price of hybrid and public cloud installations quite difficult, as there is little like-for-like comparison.
‘It is as much around how you charge your users for these technologies or services as it is around the technical questions of getting their research computing into the cloud. It is about developing the cost models as much as how you run a job in the cloud,’ added Dean.
Dean also said that, while costs were continually coming down, that was where a lot of the conversations were taking place. Customers have different options available to them, so it is important they understand the costs and the pros and cons of a public cloud service or of developing a hybrid cloud strategy. Each can suit the needs of their users but, depending on the scale and requirements of the operation, the costs could be quite different.
‘What the public cloud gives
you is the ability to spin up the right infrastructure for the right workload,’ added Dean. ‘That makes figuring out the cost side of things quite complicated, because you are not really comparing like for like.’ This approach allows
datacentre operators to limit capital expenditure, as the GPU users, or those who require a high-speed storage appliance, can have their workloads run in the cloud, while the main research cluster benefits from the cost savings of not having to provide this specialised technology. ‘That means some are
probably running their jobs on infrastructure that is too expensive, or others could have done with some faster storage or a faster processor, for example. That is where I think it is really interesting going forward,’ added Dean.
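The cost-model question Dean raises often reduces to a break-even comparison: an amortised on-premise node-hour rate, which depends heavily on utilisation, against a cloud on-demand rate. A minimal sketch follows, with purely illustrative prices and lifetimes that are not OCF’s or any provider’s figures:

```python
# Rough break-even comparison between an on-premise node and a cloud instance.
# All prices, lifetimes and rates below are illustrative assumptions only.

CAPEX_PER_NODE = 12_000        # purchase price, amortised over the node's life
LIFETIME_HOURS = 5 * 365 * 24  # assumed five-year lifetime
OPEX_PER_HOUR = 0.20           # power, cooling and support per node-hour
CLOUD_PER_HOUR = 1.50          # on-demand price for a comparable instance

def onprem_cost_per_used_hour(utilisation):
    """Effective cost of one *used* node-hour at a given utilisation (0-1)."""
    fixed = CAPEX_PER_NODE / LIFETIME_HOURS + OPEX_PER_HOUR
    return fixed / utilisation

for u in (0.2, 0.5, 0.9):
    print(f"utilisation {u:.0%}: on-prem {onprem_cost_per_used_hour(u):.2f} "
          f"vs cloud {CLOUD_PER_HOUR:.2f} per node-hour")
```

With these assumed numbers a lightly used cluster costs more per delivered node-hour than the cloud, while a busy one is cheaper, which is why the comparison is rarely like for like.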
Developing a scheduler
With the rising interest in cloud migration and hybrid cloud deployments in HPC, Univa saw the opportunity to integrate its existing expertise in scheduling
with cloud migration software that could help to manage the flow of resources where they are needed. ‘Certainly there is lots of
activity for us now in the cloud, and that’s where Navops Launch comes in,’ commented Lalonde. ‘To oversimplify, we set a dial so an organisation can turn that dial and determine what goes to the cloud and what does not. In our world, you ideally abstract the complexity of the cloud from the end-users, those researchers and scientists that are submitting work.’
The system works much in the same way as using a job scheduler – in this case, Univa’s Grid Engine. Users submit work that is then pushed to existing infrastructure or to cloud resources, depending on the policy and tags associated with
a particular job. ‘They submit work in the same way that they always have to the Grid Engine master on-premise, and then the Grid Engine master determines, in conjunction with [Navops] Launch and the policy, where the workload is actually going to run,’ said Lalonde. An example would be a
workload that may be tagged with GPU. If the organisation does not have GPUs on-site, then Navops could push that workload out to the cloud, while Grid Engine manages the on-premise workloads as normal. Another example would be a job that is tagged ‘secure’, so it will never be run outside of the organisation.
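The tag-and-policy behaviour Lalonde describes can be pictured as a simple routing rule: a ‘secure’ tag pins a job on-premise, a ‘gpu’ tag sends it to the cloud when no local GPUs exist, and everything else stays on the existing infrastructure. The sketch below is a generic illustration of that idea, not Navops Launch or Grid Engine code; the tag names and policy are assumptions:

```python
# Toy illustration of policy-based placement of tagged jobs.
# Not Navops Launch or Grid Engine code; tags and rules are assumptions.

ONPREM_HAS_GPUS = False  # assumed site without on-premise GPUs

def place(job_tags):
    """Decide where a job runs from its tags."""
    if "secure" in job_tags:
        return "on-premise"                   # never leaves the organisation
    if "gpu" in job_tags and not ONPREM_HAS_GPUS:
        return "cloud"                        # specialised resource only in cloud
    return "on-premise"                       # default: existing infrastructure

print(place({"gpu"}))             # -> cloud
print(place({"secure", "gpu"}))   # -> on-premise
```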
‘Through policy decisions, you can determine how that cloud cluster interacts with the on-premise cluster. How it grows
Boston FEATURED PRODUCT
Introducing the Boston Quattro 12256-T02, featuring the AMD EPYC™ 7002 Series: four hot-swappable dual-processor nodes utilising the latest technologies available on x86.
Each node supports two 2nd Generation AMD EPYC processors, up to 4TB DDR4-3200MHz memory in 16 DIMM slots, up to six SATA drives and multiple add-on card options. The 4th generation of PCI-E provides double the bandwidth of the previous generation, required for some high-performance network controllers and next-generation GPUs.
The Quattro 12256-T02 is designed with power and cost-effectiveness in mind, leveraging Titanium efficiency power supplies to reduce power consumption. The flexibility and rich feature set of the Quattro 12256-T02 makes it a great solution for high-performance and high-density computing. Contact Boston to arrange testing across a number of solutions and AMD EPYC™ 7002 Series CPU SKUs.
www.boston.co.uk | 01727 876100