HIGH PERFORMANCE COMPUTING




Xtreme-D has launched Xtreme-Stargate, a gateway appliance that delivers a new way to access and manage high-performance cloud resources, at SC18, the largest US HPC conference, held in November each year. Xtreme-Stargate is a new HPC and AI cloud platform that provides high-performance computing and graphics processing, and the company aims to offer a cost-effective platform for both simulation and data analysis. Xtreme-D has already announced a proof-of-concept (PoC) programme, which has been tested with several early adopters.

The company has announced that public services will start at the end of 2018 in Japan, in collaboration with Sakura Internet as the major Japanese-focused HPC and datacentre provider. Services in Europe and the US are expected to commence in 2019.

Xtreme-D’s founder and CEO Shibata commented that the main benefit of the company’s Xtreme-D DNA product is in establishing an optimal HPC virtual cluster on public cloud quickly. ‘It saves hours of paid use on the cloud that it would have taken to create the configuration without Xtreme-D’s HPC templates as a means of gaining access.’

‘The tool also provides ongoing status of the job being processed, allowing better budget control of spending on the cloud, and alerts for action in case resources are not sufficient, so that work already done will not be lost. Xtreme-D Stargate adds a higher level of data security through hardware-based data transfer. The ease of access and the resulting cost savings help accelerate the use of cloud for HPC workloads,’ comments Dr David Barkai, technical advisor at Xtreme-D.

Shibata also notes that the bare-metal infrastructure provides a unified environment that helps to reduce the burden of managing multiple types of computing resources. ‘It is difficult to manage a hybrid cloud approach for an HPC solution. Customers incur a double cost and need to put in twice the effort to manage both on-premise and cloud implementations,’ comments Shibata. ‘There is also the significant cost of uploading and downloading data to and from the cloud datacentre. Therefore, the HPC native cloud customer requires a separate workload setup for each environment (cloud and on-premise).’

‘The bare-metal approach allows customers to manage a single environment. For example, AI and deep learning are best suited to cloud computing, but it is very difficult to size the infrastructure for deep learning. Xtreme-Stargate provides elastic AI- and HPC-focused infrastructure, and is thus the best solution for both HPC and AI. It addresses both data I/O and general-purpose computing,’ adds Barkai.
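As a rough illustration of the budget monitoring and alerting Shibata and Barkai describe, the sketch below shows a budget-aware watchdog for a metered cloud cluster. It is a minimal sketch only: the hourly rate, budget figures and function names are assumptions for illustration, not Xtreme-D’s actual product API.

```python
import time

# Illustrative sketch only: a budget-aware watchdog for a metered cloud
# HPC cluster. The rate, budget and function names are hypothetical,
# not Xtreme-D's actual API.

HOURLY_RATE_USD = 12.50   # assumed price of the provisioned cluster
BUDGET_USD = 500.00       # spending cap agreed for this job
ALERT_FRACTION = 0.9      # warn when 90 per cent of the budget is gone


def current_spend(start_time: float, now: float) -> float:
    """Estimate spend so far from elapsed wall-clock hours."""
    return (now - start_time) / 3600 * HOURLY_RATE_USD


def watch(start_time: float, poll_seconds: int = 300) -> None:
    """Poll until the alert threshold is hit, then warn the user so
    running work can be checkpointed rather than lost."""
    while True:
        spend = current_spend(start_time, time.time())
        if spend >= ALERT_FRACTION * BUDGET_USD:
            print(f"WARNING: ${spend:.2f} of ${BUDGET_USD:.2f} spent - "
                  "checkpoint now or extend the budget")
            return
        time.sleep(poll_seconds)
```

The point of alerting before the budget is exhausted, rather than when it runs out, is exactly the one made in the quote: it leaves time to checkpoint, so work already paid for is not lost.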


Serendipitous adoption of cloud services

When OCF set out to create its cloud replication service, the idea was fairly simple: to build a service that could be used for disaster recovery by mimicking an in-house cluster. The service can massively reduce downtime, giving users the chance to get their applications up and running again as quickly as possible. However, as Pancholi notes, by designing a robust service OCF had created something that could be used in a number of different ways.

‘Initially the thought was “what if my cluster goes bang?”, and that is how we came up with the idea, but actually we started to realise that you can utilise it for testing, or you can utilise it for providing a workspace for a group of users that are new or haven’t necessarily got access rights to your cluster,’ stated Pancholi.

‘In the testing scenario, let’s say there is a critical update to a critical piece of software for your cluster. Rather than deploying that on your private cluster, you could test it out on the cloud. You can see, as close as is possible, the consequences of that update, and if you are happy with it you can deploy it across the main cluster. It’s not going to be 100 per cent all of the time – nothing can be – but it certainly provides a level of assurance that you didn’t have previously.’

Pancholi explained other use cases, such as creating a workspace for a group of new users, or acting as a testbed for new software or a new architecture. ‘You create a service and put it out there, and then find that people will find novel ways to use it that you hadn’t considered, because it fits a niche that they have,’ adds Pancholi.
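The test-before-deploy workflow Pancholi outlines can be pictured as a short script: provision a cloud replica from the production cluster’s configuration, apply the candidate update there, run regression tests, and only roll out to the main cluster on success. The sketch below assumes hypothetical provisioning and test commands; a real site would substitute its own tooling.

```python
import subprocess

# Sketch of the test-in-the-cloud workflow described above. The command
# names and the --target flag are hypothetical placeholders; a real site
# would substitute its own provisioning tool and regression suite.

def update_is_safe(cluster_config: str, update_pkg: str) -> bool:
    """Stand up a cloud replica of the production cluster, apply the
    candidate update there and run the site's regression tests."""
    # 1. Provision a throwaway replica that mimics the in-house cluster.
    subprocess.run(["provision-replica", "--config", cluster_config],
                   check=True)
    try:
        # 2. Apply the critical update to the replica only.
        subprocess.run(["apply-update", update_pkg, "--target", "replica"],
                       check=True)
        # 3. A non-zero exit code means the update broke something, so it
        #    never touches the main cluster.
        tests = subprocess.run(["run-regression-suite", "--target", "replica"])
        return tests.returncode == 0
    finally:
        # 4. Tear the replica down so cloud spend stops immediately.
        subprocess.run(["teardown-replica"], check=True)
```

Tearing the replica down in the finally block bounds cloud spend even when the tests fail, which is what makes this kind of throwaway testing cheap.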


OCF’s Dean commented that these kinds of cloud services can be particularly suited to such testbed activities. Even in the short time the service has been available, OCF has already found users testing AI and IoT research in this way.

‘You can imagine that these are cool and interesting technologies – a bit of a buzzword at the minute – and when you are talking to an organisation a lot of people will say they want to do it, but you don’t know what the uptake is going to be like until you build something,’ said Dean.

‘With computing resources being pre-built for you in the cloud, you can grant access to that and then you can see how many of your users actually start using these services. Then you can start doing some analysis to find out if it would be viable to bring the service in-house,’ added Dean.
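A toy version of the viability analysis Dean mentions might compare the measured cloud bill against the amortised cost of equivalent in-house hardware. All of the figures and names below are assumed placeholders for illustration, not OCF data.

```python
# Toy viability analysis: compare the measured cloud bill against the
# amortised cost of equivalent in-house hardware. Every figure below is
# an assumed placeholder, not OCF data.

def monthly_cloud_cost(node_hours: float, rate_per_node_hour: float) -> float:
    """Metered cost observed once users start consuming the service."""
    return node_hours * rate_per_node_hour


def monthly_onprem_cost(capex: float, lifetime_months: int,
                        monthly_opex: float) -> float:
    """Amortise the purchase over the hardware lifetime, then add
    power, cooling and administration overheads."""
    return capex / lifetime_months + monthly_opex


if __name__ == "__main__":
    cloud = monthly_cloud_cost(node_hours=4_000, rate_per_node_hour=2.10)
    onprem = monthly_onprem_cost(capex=250_000, lifetime_months=48,
                                 monthly_opex=1_500)
    verdict = "bring it in-house" if cloud > onprem else "stay in the cloud"
    print(f"cloud ${cloud:,.0f}/month vs on-prem ${onprem:,.0f}/month "
          f"-> {verdict}")
```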





