HIGH PERFORMANCE COMPUTING | Sponsored by Panasas
‘There is a PMDK available on Arm and emulation of NVDIMM on the Arm platform. We are also actively contributing to the mainline Linux kernel.’
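The NVDIMM emulation mentioned in the quote exposes persistent memory through the same load/store programming model that PMDK targets: map a region, write through it, then explicitly flush. As a rough illustration only (not taken from the Sage2 code base, and using an ordinary memory-mapped file in place of real persistent memory), that pattern can be sketched in Python:

```python
import mmap
import os

# Hypothetical illustration: stand in for a persistent-memory region with
# a memory-mapped file. PMDK does the equivalent against real (or
# emulated) NVDIMMs, using CPU cache flushes rather than msync.
PATH = "/tmp/pmem_demo.bin"
SIZE = 4096

# Back the "persistent" region with a fixed-size file.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

fd = os.open(PATH, os.O_RDWR)
region = mmap.mmap(fd, SIZE)

# Store data through the mapping, then flush so it reaches stable storage.
region[0:5] = b"hello"
region.flush()          # analogous to a persist/flush call in libpmem

region.close()
os.close(fd)

# Re-open independently and verify the data survived the unmap.
with open(PATH, "rb") as f:
    print(f.read(5))    # b'hello'
```

The point of the model is that persistence becomes a memory operation plus an explicit flush, rather than a filesystem write call.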
implemented. This includes the Slurm scheduler, TensorFlow and dCache for Mero, in addition to containers and high-speed object transfer, which are still in development.

In a blog post on the Seagate website, Shaun de Witt, Sage2 application owner at the UK’s Culham Centre for Fusion Energy, gives insight into some of the application partners’ ambitions for the project. In the post he states: ‘Science is generating masses and masses of data. It’s coming from new sensors that are becoming increasingly more accurate or generating more volumes of data. But data itself is pretty useless. It’s the information you can gather from that data that’s important.’

‘Making use of these diverse data sets from all these different sources is a real challenging problem. You need to look at this data with different technology. So some of it is suitable for GPUs. Some of it is suitable for standard CPUs. Some of it you want to process very low to the actual data — so you don’t want to put it into memory, you actually want to analyse the data almost at the disk level,’ added de Witt.

Sage2 evaluated applications using the prototype system housed at the Jülich research centre. Principally, this work will focus on operating these new classes of workloads with optimal use of the latest generation of non-volatile memory. Sage2 also provides an opportunity to study emerging classes of disk drive technologies, for example those employing HAMR and multi-actuator technologies.

‘The Sage2 architecture actually allows us this functionality,’ de Witt explains. ‘It is the first place where all of these different components are integrated into a single layer, with a high-speed network which lets us push data up and down and between the layers as fast as possible, allowing us to generate information from data at the fastest possible rate.’

FEATURED PRODUCT

The ActiveStor® Ultra turnkey HPC storage appliance offers the extreme performance, enterprise-grade reliability and manageability required to process the large and complex datasets associated with HPC workloads and emerging applications like precision medicine, AI and autonomous driving.

ActiveStor Ultra runs the PanFS® parallel file system to offer a storage solution that provides unlimited performance scaling and features a balanced node architecture that prevents hot spots and bottlenecks by automatically adapting to dynamically changing workloads and increasing demands, all at the industry’s lowest total cost of ownership.

https://www.panasas.com/products/activestor-ultra/?utm_source=scw

www.scientific-computing.com | @scwmagazine
The Sage2 project will utilise the SAGE prototype, which has been co-developed by Seagate, Bull-ATOS and the Jülich Supercomputing Centre (JSC), and is already installed at JSC in Germany. The SAGE prototype will run the object store software stack along with an application programming interface (API), further researched and developed by Seagate to accommodate the emerging class of AI/deep learning workloads and to suit edge and cloud architectures.

The HPC big data analysis use cases and the technology ecosystem are continually evolving, and new requirements and innovations are brought to the forefront by AI/deep learning. It is critical to address these challenges today without ‘reinventing the wheel’, leveraging existing initiatives and know-how to build the pieces of the exascale puzzle as quickly and efficiently as possible.
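The article does not describe the Seagate API itself, but the general shape of an object-store interface can be sketched. Everything below — names, signatures and the in-memory backing — is a hypothetical illustration, not the Sage2/Mero API: objects are addressed by key and written whole, which is what lets the storage layer place and migrate them without involving the application.

```python
import hashlib

class ObjectStore:
    """Hypothetical minimal object-store API. A real HPC object store
    exposes far more (tiering hints, asynchronous I/O, data layouts)."""

    def __init__(self):
        self._objects = {}  # key -> immutable object data

    def put(self, key: str, data: bytes) -> str:
        """Store an object whole and return a content checksum."""
        self._objects[key] = bytes(data)
        return hashlib.sha256(data).hexdigest()

    def get(self, key: str) -> bytes:
        """Fetch an object by key; raises KeyError if absent."""
        return self._objects[key]

    def delete(self, key: str) -> None:
        del self._objects[key]

store = ObjectStore()
store.put("sensor/run42", b"temperature=7.3e6")
print(store.get("sensor/run42"))  # b'temperature=7.3e6'
```

The flat key namespace, rather than a POSIX directory tree, is what makes this style of interface a natural fit for the AI and sensor-data workloads the project describes.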
If Sage2 is to validate this architecture, there is still some way to go, but the project aims to provide the groundwork to deliver reliability and performance for future large-scale science and research projects. The storage system is being designed to provide significantly enhanced scientific throughput and improved scalability, alongside an object storage system that can abstract away the technical details of the data tiering needed to achieve that performance.
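Tiering that is transparent to applications can be illustrated with a toy policy. The sketch below is a deliberate simplification under assumed rules (two tiers, promote-on-read, demote the least-recently-used object), not the Sage2 design:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small fast tier in front of a large slow
    tier. Reads promote objects to the fast tier; when the fast tier is
    full, the least-recently-used object is demoted."""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()  # key -> data, in LRU order
        self.slow = {}             # key -> data
        self.fast_capacity = fast_capacity

    def put(self, key, data):
        self.slow[key] = data      # new data lands in the slow tier

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)     # refresh LRU position
            return self.fast[key]
        data = self.slow[key]              # slow-tier hit: promote
        self.fast[key] = data
        if len(self.fast) > self.fast_capacity:
            old_key, old_data = self.fast.popitem(last=False)
            self.slow[old_key] = old_data  # demote the coldest object
        return data

store = TieredStore(fast_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.encode())
store.get("a"); store.get("b"); store.get("c")  # "a" gets demoted
print(sorted(store.fast))  # ['b', 'c']
```

The caller only ever sees `put` and `get`; which tier currently holds an object is an internal detail, which is the property the Sage2 team is after at much larger scale.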
Summer 2020 | Scientific Computing World