HIGH PERFORMANCE COMPUTING
Benchmark results from deployments demonstrated resource utilisation and flexibility well-suited to data-intensive workloads such as HPC, AI, ML and analytics.
'With Excelero's NVMesh, our customers have access to an ultra-low-latency, high-performance approach to scale-out storage,' said Frank Herold, CEO of ThinkParQ. 'We've been impressed with NVMesh's ability to deliver the high IOPS and ultra-low latency of NVMe drives over the network with highly available volumes – as well as options for distributed erasure coding and BeeGFS' unmatched ability to efficiently handle all kinds of access patterns and file sizes.'
To demonstrate the possibilities of this new scale-out infrastructure, the companies used the industry-standard mdtest and IOR benchmarks. The test system was a compact 2U 4-server chassis, with a total of 24 NVMe drives, connected via a 100Gbit RDMA network to 8 BeeGFS client compute nodes. Tests were run on identical hardware configurations, with BeeGFS using direct-attached NVMe versus BeeGFS using NVMesh logical volumes.
With NVMesh offloading mirroring operations, BeeGFS file-create operations were boosted by up to 300 per cent, while metadata-read operations were boosted by 250 per cent.
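A comparison of this kind might be driven by a script along the following lines – a minimal sketch, assuming mpirun, ior and mdtest are installed and using only standard ior/mdtest options; the mount points, rank count and transfer sizes here are hypothetical, as the article does not give the exact invocations:

```python
#!/usr/bin/env python3
"""Sketch of an IOR/mdtest comparison across two BeeGFS configurations."""
import subprocess

# Hypothetical mount points for the two configurations under test.
TARGETS = {
    "beegfs-direct-nvme": "/mnt/beegfs_direct",
    "beegfs-on-nvmesh": "/mnt/beegfs_nvmesh",
}
NPROCS = "64"  # MPI ranks spread across the client nodes (assumed)

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

for name, mount in TARGETS.items():
    print(f"=== {name} ===")
    # 4K random writes, one file per process (cf. the 4K write IOPS figure)
    run(["mpirun", "-np", NPROCS, "ior",
         "-w", "-z", "-F", "-t", "4k", "-b", "64m",
         "-o", f"{mount}/ior.dat"])
    # Metadata workload: file creates, stats and reads (cf. the mdtest figures)
    run(["mpirun", "-np", NPROCS, "mdtest",
         "-F", "-n", "10000", "-i", "3",
         "-d", f"{mount}/mdtest"])
```

Running the same script against both mounts keeps every other variable fixed, which is the point of the like-for-like comparison described above.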
For small random file access, often seen as especially critical for application efficiency, NVMesh's low-latency technology boosted BeeGFS 4K write IOPS to 1.25 million per second, a 2.5x improvement.
By leveraging NVMesh distributed erasure coding for BeeGFS, customers can get up to 90 per cent usable capacity while still tolerating drive failures, all while achieving 75GB/s of streaming throughput from this entry-level system.
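The exact erasure-coding layout behind that 90 per cent figure is not stated; as a hypothetical illustration, an 18+2 scheme (18 data segments plus two parity segments) would yield exactly that usable fraction while tolerating two drive failures. A minimal sketch of the arithmetic:

```python
# Hypothetical erasure-coding layouts and their usable capacity.
# The 90 per cent figure is consistent with, e.g., an 18+2 layout;
# the actual NVMesh configuration used is not given in the article.
def usable_fraction(data_segments: int, parity_segments: int) -> float:
    """Fraction of raw capacity available for user data."""
    return data_segments / (data_segments + parity_segments)

for d, p in [(8, 2), (16, 2), (18, 2)]:
    print(f"{d}+{p}: {usable_fraction(d, p):.0%} usable, "
          f"tolerates {p} drive failure(s)")
# 8+2: 80% usable ... 18+2: 90% usable, tolerates 2 drive failure(s)
```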
'You can't build tomorrow's enterprise on yesterday's infrastructure,' said Lior Gal, CEO and co-founder of Excelero. 'We're delighted at how smoothly NVMesh and BeeGFS work together, and the way we enable organisations to work without compromise to either IT teams or end-users. We look forward to working further with ThinkParQ, so that more organisations worldwide can maximise their NVMe ROI and the efficiency of their entire operations.'
'As traditional compute scaling ends, power will limit all supercomputers. The combination of Nvidia's CUDA-accelerated computing and Arm's energy-efficient CPU architecture will give the HPC community a boost to exascale'
DDN continued the storage innovation at ISC with the reveal of EXA5, the next generation of its ExaScaler Storage system. EXA5 can scale either with all-NVMe flash or with a hybrid approach, letting users choose the performance of flash or the economics of HDDs according to need. Compatible with DDN's A3I storage solutions, which are factory pre-configured and optimised for AI, the system has been designed to reduce deployment time for an AI-ready data centre.
'We've built EXA5 directly for the
new era of HPC and AI in the context of multicloud. We are introducing an entirely new set of features aimed at today's most demanding AI and HPC end-to-end requirements, at any scale,' said James Coomer, senior vice president of products, DDN. 'EXA5 leads on performance, on-premise and in the cloud, with unmatched capabilities in security, stability, user and data management.'
Dan Stanzione, executive director, Texas
Advanced Computing Center (TACC), said: ‘With the goal of expanding the boundaries of science as we know it today, we are excited about the arrival of advanced new technology that can dramatically increase the performance at scale of our systems, and specifically of our new Top 10 supercomputer, Frontera. ‘DDN’s new EXA5 has the power to
provide the best I/O performance our users have ever experienced, and greatly reduce the I/O bottlenecks in large-scale computation. 'We believe that EXA5 will play a role in
many of the groundbreaking discoveries scientists will make with Frontera,’ Stanzione said.