As data processing acceleration becomes critical for many applications, Kalray processors offer a unique, fully programmable alternative to GPUs, ASICs and FPGAs. The Coolidge processor can run a wide set of heterogeneous processing and control tasks in parallel and in real time, such as AI, mathematical algorithms, inline signal processing, and network or storage software stacks.

In the Quest for Higher Learning, High Density Servers Hold the Key

With the right tools in place, scientists can accelerate their research, analyse massive amounts of information, and complete more data-intensive projects. Multi-node servers combine compute, storage and networking at lower TCO and greater efficiency. Based on AMD EPYC™ processors, GIGABYTE multi-node systems are designed for high density and are among the densest air-cooled AMD EPYC platforms on the market.

Using the 2U4N short-depth chassis, with dedicated CPUs per node and room left for additional networking, up to 10,240 cores and 20,480 threads fit in 40U of rack space. This not only provides an extremely dense compute configuration in a compact footprint, but also helps to greatly reduce TCO. GIGABYTE's H-Series systems deliver virtually identical performance to four 1U servers while reducing rack space by 50%, power consumption by 4%, the number of power supplies by 75%, and the number of base power, 1GbE and management cables by 56%.
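The density figures above can be sanity-checked with simple arithmetic; note that the 2-way SMT assumption and the per-node socket configuration are inferences, not stated in the text:

```python
# Back-of-the-envelope check of the quoted GIGABYTE density figures.
RACK_U = 40
CHASSIS_U = 2            # 2U4N form factor: 2U of rack space, 4 nodes
NODES_PER_CHASSIS = 4
TOTAL_CORES = 10_240     # figure quoted in the text

chassis = RACK_U // CHASSIS_U          # 20 chassis per 40U rack
nodes = chassis * NODES_PER_CHASSIS    # 80 nodes
cores_per_node = TOTAL_CORES // nodes  # 128 cores per node
threads = TOTAL_CORES * 2              # 20,480 threads with 2-way SMT

print(chassis, nodes, cores_per_node, threads)  # 20 80 128 20480
```

The implied 128 cores per node would correspond to, for example, two 64-core EPYC parts per node; the exact socket count is an assumption here.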

The FPGA value proposition for HPC has strengthened significantly in recent years. Working alongside CPUs, FPGAs provide part of a heterogeneous approach to computing. For certain workloads, FPGAs provide a significant speed-up versus CPUs – in one reported case, 50x faster for machine learning inference.

Cerebras is a computer systems company dedicated to accelerating deep learning. The Wafer-Scale Engine (WSE) – the largest chip ever built – is at the heart of its deep learning system, the Cerebras CS-1. Larger than any other chip by 56 times, the WSE delivers more compute, more memory and more communication bandwidth, enabling AI research at previously impossible speed and scale.
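A 50x kernel speed-up does not imply a 50x application speed-up unless most of the run time is actually offloaded; a quick Amdahl's-law sketch makes the point (the 90% offload fraction below is illustrative, not a figure from the article):

```python
def overall_speedup(offload_fraction: float, kernel_speedup: float) -> float:
    """Amdahl's law: only the offloaded fraction of run time is accelerated."""
    return 1.0 / ((1.0 - offload_fraction) + offload_fraction / kernel_speedup)

# If 90% of run time is offloaded to an FPGA running inference 50x faster,
# the whole application speeds up by far less than 50x:
print(round(overall_speedup(0.90, 50.0), 1))  # 8.5x overall
```

This is why heterogeneous CPU+FPGA designs focus on keeping the non-accelerated fraction of the workload as small as possible.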

Scientific Computing World, Winter 2021

Intel offers a

comprehensive portfolio to help customers achieve outstanding performance across diverse workloads. With deep learning

acceleration built directly into the chip, Intel hardware is designed to support the convergence of AI and HPC. And Intel software, including a development toolkit and multi-architecture programming model, plus a broad software ecosystem, helps ensure HPC users get more value from their hardware and application investments. Taking full advantage of

Kalray's patented technology, the Kalray MPPA Coolidge processor is a scalable 80-core processor designed for intensive real-time processing.


Kingston's server SSD and memory products support the global demand to store, manage and instantly access large volumes of data in both traditional databases and big data infrastructure. The need to store and manage larger amounts of data has increased exponentially in recent years. Data centres, cloud services, edge computing, the internet of things and co-locations are just some of the business models that amass tremendous volumes of data.

Marvell offers a broad

portfolio of data infrastructure semiconductor solutions spanning compute, networking, security and storage. The company's products are deployed by organisations in enterprise, data centre and automotive data infrastructure market segments that require ASICs or data processing units equipped with multi-core, low-power ARM processors.

MemVerge's Memory Machine virtualises DRAM and persistent memory so

that data can be accessed, tiered, scaled and protected in memory. The software-defined memory service is designed to be compatible with existing applications, providing access to persistent memory without application changes.

Micron's technology is

powering a new generation of faster, intelligent, global infrastructures that make mainstream AI possible. Its fast, vast storage and high-performance, high-capacity memory and multi-chip packages power AI training and inference engines – whether in the cloud or embedded in mobile and edge devices.

Scientists leverage HPC as their tool for discovery and path to solving some of the world’s most impactful and complex problems. Accelerating time to insight and deploying HPC at scale requires incredible amounts of raw compute capability, energy efficiency and adaptability that are uniquely found in the Xilinx platform. With custom datapaths and

memory hierarchies, and a rich developer toolset, Xilinx FPGA-accelerated applications can enable optimised hardware and software implementations with the flexibility to adapt to changing requirements without sacrificing performance or energy efficiency.
