Memories of the future
Robert Roe investigates developments in memory
technology aimed at the next generation of HPC applications
The rise of data-centric computing, increasingly large data sets, and the convergence of traditional high-performance computing with big data and real-time data analytics are driving significant development in memory technology. This has led to several technologies that are changing the long-established architectures of HPC, from the introduction of 3D memory to some unconventional technologies that are blurring the lines between compute, memory and storage.
SCIENTIFIC COMPUTING WORLD
One of the biggest challenges to
a sustained increase in application performance is the ability to move data either from memory or storage into the computational elements of a cluster. Each jump – from computation to
memory and then from memory to storage – has an associated latency, as David Power, head of HPC at Boston, explains. 'If you look at the hierarchy of I/O, you have the small caches [of memory] on the CPU. You then have a latency penalty going to main memory, but as you go to the next level of I/O, which would be your disk layer, suddenly the latency and your performance are orders of magnitude slower than you would get from memory or cache.'
This I/O latency, combined with the huge increases in processing power, means that many HPC users now find themselves in a situation where application performance is bound by memory bandwidth – the rate at which data can be read from or stored into a semiconductor memory by a processor.
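The gap Power describes can be felt even from a high-level language. The sketch below is a rough illustration rather than a benchmark – absolute timings depend on the hardware, and the operating system's page cache can hide much of the disk penalty when a file is read back immediately after being written – but it contrasts a memory-to-memory copy of a buffer with reading the same bytes back from a temporary file:

```python
import os
import tempfile
import time

SIZE = 16 * 1024 * 1024  # 16 MiB of sample data

data = bytearray(os.urandom(SIZE))

# In-memory pass: a straight memory-to-memory copy of the buffer.
t0 = time.perf_counter()
copy_in_memory = bytes(data)
t_mem = time.perf_counter() - t0

# Disk pass: write the buffer out to a temporary file, then read it back.
# Note: the OS may serve the re-read from its page cache, narrowing the gap.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

t0 = time.perf_counter()
with open(path, "rb") as f:
    from_disk = f.read()
t_disk = time.perf_counter() - t0

os.unlink(path)

print(f"memory copy: {t_mem * 1e3:.2f} ms, disk read: {t_disk * 1e3:.2f} ms")
```

On a system where the file genuinely comes off the storage layer, the disk pass is typically orders of magnitude slower, which is exactly the penalty the DIMM-attached storage discussed below is designed to shrink.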
Balancing the HPC ecosystem
To keep pace with developments in processor technology, memory companies are exploring changes to current HPC architectures. One idea is to bring data closer to computation; an example of this is Diablo Technologies, a memory specialist and partner of Boston. Diablo has created a storage medium that connects through the DIMM sockets used for system memory. This considerably reduces the latency penalty compared to traditional storage media, as the storage runs through the same interface as main memory and so incurs the same associated latency.
'We have been engaged with Diablo for a number of years; we have had their products in the lab and have been engineering solutions with them,' said Power. 'What Diablo is doing is integrating storage onto a memory form factor. We can, in effect, put a storage system into our memory DIMMs and provide that at the same sort of performance of