HPC 2012 Memory

Making memories

Although a fundamental part of HPC, is memory being taken for granted? Gerd Schauss and Chris Gottbrath share their thoughts

Memory presents an unprecedented challenge, says Gerd Schauss, director of strategic business development in EMEA for Samsung

In general, the high-performance computing community believes that performance will increase, power consumption will automatically lower and that the memory makers will solve any and all upcoming problems. Of course, we know this isn't the case, as memory is presenting an unprecedented challenge at the moment. We are reaching the boundaries of physics, and shrinking dies to sub-20nm geometries will not be as easy as shrinking from 50 to 40nm, or from 40 to 30nm.

The possibilities for improving performance and reducing power consumption are also somewhat limited. During the past 10 years, we have been able to reduce power consumption by roughly 35 per cent per year on average. Between now and 2020, we believe that if business continues as planned – with DDR4 being introduced in 2014 and the possibility of DDR5 in the future – we will reduce power consumption by an average of between 10 and 12 per cent per year. As an industry, every step we take from today will be a baby step compared to what we were able to achieve in the past.

There are some possibilities for improvement, but they represent disruptive changes. One option is the introduction of some type of 3D stacking, the most prominent of which is currently HMC (the hybrid memory cube). With this technology we predict that the bandwidth can be improved by a factor of 15 and, at the same time, the power consumption lowered by 60-70 per cent. On the surface, this seems like a major step forward that solves all the problems, but the issue is the commercial viability of the solution. There is an underlying assumption that memory is a free part of a larger package, and so it isn't given as much attention as it deserves.

HMC, for example, is a very complex technology. Nine, 18 or possibly more dies are vertically assembled in one package, and thousands of holes need to be drilled into the dies to connect them. Should just one connection fail to work properly, the entire stack must be discarded. Drilling the holes, making the connections, testing the stacks and including a logic layer – another extra piece of IP that everyone has to develop for themselves – all require additional production steps that in turn need significant investment.

“There is an underlying assumption that memory is a free part of a larger package and so it isn’t given as much attention as it deserves”

Speaking personally, I do believe we would take this risk if we had enough confidence from our customer base that it was wanted. There needs to be a combination of HPC manufacturers, suppliers and the big supercomputing centres who all agree that the performance advantage is well worth the extra cost.

Colleagues on the HPC centre side of the business comment that in most cases the CPU spends 95 per cent of its time waiting for data. It makes sense, therefore, to perhaps opt for a slower CPU so that memory can receive a larger investment. The system will be more balanced and, depending on the program, tasks will be performed faster. This wouldn't be desirable when it comes to Linpack tests, but the real-life benefits could be significant.

The main point that needs to be addressed is ensuring that users are more involved in the process. We would like to see a memory-centric HPC user conference so that people can really be made aware of the challenges the memory industry is currently facing, all of the problems
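The figures quoted above lend themselves to a quick back-of-envelope check. The sketch below is illustrative only: the reduction rates, the 2012-2020 window and the 95-per-cent wait figure come from the article, while the compounding model and the simple serial (Amdahl-style) runtime split are assumptions made here for clarity.

```python
# Back-of-envelope check of the article's figures (illustrative model only).

def remaining_power(annual_reduction: float, years: int) -> float:
    """Fraction of the original power left after compounding an annual reduction."""
    return (1.0 - annual_reduction) ** years

# Past decade: ~35 per cent reduction per year.
past = remaining_power(0.35, 10)

# 2012-2020 at 10-12 per cent per year (11 per cent shown here).
future = remaining_power(0.11, 8)

# Amdahl-style view of "the CPU waits 95 per cent of the time":
compute, wait = 0.05, 0.95           # assumed fractions of total runtime
faster_memory = compute + wait / 2   # double memory throughput, same CPU
slower_cpu = compute * 2 + wait      # halve CPU speed, same memory

print(f"power left after past decade:   {past:.3f}")
print(f"power left, 2012-2020 trend:    {future:.3f}")
print(f"runtime with 2x memory:         {faster_memory:.3f}")
print(f"runtime with half-speed CPU:    {slower_cpu:.3f}")
```

Under these assumptions only a few per cent of the original power remains after the past decade's 35-per-cent annual steps, while the 2012-2020 trend leaves roughly 40 per cent: the "baby step" claim in miniature. The last two lines show the balance argument: doubling memory throughput nearly halves the runtime, whereas halving CPU speed costs only about five per cent.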


