high-performance computing
The Automata Processor, by Micron
for a storage back-end. There is a scenario where you would see a hybrid back-end composed of memory and storage that will use memory as your main storage rather than having a third tier of disk,' explained Power.
Shifting paradigms of HPC
The memory subsystem is the fastest component of an HPC cluster outside of the processing elements. But, as Power explains: 'When you are looking at very large distributed workloads, the vast majority of your time can be spent just moving bits around, moving data around the place.' Data sets are growing at an unprecedented rate and, while CPUs and other processing elements are doubling in speed in accordance with Moore's law, the demand for data is outstripping our ability to feed the CPU. This demand necessitates that memory companies look beyond iterative development to more disruptive technologies. Automata processing 'is a great technology; it can site even quite large databases and accelerate applications by an order of magnitude just due to the increased throughput and IOPS capabilities. Optimising that part of the workflow in your application can have significant gains when you are starting to look at large-scale computing,' said Power.
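Power's observation that most of the time can go on 'just moving bits around' can be made concrete with a simple roofline-style estimate. The sketch below is illustrative only: the peak-compute and memory-bandwidth figures are assumptions, not numbers quoted in this article, but they show how a data-hungry workload saturates the memory system long before it saturates the processor.

```python
# A back-of-the-envelope, roofline-style sketch of why data movement dominates.
# The peak-compute and bandwidth figures are assumptions for illustration only.

PEAK_FLOPS = 1.0e12        # assumed processor peak: 1 TFLOP/s
MEM_BANDWIDTH = 100.0e9    # assumed memory bandwidth: 100 GB/s

def achievable_flops(flops_per_byte):
    """Performance is capped by whichever runs out first: compute or bandwidth."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * flops_per_byte)

for intensity in (0.25, 1.0, 10.0, 100.0):
    fraction = achievable_flops(intensity) / PEAK_FLOPS
    print(f"{intensity:6.2f} flops per byte moved -> {fraction:6.1%} of peak compute")
```

With these assumed figures, any workload that performs fewer than ten operations per byte fetched never reaches peak compute, which is exactly the regime Power describes.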
A move away from commodity hardware
The importance of memory to the next-generation HPC application is clear, as David Chapman, VP of marketing at Tezzaron, explains: 'How important are brakes to winning an F1 race? They may not do much to increase your speed on the straights, but surviving the corners is fairly critical
to winning. My point is that the whole system has got to work well.' While the CPU and accelerators may be doing the computation, without a balanced system to support that computation application performance will suffer. 'Having said that, it seems clear (to me at
least) that the memory industry has been on an Intel/PC-centric path for so long that all other application environments have found themselves with a very narrow and largely unsuitable selection of memory devices.’ Chapman went on to explain that they are
unsuitable because ‘RAM performance characteristics demanded by PC applications are not optimum for other applications.’ As we have seen from the emerging
memory technologies, the answer to HPC's memory problems may lie in the development of bespoke technologies designed specifically for big data or large-scale HPC. 'HPC really means "workloads that PCs don't do very well". From that point of view it is unsurprising that HPC needs different memory from commodity memory for PCs,' said Chapman. The likely path for memory technology
over the next five to 10 years will be technology designed specifically to address the convergence of HPC and big data. Exotic technologies that disrupt the current design of HPC systems are already being readied to support the next generation of HPC systems. 'The prominent trend for memory over the
next 20 years is going to be fragmentation,' said Chapman. 'Although memory standards may not go away, they will be confined to the very slow end of the market, which always includes the consumer market, where price always carries the day.' 'The parts of the market where
system vendors thrive on performance differentiation first and price differentiation second will more and more readily embrace differentiated memory as a key tool for keeping their competitive edge sharp. In short, non-commodity memory fabricated with transistor-level 3D techniques will be one of the most visible trends in HPC going forward’, concluded Chapman. Another trend is that of fragmenting the
established subsystems of a cluster. Power explained that there are two main directions in which he sees memory technology evolving. The first, 3D XPoint, 'leverages advances in science and component physics to make memory faster and larger. But what Micron is doing is almost the reverse. Rather than bringing everything closer to the CPU and making it faster, they are taking computational capabilities out of the CPU so you have another computational tier within your overall architecture.' 'You can see how the traditional structure
of a cluster is being turned on its head, and you will now see computational elements throughout the architecture. It is going to be fun to try and program for all of that,' concluded Power.
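To make the idea of 'another computational tier' concrete, the sketch below is a minimal software stand-in for the style of work an in-memory automata engine performs: many pattern automata are held resident and every active state is advanced once per input symbol. This is purely illustrative Python, not Micron's tooling or programming interface, and the patterns are invented.

```python
# Minimal, illustrative stand-in for in-memory automata-style pattern matching:
# every literal pattern becomes a simple automaton, and all active states are
# advanced once per input symbol. NOT Micron's programming interface.

def build_nfa(patterns):
    """Compile literal byte patterns; a state is the pair (pattern id, position)."""
    start_states = {(i, 0) for i in range(len(patterns))}
    return patterns, start_states

def scan(nfa, data):
    """Report (pattern id, end offset) for every occurrence of every pattern."""
    patterns, start_states = nfa
    active = set()
    matches = []
    for offset, symbol in enumerate(data):
        active |= start_states          # a match may begin at any symbol
        advanced = set()
        for pat_id, pos in active:
            if patterns[pat_id][pos] == symbol:
                if pos + 1 == len(patterns[pat_id]):
                    matches.append((pat_id, offset))
                else:
                    advanced.add((pat_id, pos + 1))
        active = advanced
    return matches

nfa = build_nfa([b"GATTACA", b"TATA"])
print(scan(nfa, b"CCGATTACATATAGG"))    # [(0, 8), (1, 12)]
```

In the hardware, that per-symbol sweep over states happens in parallel inside the memory array, which is where the claimed order-of-magnitude gains come from; this software version only mimics the semantics, not the constant time per symbol.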