Supercomputing software challenges

Facing the future

Those who work with high-performance computers are often acutely aware of the challenges adapting to new architectures can bring. Beth Sharp speaks to the experts.

Oliver Pell, VP of engineering at Maxeler

A lot of the challenges we face are very much the same as would be found in any kind of large-scale computing system. We have the classical supercomputing challenge of limited memory bandwidth into our computational engine – the memory wall. In supercomputing, there is also a general ‘power wall’, where we simply can’t dissipate as much power within a limited space as we would like. Both of those problems are issues we can work to overcome.
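To make the memory wall concrete, here is a back-of-envelope sketch (an editorial illustration, not something from the interview): it compares the arithmetic intensity of a simple triad kernel against an assumed machine balance to show why memory bandwidth, rather than peak compute, often limits throughput. The peak and bandwidth figures are placeholder numbers, not Maxeler's.

```c
/* Back-of-envelope "memory wall" check (illustrative numbers only).
 * A kernel is memory-bound when its arithmetic intensity (flops per byte
 * moved) is below the machine balance (peak flops / memory bandwidth). */
#include <stdio.h>

int main(void)
{
    /* Assumed hardware figures -- placeholders, not real device specs. */
    double peak_gflops   = 500.0;   /* peak compute, GFLOP/s  */
    double bandwidth_gbs = 50.0;    /* memory bandwidth, GB/s */

    /* Triad kernel a[i] = b[i] + s * c[i] on doubles:
     * 2 flops per element, 3 * 8 bytes moved per element. */
    double flops_per_elem = 2.0;
    double bytes_per_elem = 24.0;

    double intensity = flops_per_elem / bytes_per_elem;   /* flop/byte */
    double balance   = peak_gflops / bandwidth_gbs;       /* flop/byte */

    /* Attainable rate under a simple roofline-style bound. */
    double attainable_gflops = intensity * bandwidth_gbs;

    printf("arithmetic intensity : %.3f flop/byte\n", intensity);
    printf("machine balance      : %.3f flop/byte\n", balance);
    printf("attainable           : %.1f GFLOP/s of %.1f peak (%s-bound)\n",
           attainable_gflops, peak_gflops,
           intensity < balance ? "memory" : "compute");
    return 0;
}
```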


On top of this we have all the usual problems of parallelisation. A modern computing system may have multiple levels of parallelisation of an application, and this can lead to a great deal of complexity at both the hardware and software level. Again, this is something we work to minimise by making each compute unit much faster, so that less parallelisation is needed overall.

In conventional software running on a large HPC cluster, there are usually multiple levels of parallelisation that need to be managed. You have vector units within each processor core, then multiple cores within a single processor chip, and multiple processor chips within a compute node, all of which have slightly different parallelism characteristics. You then have multiple nodes in a cluster, and there may be different levels of hierarchy in the cluster as well. As you can imagine, efficiently managing the parallelisation across all of these levels can become incredibly complicated!
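The hierarchy described here – SIMD lanes inside a core, threads across cores, processes across nodes – is what a conventional hybrid code has to juggle explicitly. The fragment below is a generic MPI + OpenMP sketch (not Maxeler code, and not from the interview) that just makes the three levels visible; the array size and the per-element work are placeholders.

```c
/* Three levels of parallelism managed explicitly in one small program
 * (generic hybrid-HPC sketch, not code from the interview). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000  /* elements owned by each MPI rank (placeholder size) */

static double a[N], b[N];

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);            /* level 3: processes across cluster nodes */

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Levels 1 and 2: threads across the cores of this node, SIMD lanes
     * within each core, expressed with one combined OpenMP construct. */
    double local = 0.0;
    #pragma omp parallel for simd reduction(+:local)
    for (long i = 0; i < N; i++) {
        b[i] = 2.0 * a[i] + 1.0;       /* placeholder per-element work */
        local += b[i];
    }

    /* Combine the per-node partial results across the cluster. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d ranks: %g\n", nranks, global);

    MPI_Finalize();
    return 0;
}
```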


As a system grows, users begin to run into the fact that if a large commodity system is being used to run a single processing job, then at some point nodes will fail, and the more nodes you have, the more frequently one of them will fail. The mean time to failure of the entire system therefore becomes very small, and you almost get to the point where you can’t run any computation at all, because all your time is spent checkpointing in case your system crashes. There’s a limit to the scalability of any system.
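Rough numbers make the scaling argument vivid (my own illustration, not figures from the interview): with independent, exponentially distributed failures, an N-node system has roughly 1/N of the mean time between failures of a single node, and Young's classic approximation sqrt(2 * MTBF * checkpoint_cost) gives the checkpoint interval that minimises wasted time. The node MTBF and checkpoint cost below are assumed placeholders.

```c
/* Rough failure/checkpoint arithmetic for a large cluster
 * (illustrative assumptions, not data from the interview). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double node_mtbf_h   = 5.0 * 365.0 * 24.0; /* assume one node fails every ~5 years */
    double ckpt_cost_h   = 0.25;               /* assume a checkpoint takes 15 minutes */
    int    node_counts[] = { 100, 10000, 100000 };

    for (int k = 0; k < 3; k++) {
        int n = node_counts[k];

        /* Independent exponential failures: system MTBF shrinks as 1/N. */
        double sys_mtbf_h = node_mtbf_h / n;

        /* Young's approximation for the optimal checkpoint interval. */
        double interval_h = sqrt(2.0 * sys_mtbf_h * ckpt_cost_h);

        /* Rough fraction of wall-clock time lost to checkpointing alone. */
        double overhead = ckpt_cost_h / (interval_h + ckpt_cost_h);

        printf("%6d nodes: system MTBF %7.1f h, checkpoint every %5.2f h, "
               "~%4.1f%% of time spent checkpointing\n",
               n, sys_mtbf_h, interval_h, 100.0 * overhead);
    }
    return 0;
}
```

With these assumed figures, a 100,000-node machine fails roughly every half hour and spends about a third of its time checkpointing, which is the effect being described.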


We build very dense, fast individual compute units to limit these problems, which is different from the ‘scale-out large clusters of commodity machines’ approach that is relatively dominant in the Top500 list. Instead of having tens of thousands of relatively poorly performing compute nodes, we have a smaller number of fast compute nodes. Not only does this deliver better performance from small systems with a low number of nodes, but there are also improvements in both power and space consumption.


The push to exascale is interesting because there seems to be a growing realisation that more of the same may not be the way forward. A lot of the programming challenges will come down to how the hardware reaches that scale. Hypothetically speaking, if you have a one-core computer that could deliver an exaflop of computing power, then exploiting that performance will not be that difficult. Whether you can properly frame the science you want to do in a program is another question, however. If you take the opposite extreme and imagine a system with hundreds of millions of cores, you can see that we will face a massive programming challenge, as we have a hard enough time writing software that can run across four to eight cores inside a workstation – and we certainly don’t have much fun on a big HPC system with thousands of cores! One solution is to use fewer compute units that run at much greater speeds.

Part of the driving force behind having a large number of simple cores has been power efficiency – clock frequencies are not going up any more, and if you can deliver a lot of performance in a low-power way then you don’t need to worry as much about parallelisation. If you’re going to use a hundred million cores, you won’t get down to a single chip – no matter how fast it is – but if you can reduce the count by two orders of magnitude it makes a big difference to the complexity of the system. A lot of the software challenges we currently face then become much less of a problem.
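As a closing back-of-envelope on the scale in question (editorial arithmetic, not figures from the interview): an exaflop is 10^18 operations per second, so the number of compute units needed is simply that figure divided by per-unit throughput, and making each unit a hundred times faster cuts the count by the same two orders of magnitude. The per-unit speeds below are assumed placeholders.

```c
/* How many compute units does an exaflop take? (illustrative only) */
#include <stdio.h>

int main(void)
{
    double exaflop = 1e18;  /* target: 10^18 floating-point ops per second */

    /* Assumed per-unit throughput in FLOP/s -- placeholder figures. */
    double per_unit[] = { 1e10,    /* ~10 GFLOP/s: a simple low-power core   */
                          1e12 };  /* ~1 TFLOP/s: a much faster, denser unit */

    for (int i = 0; i < 2; i++)
        printf("at %.0e FLOP/s per unit: %.0e units needed\n",
               per_unit[i], exaflop / per_unit[i]);
    return 0;
}
```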





