INSIDE VIEW


Cluster simulation for you, me, everybody


Bernt Nilsson, senior VP of marketing at Comsol, says that cluster simulation is about to take off


Chip manufacturers have boasted about the performance of their products inside PCs for decades, and it is quite impressive: over the past four decades they have ramped up computing power by a factor of a million. A consistent boost in hardware performance has become something we take for granted; year after year, a new burst of speed has expanded the scope of application software. But the exponential growth of computational power as we know it is coming to an end, because CPU clock frequencies are levelling off to keep power consumption in check. How is this affecting the simulation industry?


The trend is to go parallel and achieve more performance while reducing power consumption. Multicore processors and clusters, in particular, have emerged as the pillars for continued growth in computational power.


Mainstream multicore

Multicore technology is already widely adopted: buy any PC today and you will find that a dual-core or quad-core processor comes as standard. Shared-memory programming models and multithreading are supported by most systems and compilers to deliver the parallel speedup. And while multicore has clear advantages, the idea of going beyond a single PC is not far-fetched either. Until now, setting up and operating a cluster has been a specialised and prohibitively expensive undertaking; this is about to change.
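To make the shared-memory, multicore point concrete, here is a minimal sketch of parallelism on a single PC, written in Python; solve_model is a hypothetical stand-in for a real simulation kernel, not part of any particular package.

```python
# Minimal multicore sketch: run several independent solves at once on
# one machine. solve_model is a hypothetical, CPU-bound stand-in for a
# real simulation run.
from concurrent.futures import ProcessPoolExecutor

def solve_model(mesh_density: int) -> float:
    # Placeholder work loop standing in for an expensive solve.
    return sum(i * i for i in range(mesh_density)) ** 0.5

if __name__ == "__main__":
    cases = [100_000, 200_000, 400_000, 800_000]
    # By default the pool starts one worker per core, so on a quad-core
    # PC the four cases run concurrently.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(solve_model, cases))
    print(results)
```

Processes rather than threads are used in the sketch only because they map cleanly onto cores in Python; a compiled simulation code would more typically use threads over shared memory.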


In a recent survey that we conducted among a group of more than 500 simulation professionals, 67 per cent are planning to start running simulations on clusters in the near future, or have already started using clusters for simulation. Clearly, cluster simulation is about to take off. Collaboration between software developers and hardware manufacturers bodes well for bringing it to more users. Microsoft, for example, is launching technical computing for clients, clusters and the cloud, a major initiative focused on empowering a broader group of people in academia, business and government to solve some of the world’s biggest challenges.

With this user-ready cluster computing platform, what types of problem can be solved? We have identified two common classes of simulation: solving the same problem over a wide range of parameter values for speedup, and solving one very large problem.


Speedup or extended memory

For optimal design studies and parametric sweeps, cluster computing brings tremendous value and is easy to configure. The boost in performance is proportional to the number of nodes in your cluster: the more nodes, the shorter the time to solution. The design of a microwave waveguide makes a good example. Waveguides are often used in mobile phone base stations and other microwave transmission systems to interconnect transmitters and receivers.




For each waveguide configuration, the designer needs to study a range of frequencies. By solving multiple frequencies in parallel, one frequency per node, the simulation can be completed in minutes rather than days. In our test example, the sweep covered more than 1,000 frequencies between 4 and 5.2 GHz, distributed over 12 compute nodes on a Microsoft cluster, and we achieved a speedup in solution time of nearly a factor of 11.
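As a hedged sketch of how such a sweep might be distributed over a cluster (the article does not say what tooling was used; mpi4py is my assumption here, and solve_at_frequency is a placeholder for the actual electromagnetic solve):

```python
# Sketch of a frequency sweep distributed over cluster nodes with MPI.
# Launch with, for example: mpiexec -n 12 python sweep.py
import numpy as np
from mpi4py import MPI

def solve_at_frequency(f_hz: float) -> float:
    # Placeholder returning a dummy scalar instead of a field solution.
    return float(np.cos(f_hz / 1e9))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# More than 1,000 frequencies from 4 to 5.2 GHz, as in the example above.
freqs = np.linspace(4.0e9, 5.2e9, 1001)

# Round-robin assignment: rank r solves every size-th frequency.
local = [(f, solve_at_frequency(f)) for f in freqs[rank::size]]

# Collect the partial result lists on rank 0.
chunks = comm.gather(local, root=0)
if rank == 0:
    table = sorted(pair for chunk in chunks for pair in chunk)
    print(f"{len(table)} frequencies solved on {size} ranks")
```

Because the frequencies are independent, the speedup is close to linear in the number of nodes; the measured factor of nearly 11 on 12 nodes reflects the usual scheduling and communication overhead.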


While a parametric sweep on a cluster is intuitive and delivers impressive speedup, there is also great interest in solving very large problems, which is a much more complex undertaking. The primary benefit of turning to clusters for solving a very large problem is extended memory. On a single-node system, a big simulation often runs out of memory and crashes; adding nodes, and with them distributed memory, lets you take on such problems. However, setting up the cluster and managing the job requires an up-to-the-task operating system and fully integrated application software.
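To illustrate what distributed memory buys, here is a rough sketch, again assuming mpi4py rather than any particular package's internals, in which a global unknown vector too large for any one node is stored in per-rank slabs:

```python
# Sketch: no rank ever allocates the full problem; each stores a slab.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 40_000_000  # total unknowns (an illustrative figure)
# Nearly equal slab sizes, spreading the remainder over the low ranks.
n_local = n_global // size + (1 if rank < n_global % size else 0)
slab = np.zeros(n_local)

# Global quantities need only local work plus one collective message;
# for example, the 2-norm of the whole distributed vector:
norm = np.sqrt(comm.allreduce(float(slab @ slab), op=MPI.SUM))
if rank == 0:
    print(f"{size} ranks, about {n_local} unknowns each, norm = {norm}")
```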


Engineers and scientists are turning to systems that allow them to spend more time solving engineering problems and less time dealing with the underlying computing issues. As an example, consider the energy absorbed in the human brain from a mobile phone’s electromagnetic waves. This multiphysics simulation couples electromagnetic waves with bioheat transfer. On a single-node computer, the simulation was killed after one hour because it ran out of RAM; on a simple four-node mini-cluster, the solution fits in RAM and solves in five minutes.
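The arithmetic behind that anecdote is worth spelling out; the figures below are invented for illustration, since the article reports none.

```python
# Back-of-envelope memory check with illustrative numbers only.
solver_footprint = 40e9  # bytes needed by the assembled problem (~40 GB)
ram_per_node = 16e9      # bytes of RAM on each node (16 GB)

for nodes in (1, 4):
    total_ram = nodes * ram_per_node
    verdict = "fits" if solver_footprint <= total_ram else "out of memory"
    print(f"{nodes} node(s), {total_ram / 1e9:.0f} GB total: {verdict}")
```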


No matter what kind of problem you have, the benefits of simulation on clusters ultimately make you more productive and let you run real-world models of the most complex design tasks. With parallel computing on clusters, we will continue on the path of exponential growth in computational performance for years to come.





