HPC 2012 Integration


design. Instead, application-level support for scale-out will allow for the use of much simpler designs. We will still have big servers, but they will be the exception, not the rule, at least in this space.

“Performance in the Hyperscale world will be gained in aggregate, at low energy, not necessarily through having a small number of beefy and energy-inefficient servers”

No longer will there be maintenance engineers scurrying around data centres, pulling apart servers and replacing parts. Instead, we’ll use Failure-in-Place (FiP) to allow server nodes to fail and be marked as ‘bad’, much as we disregard a few dead pixels on a laptop display, or a few bad blocks (or flash erase blocks) on a storage device. When you have 1,000+ server nodes in a single rack (and ultimately many more), the last thing you will see is an SLA that calls for individual node replacement. Instead, a certain number of bad servers will be tolerated before the whole system (or parts of it) is replaced as a scheduled event.
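As a rough sketch of how such a tolerance policy might look, the Python below models a rack that fails nodes in place and only flags replacement once a threshold is crossed. The class name, the five-per-cent tolerance and the node bookkeeping are illustrative assumptions, not any vendor’s actual tooling.

```python
# Illustrative sketch of a rack-level Failure-in-Place (FiP) policy.
# The class, its bookkeeping and the 5% tolerance are assumptions.

class FipRack:
    def __init__(self, node_count, tolerated_fraction=0.05):
        self.node_count = node_count
        self.tolerated_fraction = tolerated_fraction
        self.bad_nodes = set()

    def mark_bad(self, node_id):
        # Fail in place: record the node as bad; nobody is sent to fix it.
        self.bad_nodes.add(node_id)

    def usable_nodes(self):
        # The scheduler simply skips bad nodes, like bad blocks on a disk.
        return self.node_count - len(self.bad_nodes)

    def replacement_due(self):
        # Only once too many nodes have died is the rack (or part of it)
        # swapped out as a scheduled event.
        return len(self.bad_nodes) / self.node_count > self.tolerated_fraction

rack = FipRack(node_count=1000)
for node_id in (17, 214, 863):        # three nodes fail over time
    rack.mark_bad(node_id)
print(rack.usable_nodes())            # 997 -- still schedulable capacity
print(rack.replacement_due())         # False -- 0.3% failed, within tolerance
```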


Oliver Tennert, director of Technology Management and HPC Solutions at Transtec, explores HPC management challenges


An HPC system of today is similar to a mainframe system of the 1960s in one respect: it is a complex beast, and the management of such systems has not become any easier by itself. HPC systems are required to deliver peak performance, as their name indicates, but that is not enough. These systems must also be easy to handle, and unwieldy design and operational complexity must be avoided, or at least hidden from administrators and users.

Nowadays, given the huge variety of systems, form factors, technologies and vendors to choose from, an HPC environment may be highly heterogeneous in nature. Resources may therefore need to be provisioned dynamically, and anyone discussing the option of HPC cloud services must first give consideration to the technology required to realise such environments. The dynamic, on-demand provisioning of HPC resources itself no longer constitutes a problem; the sketch below shows the kind of resource matching involved.
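A minimal sketch, assuming a toy node pool with made-up names and attributes, of how a provisioning layer might match a job to one node in a heterogeneous pool; no real scheduler’s API is implied.

```python
# Toy sketch of dynamic provisioning over a heterogeneous pool.
# Node names, attributes and the matching rule are invented for illustration.

pool = [
    {"name": "thin-a", "cores": 16, "mem_gb": 64,  "gpu": False, "free": True},
    {"name": "fat-b",  "cores": 64, "mem_gb": 512, "gpu": False, "free": True},
    {"name": "gpu-c",  "cores": 24, "mem_gb": 128, "gpu": True,  "free": False},
]

def provision(cores, mem_gb, need_gpu=False):
    # Hand out the first free node that satisfies the request, or None.
    for node in pool:
        if (node["free"] and node["cores"] >= cores
                and node["mem_gb"] >= mem_gb
                and (node["gpu"] or not need_gpu)):
            node["free"] = False
            return node["name"]
    return None  # nothing suitable right now: queue the job or burst out

print(provision(cores=32, mem_gb=256))               # 'fat-b'
print(provision(cores=8, mem_gb=32, need_gpu=True))  # None: the GPU node is busy
```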


“An HPC environment may be highly heterogeneous in nature”


When examining HPC clouds, however, the real issue becomes the management of Big Data, such as the transfer of simulation data from the cluster to the user’s workplace to be processed. The solution is to use remote-visualisation technology, whereby the analysis and visualisation can be done at the point where the data was created.
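A back-of-envelope sketch of why this matters, using assumed figures (a 5 TB result set, a 1 Gbit/s WAN link and a 20 Mbit/s compressed remote-rendering stream), not measurements:

```python
# Back-of-envelope: shipping the data home versus visualising it remotely.
# All figures are assumptions chosen only to illustrate the order of magnitude.

dataset_tb = 5        # simulation output sitting on the cluster
wan_gbit_s = 1        # link from the data centre to the user's workplace
stream_mbit_s = 20    # compressed 1080p remote-rendering stream

transfer_h = dataset_tb * 8 * 1000 / wan_gbit_s / 3600
print(f"Copying {dataset_tb} TB over {wan_gbit_s} Gbit/s: {transfer_h:.1f} hours")
# -> about 11 hours before any analysis can even start

session_gb = stream_mbit_s / 8 * 3600 * 8 / 1000
print(f"An 8-hour remote-visualisation session moves ~{session_gb:.0f} GB")
# -> roughly 72 GB of pixels, and the raw results never leave the cluster
```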


Of course, beyond that comes the challenge of commercial independent software vendors’ inadequate licensing models for simulation applications when it comes to HPC on-demand. Stone-age concepts like node-locked licences still exist, making the transition into an HPC cloud next to impossible. Here, software vendors have to re-think their business models, as demand for HPC cloud services is mainly being inhibited by the issues above; some of these can be solved technically, but others must be solved commercially by the players in the HPC market.

