Infrastructure | HPC 2012: Laying the foundation


Ensuring the right HPC infrastructure is in place requires detailed planning and careful monitoring. Industry experts offer their advice


Investigating the ideal infrastructure

Richard Chapman, business development director at Aegis Data, questions whether traditional data centres are prepared for a paradigm shift


The reliance on high-performance computing (HPC) is increasing at a dramatic rate as it penetrates more and more aspects of business and society. The demands on HPC are also changing as data crunching becomes as important a topic as number crunching. Fast, precise results are required from the most demanding applications, whether it is for advanced modelling techniques in manufacturing and analysis for oil production, or weather forecasting, financial analytics and computational chemistry. As this demand increases and data volumes continue to rise, the infrastructure to support it needs to adapt, with around-the-clock access to data that is stored in a reliable and secure location. Many HPC projects often only require a


large number of core processors and access to complex data structures over a defined timescale, such as a week or a month. Therefore, a flexible yet powerful infrastructure is required to support these activities. Data centres need to be able to transform into ‘service centres’ that deliver applications on demand and respond to changing requirements with speed and agility. Whereas HPC used to be reliant on grid computing – often with thousands of desktops sharing the same resources in a distributed architecture – there appears to be a paradigm shift, moving away from the old technology




which was so reliant on low-power, high-footprint infrastructure. The next-generation HPC platforms are now moving to a virtualised environment that integrates virtualised networks, computing and storage. Storage virtualisation technologies can help organisations eliminate redundant data, reduce bandwidth requirements, gain flexibility and better utilise existing infrastructure to reduce space, power and cooling requirements. Virtualisation used to be off limits for HPC because the added overhead and complexity would only reduce performance. However, there have been major technological advances in virtualisation and the development of virtual stacking via blade server technology, which provides the agility, flexibility and fast deployment required to support the crunching of terabytes of generated data. Blade servers allow more


processing power in less space, simplifying cabling as well as reducing power consumption. Blade servers are much smaller and more compact, with a chassis taking up a quarter of a standard rack but drawing between 4 and 10kW. To put it into perspective, 100 old physical servers can now be reduced to one blade server chassis, which obviously results in a smaller footprint. However, these blade servers do require higher power density per U of rack space and as such require more advanced cooling technologies. Although the blade’s shared power and cooling means that it does not generate as much heat as the 100 traditional servers, the increased density of blade server configurations will result in higher overall demands on power consumption per rack, resulting in localised hotspots. The issue that many organisations face,


“Many HPC projects often only require a large number of core processors and access to complex data structures over a defined timescale, such as a week or a month”


however, is that many legacy data centres were not built to accommodate this new evolution, and adapting existing facilities from a low average power over a large space to a higher average power over a small space is simply not cost-effective. This raises the question of whether it is time to move to a new data centre that can accommodate this paradigm shift. Data centres designed for this next generation of HPC infrastructure will comprise an optimised environment that makes better utilisation of power and space, has a lower total cost of


ownership and is able to cope with the demands and stresses of virtualisation. As the use of HPC proliferates within organisations, the demands for more processing power will grow, and the amount of unstructured data that needs to be analysed will continue to double. Data centres will need a strong, dedicated power supply with the flexibility and scalability to match IT load, and the capability of delivering power headroom to meet future growth.
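The consolidation figures quoted above can be turned into a rough back-of-envelope power-density calculation. This is a minimal sketch: only the 100-to-1 consolidation ratio, the quarter-rack chassis footprint and the 4–10kW chassis draw come from the article; the 400W per legacy 1U server and the 42U rack height are illustrative assumptions, not vendor figures.

```python
# Back-of-envelope rack power-density comparison: legacy 1U servers
# versus a consolidated blade chassis, using the article's figures.
# Assumptions (not from a datasheet): ~400 W per legacy 1U server,
# a 42U standard rack, and a 10U chassis (a quarter rack, rounded).

def power_density_per_u(total_watts: float, rack_units: int) -> float:
    """Average power draw per U of rack space, in watts."""
    return total_watts / rack_units

# Legacy estate: 100 x 1U servers at an assumed 400 W each.
legacy_watts = 100 * 400                     # 40 kW spread over 100U
legacy_density = power_density_per_u(legacy_watts, 100)

# Blade consolidation: one 10U chassis at the article's upper figure of 10 kW.
blade_watts = 10_000
blade_density = power_density_per_u(blade_watts, 10)

print(f"legacy estate:    {legacy_density:.0f} W/U")   # 400 W/U
print(f"blade chassis:    {blade_density:.0f} W/U")    # 1000 W/U
print(f"density increase: {blade_density / legacy_density:.1f}x")  # 2.5x
```

Under these assumed figures the overall estate draws far less power, but the power concentrated in each U of rack space roughly doubles or triples, which is exactly why the article flags localised hotspots and more advanced cooling as the price of the smaller footprint.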



