high-performance computing


processors will be used in a supporting role, such as storage systems.


According to leading HPC analyst firm Intersect360, 85 per cent of accelerated systems use Nvidia GPUs, while four per cent use Intel Xeon Phi. At the high end, 37 of the systems in the TOP500 list use Nvidia GPUs, while 12 use Xeon Phi. The Knights Landing variant of Intel's Xeon Phi won't be with us until 2015, and Nvidia remains tight-lipped about when the next-generation Maxwell GPU will appear – but the expectation is that Maxwell will ship during 2014. In addition to being rumoured to be four times faster than Nvidia's current Kepler GPU, the next-generation Maxwell device will augment the GPU with ARM processor cores (a development from Project Denver), and will also offer Unified Virtual Memory. In the meantime, Nvidia's Tesla K40 GPU (which, at 12 GB, has double the memory of its predecessor) broadens the potential market for GPU-accelerated applications. Nvidia also has the Tegra K1, which offers a high-performance, heterogeneous architecture with very low power consumption. Although it was developed for the mobile market, it will be deployed in new HPC architectures.


In 2013, Imagination Technologies acquired MIPS Technologies – remember that MIPS processors were the engines that powered many of SGI's most successful HPC systems. During my research for this article, an off-the-wall idea was put to me: a multicore MIPS64 processor integrated with an Imagination Technologies GPU, targeted at the HPC market. Both architectures are supported by OpenCL, so perhaps it's not a daft idea!
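As a purely illustrative aside (the code below is not from the article): part of what makes OpenCL attractive in this context is that the same host program can discover and drive whatever accelerators a system exposes, whether they are GPUs, DSPs or many-core CPUs. A minimal enumeration sketch in C, assuming an OpenCL implementation and headers are installed:

/* Illustrative sketch only: enumerate whatever OpenCL platforms and
   devices (GPUs, DSPs, multicore CPUs) the installed drivers expose. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, NULL);

        cl_device_id devices[16];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           16, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            printf("%s: %s\n", pname, dname);  /* one line per device */
        }
    }
    return 0;
}

On a typical Linux system this builds with something like 'cc devices.c -lOpenCL'; which devices it lists depends entirely on the OpenCL drivers installed, which is exactly the kind of portability that makes such mix-and-match architectures plausible.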




Another, different architecture being used in HPC is TI's Keystone 2, which combines ARM and DSP cores in a package well suited to the emerging need to handle the data deluge generated by the 'internet of things' and other big data sources.


HPC in the cloud
My recent article 'HPC in the cloud – is it real?' (Scientific Computing World, Oct/Nov 2013, page 38) identified many offerings for HPC in the cloud, and a great deal of interest from users, but found that few companies have yet committed the majority of their HPC workload to the cloud. I think that this will change in 2014, as many of the hurdles to exploiting the cloud for HPC applications are overcome.

Although the use of cloud-based HPC facilities for production runs is still at an early stage, a significant number of companies are exploring how to exploit HPC in the cloud in order to meet peak demand, or to avoid complicated and time-consuming HPC procurements. But HPC in the cloud is not the best answer for all users. There are still privacy, trust, security, regulatory, and intellectual property issues that must be addressed, and a cloud-based solution will not tick these boxes for all organisations.

As more HPC-tuned cloud facilities start to offer a diverse range of technologies and business models, more users will be able to take advantage of the flexibility of using the cloud for their HPC workloads. This will not only move many users from in-house facilities to the cloud, but will bring new users to HPC – users who have neither the skills nor the interest to procure and maintain complex, accelerated clusters, but who can benefit from running their applications on these types of systems.


Other HPC issues
There is lots of hype surrounding the HPC industry, related to both products and trends. Perhaps 2014 will be the year when we stop believing some of the hype and take a more realistic perspective. One example is the claim that the industry will deliver a programmable, affordable Exascale system that operates with a reasonable power consumption by the end of the decade. Although progress has been made, and there are now a handful of systems whose peak performance exceeds 10 Petaflop/s, the current power consumption is still a factor of 20 too high, and product roadmaps do not project that this will be solved by the end of the decade.
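To put rough numbers on that power gap (using commonly quoted public figures purely as an illustration; they are not taken from this article): an Exascale system must sustain 10^18 flop/s, so staying within the widely cited 20 MW power envelope would require around 50 Gigaflop/s per Watt. By contrast, the system at the top of the November 2013 TOP500 list delivers roughly 33.9 Petaflop/s from about 17.8 MW, a little under 2 Gigaflop/s per Watt, so the required improvement in energy efficiency is indeed a factor of 20 or more.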


At the other end of the scale, very good HPC systems can be built using low-cost components (last year's processors, for example) instead of always going for the latest, greatest, most complex and expensive devices.

The consolidation of HPC and big data technologies will continue during 2014. Many HPC vendors have invested in high-end storage capabilities, and the challenges of big data will bring new opportunities for the HPC industry beyond traditional HPC market segments. But big data isn't just about storing the data; it's about processing it as well. Many of today's big data solutions were developed outside traditional HPC market segments, yet use a typical HPC approach of batch-processing large lumps of data. An emerging approach is to analyse streaming data in flight, where a response is required within milliseconds, or even microseconds, or the moment is gone. Some of the large internet companies, such as Google, PayPal, and Twitter, are exploring these techniques in order to analyse their customers' behaviour.
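To make the contrast between batch processing and in-flight analysis concrete, here is a minimal sketch in C (illustrative only, and not based on any of the companies mentioned): each value is folded into a fixed-size sliding window as it arrives, so an out-of-range reading can trigger a response immediately rather than after the next batch run. The window size and alert threshold are invented for the example.

/* Illustrative sketch only: analyse a data stream 'in flight' with a
   fixed-size sliding window, instead of batch-processing stored data. */
#include <stdio.h>
#include <math.h>

#define WINDOW 64

int main(void)
{
    double window[WINDOW] = {0};
    double sum = 0.0;
    long   count = 0;
    double value;

    /* Read one value at a time from stdin, as if from a live feed. */
    while (scanf("%lf", &value) == 1) {
        long slot = count % WINDOW;
        sum += value - window[slot];    /* drop the oldest sample      */
        window[slot] = value;           /* and keep the newest         */
        count++;

        long n = count < WINDOW ? count : WINDOW;
        double mean = sum / n;

        /* React immediately if the new value strays far from the
           recent average: no batch job, no round trip to storage. */
        if (count >= WINDOW && fabs(value - mean) > 3.0 * fabs(mean))
            printf("alert at sample %ld: %.3f (window mean %.3f)\n",
                   count, value, mean);
    }
    return 0;
}

Real deployments use distributed stream-processing frameworks rather than a single loop over standard input, but the principle of acting on each event as it arrives, within milliseconds, is the same.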


The rapid evolution of the internet of things will bring many new opportunities to exploit HPC tools and techniques in embedded or distributed environments.


With more than 30 years’ experience in the IT industry, initially writing compilers and development tools for HPC platforms, John Barr is an independent HPC industry analyst specialising in technology transitions.





