IDC analysis: HPC 2012
Industry developments in 2012


An IDC report on the use of high-end HPC


There is a growing contingent of nations around the world that are investing considerable resources to install supercomputers with the potential to reshape how science and engineering are accomplished. In many cases, the cost of admission for each supercomputer now exceeds $100 million and could soon reach $1 billion. Some countries are laying the groundwork to build 'fleets' of these truly large-scale computers to create a sustainable long-term competitive advantage. In a growing number of cases, it is becoming a requirement to show or 'prove' the return on investment (ROI) of the supercomputer and/or the entire centre. With the current global economic pressures, nations need to better understand the ROI of making these large investments and decide if even larger investments should be made. It's not an easy path, as there are sizable challenges in implementing these truly super supercomputers: along with the cost of the computers, there are substantial costs for power, cooling and storage; in most cases, new data centres have to be built and often entire new research organisations are created around the new or expanded mission; and the application software often needs to be rewritten or even fundamentally redesigned in order to perform well on these systems. But the most important ingredient is still the research team. The pool of talent that can take advantage of these systems is very limited and competition is heating up to attract them.


Lay of the land

Over the past five years, there has been a global race to build supercomputers with peak and actual (sustained) performance of one Pflop (a petaflop is 10^15, or a thousand million million, operations per second). Next-generation supercomputers are targeting 1,000 times more performance, called exascale computing (10^18 operations per second), starting in 2018. Currently, systems in the 10–20 Pflops range are being installed and 100 Pflops systems are expected by 2015. In the United States, the race for petascale supercomputers started when DARPA launched a multiphase runoff programme to design innovative, sustained petascale-class supercomputers that also addressed a growing concern around user productivity.
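As a quick sanity check on these tiers, the arithmetic can be sketched in a few lines of Python. The script is merely illustrative; the system sizes are the ones cited above:

```python
# Illustrative arithmetic only, relating the performance tiers named above.
PFLOPS = 10**15        # one petaflop: 10^15 operations per second
EXAFLOPS = 10**18      # exascale: 10^18 operations per second

print(EXAFLOPS // PFLOPS)         # 1000 -> exascale is 1,000x petascale
for pflops in (1, 20, 100):       # 1 Pflop, today's ~20 Pflops, 100 Pflops due 2015
    ops_per_day = pflops * PFLOPS * 86_400
    print(f"{pflops:>3} Pflops sustained ~ {ops_per_day:.1e} operations per day")
```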
The great unanswered (and poorly addressed) question in the high-end HPC market is: how can a broad set of users actually make use of a new generation of computers that are so highly parallel? Among the complexities are that processor clock speeds are, in general, no longer increasing, and that the memory bandwidth in and out of each processor core (and the per-core memory size) is declining at an alarming rate, a decline projected to continue for the foreseeable future. In addition, many user applications were designed and/or written more than 20 years ago. These codes have been enhanced and improved over the years, but in most cases they need to be fundamentally redesigned to work at scale on the newest supercomputers. The current situation is opening the door for new applications that can take advantage of the latest computers. Many existing applications face a more difficult transformation because of the lack of talent, the steep costs of redesign, the need to bring all of the legacy features forward, and an unclear business model. A debate is growing about whether a more open source application model may work.
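The bandwidth problem can be made concrete with a back-of-envelope, roofline-style estimate. The sketch below is not from the report, and the per-core figures are hypothetical, chosen only to show how a shrinking memory ceiling caps performance for bandwidth-bound codes:

```python
# Back-of-envelope roofline-style estimate (illustrative figures, not from
# the report): attainable performance is the lesser of the compute ceiling
# and the memory ceiling (bandwidth x the kernel's flops-per-byte).

def attainable_gflops(peak_gflops: float, bandwidth_gb_s: float,
                      flops_per_byte: float) -> float:
    return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

PEAK = 16.0            # hypothetical per-core peak, Gflops
INTENSITY = 0.5        # flops per byte, typical of bandwidth-bound kernels

for bw in (8.0, 4.0, 2.0):   # per-core bandwidth shrinking over time, GB/s
    gf = attainable_gflops(PEAK, bw, INTENSITY)
    print(f"bandwidth {bw:4.1f} GB/s -> attainable {gf:4.1f} Gflops")
# Each halving of per-core bandwidth halves attainable performance;
# the 16 Gflops compute peak is never reached.
```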


Plans for petascale

The competition among nations for economic, scientific and military leadership is increasingly being driven by computational capabilities. Very large-scale HPC systems will tend to push nations that employ them ahead of those that don't. A set of petascale supercomputers, along with the researchers to use them, can reshape how a nation advances itself in both scientific innovation and industrial economic growth. This is why countries and regions such as the United States, China, the European Union, Japan and Russia are working to install multiple petascale computers, with an eye on exascale in the future.

Petascale initiatives will benefit a substantial number of users. Although the initial crop of petascale systems and their antecedents are still only a few in number, many thousands of users will have access to these systems (e.g. the U.S. Department of Energy's INCITE programme and analogous undertakings). Industrial users will also gain access to these systems for their most advanced research.

Petascale and exascale initiatives will intensify efforts to increase scalable application performance. Sites that are advancing in phases toward petascale/exascale systems, often in direct or quasi competition with other sites for funding, will be highly motivated to demonstrate impressive performance gains that may also result in scientific breakthroughs. To achieve these gains, researchers will need to tame large-scale, many-core parallelism. These efforts will then benefit the larger HPC community and, over time, will likely benefit the computer industry as a whole.

Some important applications will likely be left behind. Petascale initiatives will not benefit applications that do not scale well on today's HPC systems unless those applications are fundamentally rewritten. Rewriting is an especially daunting and costly proposition for some important engineering codes that require a decade or more to certify and incrementally enhance.
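Amdahl's law illustrates why poor scaling is so punishing at these system sizes. The sketch below is my illustration, not the report's analysis, and the serial fractions are made up:

```python
# A minimal Amdahl's law sketch (illustrative serial fractions):
# speedup(n) = 1 / (s + (1 - s) / n) for serial fraction s on n cores.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.01, 0.001):               # 1% and 0.1% of the work stays serial
    for n in (1_000, 100_000, 1_000_000):
        print(f"serial {s:.3f}, {n:>9,} cores -> speedup {amdahl_speedup(s, n):8.1f}")
# With just 1% serial work, a million cores yield a speedup below 100x,
# which is why poorly scaling codes must be fundamentally rewritten.
```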








Some petascale developments and technologies will likely 'trickle down' to the mainstream HPC market, while others may not. Evolutionary improvements, especially in programming languages and file systems, will be most readily accepted by mainstream HPC users, but may be inadequate for exploiting the potential of petascale/exascale systems. Conversely, revolutionary improvements (e.g. new programming languages) that greatly facilitate work on petascale systems may be rejected by some HPC users as requiring too painful a change. The larger issue is whether these developments will bring the government-driven high-end HPC market closer to the HPC mainstream, or push the two segments further apart. This remains to be seen.

Original report by Earl C. Joseph, Chirag Dekate and Steve Conway.

