data-intensive computing


Sanjay Bhal, end-equipment manager focused on HPC and cloud at Texas Instruments, and Arnon Friedmann, business manager for DSP at Texas Instruments


It is apparent that the evolution of a generally accepted architecture such as x86 will not reach the power efficiency required to power an exascale computer. Architecture approaches such as GPUs have been tried, but so far they have not shown a path to the necessary scale of power efficiency. Current architectures could be extended to exascale, but such a machine would draw around 100 megawatts of power and would therefore not be cost-effective. Investment in the system architecture is required to enable scalability at a lower power cost; this includes research into new types of memory, interconnect, and I/O technology. In terms of raising GFlops/Watt by orders of magnitude, it does seem that we are on the right track with new acceleration ideas.
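To put the efficiency gap in perspective, the following back-of-envelope calculation (an editorial illustration only; the 10 GFlops/Watt figure is an assumed, roughly contemporary sustained efficiency, not a number quoted by the contributors) shows why an exaflop machine built on today's technology lands at around 100 megawatts:

/* Illustrative arithmetic: power drawn by a sustained 1 exaflop/s machine
 * at an assumed efficiency of 10 GFlops/Watt. */
#include <stdio.h>

int main(void)
{
    const double target_flops    = 1e18;   /* one exaflop per second       */
    const double gflops_per_watt = 10.0;   /* assumed sustained efficiency */

    double watts = target_flops / (gflops_per_watt * 1e9);
    printf("Power at %.0f GFlops/W: %.0f MW\n", gflops_per_watt, watts / 1e6);
    /* Prints 100 MW; roughly a tenfold efficiency gain would be needed to
     * bring the figure into the tens of megawatts. */
    return 0;
}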


This leads directly into the next challenge: data access. While processors are gaining higher-order multicores and larger numbers of compute elements, feeding data to these computing behemoths is becoming increasingly challenging. New memory technologies such as HMC and WideIO may help mitigate the problem, but as they are new it is not yet clear whether they will provide the answer. As we continue scaling away from the processing cores, the interconnect is the next challenge. Again, as we reach unprecedented numbers of cores operating together, we are going to need orders-of-magnitude improvements in interconnects, and it remains to be seen whether we know the way forward. Simply scaling InfiniBand or GigE may not be sufficient.

EXASCALE WILL PRESENT CHALLENGES THAT WE CANNOT EVEN PREDICT

The final aspect is the software. Current MPI-based programming methods may not provide the efficiencies needed for the coming wave of exascale and beyond computing machines. Combine this with the new programming models required for the accelerators being built to achieve the necessary GFlops/Watt, and the view of supercomputing software for exascale becomes quite murky.
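As a concrete sketch of the hybrid model being described here, MPI between nodes combined with a separate on-node programming model for the accelerator, consider the following minimal example (it uses OpenMP target offload purely as one possible accelerator model; the contributors do not name a specific one):

/* Minimal hybrid sketch: MPI handles communication between nodes, while an
 * on-node model (OpenMP target offload here, as one example) drives the
 * accelerator. Compile with an MPI wrapper, e.g. mpicc -fopenmp. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double x[N];

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < N; i++)
        x[i] = rank + 1.0;

    /* Offload the node-local reduction to an attached accelerator, if any;
     * without a device the loop simply runs on the host. */
    double local_sum = 0.0;
    #pragma omp target teams distribute parallel for \
        map(to: x[0:N]) map(tofrom: local_sum) reduction(+: local_sum)
    for (int i = 0; i < N; i++)
        local_sum += x[i];

    /* MPI still stitches the per-node results together. */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %e\n", global_sum);

    MPI_Finalize();
    return 0;
}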


Achieving exascale will also present new challenges that we cannot even predict at this point. As we get closer, those challenges should crystallise and solutions will then be found. The most likely timeframe is probably 2020-2023. There is a strong push to get there by 2020, but this would necessitate more investment than we currently see in the community. It also depends on the efficiency that people are willing to accept for exascale: one could build a very inefficient machine in a shorter timeframe if there were the willingness to supply vast amounts of power to such a system.


Dr Thomas Schulthess, professor of Computational Physics and director at the Swiss National Supercomputing Center (CSCS)


The industry at large is in a state of confusion, because exascale as a goal is not well defined. Simply extrapolating from sustained tera- to peta- and on to exaflops in terms of the High-Performance Linpack (HPL) benchmark produces machines that may not be useful in applications. Since these machines will be expensive to produce and operate, they will need a real purpose, which HPL no longer represents well.

We have two fundamental problems: the frequency of individual processor cores no longer increases; and moving data over macroscopic distances costs time (latency) and energy. The former translates into an explosion of concurrency, and the latter requires algorithms that minimise data movement. Critically, we don't have good abstractions or adequate programming models for the type of architectures required to address these challenges. It will thus be difficult to develop application codes that use exascale systems effectively.
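A small, node-level illustration of that data-movement argument (editorial, not taken from any exascale code) is a cache-blocked matrix multiply: re-using a small tile of each operand many times before moving on reduces traffic across the memory hierarchy, the same reasoning that communication-avoiding algorithms apply across a whole machine. The matrix size and tile edge below are arbitrary:

/* Cache-blocked matrix multiply, C += A * B, for N x N matrices.
 * Working on NB x NB tiles keeps the operands resident in cache, so each
 * element fetched from main memory is reused many times. */
#include <stdlib.h>

#define N  512
#define NB 64   /* tile edge; chosen so three tiles fit comfortably in cache */

static void matmul_blocked(const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < N; ii += NB)
        for (int kk = 0; kk < N; kk += NB)
            for (int jj = 0; jj < N; jj += NB)
                /* update one NB x NB tile of C from tiles of A and B */
                for (int i = ii; i < ii + NB; i++)
                    for (int k = kk; k < kk + NB; k++) {
                        const double aik = A[i * N + k];
                        for (int j = jj; j < jj + NB; j++)
                            C[i * N + j] += aik * B[k * N + j];
                    }
}

int main(void)
{
    double *A = calloc(N * N, sizeof *A);
    double *B = calloc(N * N, sizeof *B);
    double *C = calloc(N * N, sizeof *C);
    if (A && B && C)
        matmul_blocked(A, B, C);
    free(A); free(B); free(C);
    return 0;
}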


These challenges can be overcome if the HPC community approaches exascale in new ways. Irrespective of architectural direction, a massive investment in software and application code development will be required.




Only if we leverage investments in other areas that face the same challenges, such as mobile devices and the gaming industry, will we be able to sustain the path to exascale. Technologies from the gaming industry, in particular, will attract recent graduates to the HPC developer community – our field will need many new developers.

OPEN POWER CONSORTIUM IS SOMETHING TO WATCH

The recently announced Open Power Consortium is something to watch. It will bring fresh competition to the market for latency-optimised cores and open new avenues in hybrid multi-core design. This architecture could rapidly develop beyond the accelerator model of today's CPU-GPU systems, providing key innovation toward exascale for several application domains.

We will see the first exascale machines by the end of this decade, although not traditional supercomputers designed for HPL. Several scientific domains, such as climate or brain research, require machines of this scale – it will happen if there is a real need and a path to a solution.

We should focus on science areas that require 100- to 1000-fold performance improvements over what is available today, and design supercomputers specifically to solve their problems. Problem owners, i.e. the domain science communities, have to take charge, and the HPC industry at large should provide the necessary support. This is how large, expensive-to-operate scientific research infrastructures are built.


Furthermore, if we adopt this approach and the purpose of a particular exascale system is well understood and articulated, fewer concerns will be raised about development and operational costs – questions keep being raised about the power bills of supercomputers, while hardly anybody discusses the power consumption of the Large Hadron Collider.



