HIGH PERFORMANCE COMPUTING


promises, investment and funding slowed to a trickle. The field reemerged in the early 2000s with the advent of DNNs and their ability to perform commercially viable image recognition. Further, these DNNs could be trained in reasonable amounts of time, due to the significant increase in computational power that modern CPUs and GPUs could provide. Currently, people refer collectively to all these technologies as AI, but it is important to heed the warnings from the past and not over-promise. In fact, the use of ‘AI’ is in itself misleading, as is the use of the word ‘learning’ for ANNs, although both are commonly used. It has been known since the 1980s that ANNs do not ‘learn’ or ‘think’, but rather fit multidimensional surfaces that are then used for inferencing.


New analytic techniques

Two highly impactful projects, at CERN and the ALCF, demonstrate the power of AI as a new analytic technique. An award-winning effort at CERN has demonstrated the adoption of AI-based models and their ability to act as orders-of-magnitude-faster replacements for computationally expensive tasks in simulation. More specifically, the team showed that the energy showers detected by calorimeters can be interpreted as a 3D image. Adapting existing deep-learning techniques, they then trained a GAN to act as a replacement for the expensive Monte Carlo methods used in high-energy physics (HEP) simulations. During validation, they observed a ‘remarkable’ agreement between the images from the GAN generator and the Monte Carlo images, along with orders-of-magnitude-faster runtime. The CERN team points out that the impact for HEP scientists could be substantial as, ‘currently, most of the LHC’s worldwide distributed CPU budget – in the range of half a million CPU-years equivalent – is dedicated to simulation’. Dr Federico Carminati, CERN’s project coordinator, explains: ‘This work demonstrates the potential of machine learning models in physics-based simulations’. Previously, AI models were excluded because it was not possible for the data scientist to guarantee that the data-derived AI model would not introduce non-physical artifacts. As CERN’s Dr Sofia Vallecorsa points out, ‘the data distributions coming from the trained machine learning model are effectively indistinguishable from real and simulated data’, so why not use the faster computational version?
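To make the approach concrete, the following is a minimal sketch of the surrogate idea described above: a generator learns to emit calorimeter-shower ‘images’ that a discriminator cannot tell apart from simulated ones. PyTorch, the network shapes, the hyperparameters and the random stand-in data are all illustrative assumptions; this is not CERN’s actual 3DGAN code.

```python
# Hedged sketch of a GAN surrogate for calorimeter showers.
# Shapes, sizes and the random "training data" are placeholders.
import torch
import torch.nn as nn

LATENT, SHOWER = 64, (1, 8, 8, 8)   # assumed latent size and 3D "image" shape

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, 8 * 8 * 8), nn.ReLU())  # energy deposits >= 0
    def forward(self, z):
        return self.net(z).view(-1, *SHOWER)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(8 * 8 * 8, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):                 # toy loop; real training runs far longer
    real = torch.rand(32, *SHOWER)      # stand-in for Monte Carlo showers
    fake = G(torch.randn(32, LATENT))
    # Discriminator: real showers -> 1, generated showers -> 0
    loss_d = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Validation in this setting means comparing physics distributions (shower shapes, total deposited energy, and so on) between generated and Monte Carlo samples; that comparison is the ‘effectively indistinguishable’ check Dr Vallecorsa describes.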




[Figure: DOE objective – drive integration of simulation, data analytics and machine learning. The diagram shows large-scale numerical simulation on traditional HPC systems converging with scalable data analytics and deep learning on supercomputers and exascale systems (CORAL).]




The CERN work reflects a huge change in thinking in the physics modelling and simulation community – a change in mindset that can help others in a wide range of fields realise similar orders-of-magnitude speedups for computationally expensive simulation tasks.


The work is part of a CERN openlab project in collaboration with Intel Corporation, which partially funded the endeavour through the Intel Parallel Computing Center (IPCC) programme.

At Argonne National Laboratory, researchers are exploring ways to improve collaboration and eliminate barriers to using next-generation exascale systems like Aurora. The Argonne effort is just one of a number of projects aimed at helping scientists exploit the convergence of big data, machine learning and extreme-scale HPC. In particular, a team of researchers from the ALCF and Argonne’s Data Science and Learning Division is developing a service that will make it easier for users to access, share, analyse and reuse both large-scale datasets and the associated data analysis and learning methods. The service leverages well-known tools, including Globus for research data management and Argonne’s Petrel storage system. ‘Our motivation,’ explains Ian Foster, Argonne Data Science and Learning Division director and Distinguished Fellow, ‘is to create increasingly rich data services, so people don’t just come to the ALCF for simulation, but for simulation and data-centric activities.’ The ALCF is a US Department of Energy Office of Science user facility.
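For a sense of what ‘leveraging Globus’ looks like from a user’s side, here is a minimal sketch using the globus_sdk Python package to move a dataset between two endpoints. The client ID, endpoint UUIDs and paths are placeholders, and the snippet illustrates Globus transfers generally, not the actual interface of the Argonne service.

```python
# Hedged sketch: moving a dataset with the globus_sdk Python package.
# All IDs and paths below are hypothetical placeholders.
import globus_sdk

CLIENT_ID = "my-native-app-client-id"
SRC_ENDPOINT = "source-endpoint-uuid"        # e.g. a Petrel-style data store
DST_ENDPOINT = "destination-endpoint-uuid"   # e.g. an analysis cluster

# Native-app login flow: the user authenticates in a browser, pastes a code back
client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
client.oauth2_start_flow()
print("Log in at:", client.oauth2_get_authorize_url())
tokens = client.oauth2_exchange_code_for_tokens(input("Auth code: "))
access = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit an asynchronous, checksum-verified dataset transfer
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(access))
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                label="shared dataset", sync_level="checksum")
tdata.add_item("/projects/dataset/", "/scratch/dataset/", recursive=True)
task = tc.submit_transfer(tdata)
print("Task ID:", task["task_id"])   # the transfer proceeds server-side
```

Because the transfer runs asynchronously on the Globus service, the researcher can submit it and return to analysis, which is precisely the ‘rich data services’ convenience Foster describes.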


Indirect impacts – a huge change in mindset

The prevalence of AI-optimised hardware – in particular the reduced-precision convolutions used in image recognition, which have been added to CPUs and GPUs, plus expanding support for FPGAs – has given the HPC community new hardware capabilities to exploit for increased library and application performance, including new optimised versions of the workhorse BLAS linear algebra libraries.
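To see why convolution hardware and BLAS improvements are linked, note that deep-learning frameworks commonly lower a convolution to one large matrix multiply (the im2col technique), so convolution throughput is largely GEMM throughput. The sketch below, in NumPy with illustrative shapes, shows that lowering; it is a teaching example under assumed dimensions, not production code.

```python
# Hedged sketch: lowering a 2D convolution to a single GEMM via im2col.
import numpy as np

rng = np.random.default_rng(0)
H = W = 8; K = 3                          # input size, kernel size (assumed)
C_in, C_out = 4, 16                       # channel counts (assumed)
x = rng.standard_normal((C_in, H, W)).astype(np.float32)
w = rng.standard_normal((C_out, C_in, K, K)).astype(np.float32)

# im2col: gather every KxK input patch into one column of a matrix
out_h, out_w = H - K + 1, W - K + 1
cols = np.empty((C_in * K * K, out_h * out_w), dtype=np.float32)
for i in range(out_h):
    for j in range(out_w):
        cols[:, i * out_w + j] = x[:, i:i+K, j:j+K].ravel()

# The whole convolution is now a single matrix multiply (a BLAS GEMM)
y = w.reshape(C_out, -1) @ cols           # (C_out, out_h * out_w)
y = y.reshape(C_out, out_h, out_w)        # back to feature-map layout
```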


Reduced precision maths libraries

Of extreme interest to the HPC community is the work on optimised batched and reduced-precision BLAS libraries. Over the past few decades, vendors and the HPC community have put a tremendous amount of work into optimised libraries that implement the BLAS specification. The reason is, as Jack Dongarra noted
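The following sketch illustrates what ‘batched’ and ‘reduced precision’ mean in practice, with NumPy standing in for an optimised BLAS; vendor libraries such as cuBLAS and oneMKL expose analogous batched and low-precision GEMM routines. The shapes and batch size are illustrative assumptions.

```python
# Hedged sketch: batched GEMM and reduced-precision GEMM, NumPy standing in
# for an optimised BLAS implementation.
import numpy as np

rng = np.random.default_rng(0)

# Batched GEMM: many small matrix products issued as one call, letting the
# library amortise dispatch overhead across the whole batch.
A = rng.standard_normal((1000, 16, 16))
B = rng.standard_normal((1000, 16, 16))
C = np.matmul(A, B)        # one batched call instead of 1000 tiny ones

# Reduced precision: the same products in float16 versus float64.
A16, B16 = A.astype(np.float16), B.astype(np.float16)
C16 = np.matmul(A16, B16)
err = np.abs(C16.astype(np.float64) - C).max()
print(f"max abs error at float16: {err:.3e}")   # more error, far less memory traffic
```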





