

mass storage and all levels in between. With this level of detail you can forecast performance by understanding the scale of the problem. ‘Yet, that is usually the first thing benchmarks discard: they fix the size of the problem!’ said John Gustafson, currently a visiting scientist at A*STAR (the Agency for Science, Technology and Research) in Singapore. Gustafson is an accomplished expert on supercomputer systems and the creator of Gustafson’s law in computer engineering.

According to Gustafson, the original Linpack was a fixed-size benchmark that did not scale. Persuading Jack Dongarra to switch to a ‘weak scaling’ model – for which Gustafson’s law applies instead of Amdahl’s law – helped the TOP500 list endure for 25 years. Since the 1980s, the benchmark’s definition has become more goal-oriented, more amenable to parallel methods and less subject to cheating.
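For reference, the distinction can be stated compactly (a textbook formulation, not a quote from Gustafson): if p is the fraction of the work that can be parallelised and N the number of processors, Amdahl’s law bounds the speedup of a fixed-size problem, while Gustafson’s law describes the scaled speedup when the problem grows with the machine:

\[ S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + p/N}, \qquad S_{\mathrm{Gustafson}}(N) = (1 - p) + pN. \]

Under the first, the speedup saturates at 1/(1 − p) however many processors are added; under the second it keeps growing almost linearly with N, which is why a benchmark that scales the problem with the machine can stay meaningful as systems grow.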


From floats to a posit-based approach

This year a peer-reviewed research paper titled Beating Floating Point at its Own Game: Posit Arithmetic was published in the journal Supercomputing Frontiers and Innovations. The paper’s authors believe this data type has the potential to revolutionise the supercomputing community’s approach and attitudes to performance, both of applications and of the systems they are run on.

‘Benchmarks should always be goal-based, but usually they are activity-based,’ said Gustafson. ‘Which is where you get silly metrics like “floating point operations per second” that do not correlate well with getting a useful answer in the smallest amount of time.’

In the paper, Gustafson and his co-author, Isaac Yonemoto from the Interplanetary Robot and Electric Brain Company in California, US, conclude that the ‘posit’ data type can act as a direct drop-in replacement for IEEE Standard 754 floats, yet with higher accuracy, a larger dynamic range and better closure – without any need to reduce transistor size and cost. In short, posits could soon prove floats obsolete and further steer the community away from one-dimensional benchmarks altogether.
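In outline, and following the definition given in the Gustafson–Yonemoto paper, a posit packs a sign bit, a run-length-encoded ‘regime’, an exponent of es bits and a fraction f into a fixed-width word, and decodes as

\[ x = (-1)^{s} \times \mathrm{useed}^{\,k} \times 2^{e} \times (1 + f), \qquad \mathrm{useed} = 2^{2^{es}}, \]

where k is the signed value encoded by the regime and e the exponent. Because the regime length adapts to the magnitude of the number, precision is concentrated around 1 and tapers off towards the extremes, which is where the claimed accuracy and dynamic-range advantages over the fixed fields of IEEE 754 come from.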


In another experiment comparing posit-based arithmetic with floats, the posit approach again came out on top. Gustafson ran random data through a standard Fast Fourier Transform (FFT) algorithm, then inverse transformed it and compared the result with the original signal.

‘For a 1024-point FFT and 12-bit analogue-to-digital converter data I was able to get back the original signal, exactly, every bit, using only a 21-bit posit representation,’ said Gustafson. ‘That’s something even 32-bit floats cannot do. I can preserve 100 per cent of the measurement information with fewer bits.’
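Gustafson’s posit code is not reproduced here, but the shape of the experiment is easy to sketch. The example below is a rough illustration rather than his implementation: it uses NumPy and ordinary IEEE floats, pushing simulated 12-bit ADC samples through a 1024-point FFT, rounding the spectrum to 32-bit precision to mimic float storage, inverse transforming, and checking whether every sample comes back exactly. Reproducing the reported result would mean swapping the 32-bit rounding step for a 21-bit posit library; the seed and variable names are illustrative.

import numpy as np

# Sketch of the round-trip test described above: random 12-bit ADC
# samples -> 1024-point FFT -> inverse FFT -> compare with the original.
# Posit arithmetic is not available in NumPy, so 32-bit float storage is
# emulated by rounding the spectrum to complex64; a posit library would
# be substituted at that step.
rng = np.random.default_rng(seed=0)
adc = rng.integers(0, 4096, size=1024)        # simulated 12-bit ADC data
original = adc.astype(np.float64)

spectrum = np.fft.fft(original)               # forward transform
spectrum32 = spectrum.astype(np.complex64)    # keep only 32-bit float precision
recovered = np.fft.ifft(spectrum32.astype(np.complex128)).real

print("recovered every sample exactly:", np.array_equal(recovered, original))
print("largest absolute error:", np.max(np.abs(recovered - original)))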


Maverick thinkers

In April of this year, Paul Messina, director of the US Exascale Computing Project (ECP), presented a wide-ranging review of ECP’s evolving plans for the delivery of the first exascale machine – which has now moved its launch to 2021 – at the HPC User Forum in Santa Fe, US.

Messina stated that from the very start the exascale project has steered clear of flops and Linpack as the best measure of success. This trend has only grown stronger, with attention focused on defining success as performance on useful applications and the ability to tackle problems that are intractable on today’s petaflops machines.

For well over 25 years, leading voices in the supercomputing community have urged users to measure systems by their capability to solve real problems. A few of these individuals include Messina, Gustafson and Horst Simon.


I believe that, just as with the first petascale systems, the first exascale systems will run applications and problem sizes that would normally not run on smaller systems


Back in 2014, Simon said in an interview that calling a system exa-anything was a bad idea, because it becomes a bad brand, associated with buying big machines for a few national labs; therefore, if exaflops are not achieved, this will likely be seen as a failure, no matter how much great science can be done on the systems being developed.

‘My views on naming anything “exa” are still the same,’ said Simon. ‘However, what has changed is that we now have a well-defined exascale computing project in the US. This project includes a significant number of exascale applications – in the order of 24.’ These applications range from cosmology to genomics and materials science.


Europe is also working towards a supercomputing ecosystem effort known as EuroHPC. In July 2017, on a European Commission blog, an established voice in this field said the European supercomputing community faces a weak spot in technologies such as the development and commercialisation of domestic computer or CMOS chips and processor technologies.


This view came from Wolfgang Marquardt, scientific director and chairman of the board of directors of Forschungszentrum Jülich (Jülich Research Centre) in Germany, home to a supercomputing centre and one of Europe’s largest research centres.

‘A single nation cannot make significant progress in this endeavour, and we need to work together to advance in this field,’ said Marquardt.


For Europe’s EuroHPC, or any exascale effort, the success factors that should take precedence have shifted from traditional metrics, according to Marquardt.

‘In my opinion, there are more appropriate ways to discuss the power of supercomputers, rather than rigid benchmarking lists: energy-efficiency, scalability and adaptability for a

