HPC 2013-14 | Big data


the industry can point to and say that this is what we’re actively doing about it is machine learning,’ remarked Sumit Gupta, GM of the Tesla Accelerated Computing Group at Nvidia. Machine learning is yet another broad term, but essentially it involves teaching the computer not just to identify precise things, such as keywords or images, but to recognise characteristics and determine patterns.
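The contrast between hard-coded rules and learned patterns can be made concrete with a toy example. The sketch below is purely illustrative and is not taken from the article or from any of the systems it describes: it trains a single-layer perceptron on a handful of invented, labelled points, so the decision rule is never written down by the programmer but emerges from the data.

```python
# Minimal, hypothetical sketch: a perceptron learns a decision rule from
# labelled examples instead of having the rule hard-coded. All data and
# parameter choices here are invented for illustration.
import random

# Toy data: each point is (feature_1, feature_2) with a 0/1 label.
examples = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]

weights = [random.uniform(-0.1, 0.1) for _ in range(2)]
bias = 0.0
learning_rate = 0.1

def predict(point):
    activation = sum(w * x for w, x in zip(weights, point)) + bias
    return 1 if activation > 0 else 0

# Repeatedly nudge the weights towards examples the model gets wrong:
# the pattern is learned, never explicitly programmed.
for _ in range(100):
    for point, label in examples:
        error = label - predict(point)
        weights = [w + learning_rate * error * x for w, x in zip(weights, point)]
        bias += learning_rate * error

print([predict(p) for p, _ in examples])  # expected: [0, 0, 1, 1]
```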


Traditionally, techniques are often developed in the HPC community and then filter down into the consumer sphere. The opposite is true, however, for machine learning. ‘The economics of the situation has switched to a point where companies like Google, Facebook and Microsoft are leading these trends and the resulting techniques then reach the HPC community,’ said Gupta. ‘Both sides have the same big data challenges, but the commercial organisations are able to put in the funding to do the work.’


For this reason, academic institutions like Stanford University are collaborating with companies like Google and Nvidia. One such project focused on one of the more popular methods of machine learning: the creation of large-scale neural networks.

In June 2013, the Stanford team, led by Andrew Ng, director of the university’s artificial intelligence laboratory, created the world’s largest artificial neural network, built to model the ‘learning’ behaviour of the human brain. The network is 6.5 times bigger than the previous record-setting network, developed by Google in 2012. Google used approximately 1,000 CPU-based servers, or 16,000 CPU cores, to develop its neural network, whereas the Stanford researchers created an equally large network with only three servers, using Nvidia GPU accelerators to speed up the processing of the big data generated by the network. With 16 Nvidia GPU-accelerated servers, the team then created an 11.2 billion-parameter neural network.


Hardware accelerators, such as GPUs, are becoming a fundamental part of the big data analytics community, solving what Sumit Gupta calls a critical problem in the sector: affordability. ‘Every enterprise would like to make better use of their data, but not many can afford to put in 1,000 servers just to run the analysis,’ he commented. ‘Accelerators enable users to get this same benefit but from a much smaller infrastructure.’
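The reason accelerators change the economics can be seen even in a toy training step. The NumPy sketch below is an invented illustration, nowhere near the scale of the 11.2 billion-parameter system described above; its point is that nearly all of the arithmetic in training a neural network sits in a few dense matrix multiplications, which is exactly the kind of operation GPU-accelerated libraries offload to the accelerator, and why a handful of GPU-equipped servers can stand in for a much larger CPU cluster.

```python
# Illustrative sketch only: one gradient step for a tiny two-layer network.
# The dominant cost is the dense matrix products, the operation that GPU
# accelerators are designed to speed up.
import numpy as np

rng = np.random.default_rng(0)

batch, n_in, n_hidden, n_out = 64, 256, 512, 10
x = rng.standard_normal((batch, n_in))          # a mini-batch of inputs
y = rng.integers(0, n_out, size=batch)          # integer class labels

w1 = rng.standard_normal((n_in, n_hidden)) * 0.01
w2 = rng.standard_normal((n_hidden, n_out)) * 0.01

# Forward pass: two large matrix multiplications.
h = np.maximum(x @ w1, 0.0)                     # hidden layer with ReLU
logits = h @ w2

# Softmax cross-entropy gradient with respect to the logits.
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
grad_logits = p
grad_logits[np.arange(batch), y] -= 1.0
grad_logits /= batch

# Backward pass: again dominated by matrix multiplications.
grad_w2 = h.T @ grad_logits
grad_h = grad_logits @ w2.T
grad_h[h <= 0.0] = 0.0                          # ReLU derivative
grad_w1 = x.T @ grad_h

# Plain gradient-descent update.
lr = 0.1
w1 -= lr * grad_w1
w2 -= lr * grad_w2

# Parameter count grows with the product of layer widths, which is why
# billion-parameter networks need accelerated hardware.
print(w1.size + w2.size)                        # 136,192 parameters here
```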


Bringing HPC to the masses

One final, but significant, point: there was a time when HPC was exclusive to national laboratories and high-end academic institutions. Today, it has become a tool for the commercial sector, and traditional commercial enterprises are using HPC techniques to improve productivity. Terascala’s Peter Piela has observed this trend in a wide range of sectors, from life sciences and oil and gas to product manufacturing.

‘Being able to deliver big data technology in a way that commercial practitioners can take advantage of is a big challenge for the HPC industry,’ said Piela. ‘No longer can there be a team of researchers in lab coats watching over the HPC infrastructure; it has to be delivered in a package that can be readily adopted in a standard enterprise infrastructure.’ This push towards availability, reliability and serviceability will continue to drive advancements in HPC and influence big data trends.





