HIGH PERFORMANCE COMPUTING

The true cost of AI innovation


Tate Cantrell, CTO, Verne Global, comments on the cost of the AI computing revolution


We should all be thankful that the progress of AI is moving at lightning speed. No longer strictly the province of the research lab, it is expanding into everyday life, from neural networks that predict fraudulent credit card transactions to AI-powered Google Maps that can ascertain the speed of traffic using anonymised location data from smartphones. But while everyone is looking to AI’s bright future, this rapid growth has the potential to do more harm than good.


Carbon cost of AI

AI is data-driven and heavily dependent on computing power. Depending on the complexity of the machine learning or deep learning model at hand, AI can involve a staggering amount of data, requiring massive computational resources. Given the sizable energy requirements of modern tensor processing hardware, this results in enormously high power consumption. Since 2012, the amount of computing power used for deep learning research has been doubling every 3.4 months, according to OpenAI researchers Dario Amodei and Danny Hernandez. This equates to an estimated 300,000-fold increase from 2012 to 2018, far outpacing Moore’s Law, which states that the overall processing power for computers will double every two years. And as the world’s demand for such AI technology continues to grow, so does the AI industry’s energy consumption. In an environmentally hostile chain reaction, rapidly increasing computational needs will unavoidably escalate carbon costs.
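As a quick sanity check on those figures, the relationship between the quoted 300,000-fold increase and the 3.4-month doubling time can be reproduced with a few lines of arithmetic. The snippet below is an illustrative sketch rather than OpenAI’s own methodology; the 24-month Moore’s Law doubling period is simply the comparison point cited above.

```python
import math

# Figures quoted above (OpenAI "AI and Compute" analysis).
doubling_time_months = 3.4         # observed doubling time for training compute
reported_growth = 300_000          # ~300,000-fold increase, 2012-2018

# Number of doublings implied by the reported growth, and how long they take.
doublings = math.log2(reported_growth)             # ~18.2 doublings
elapsed_months = doublings * doubling_time_months  # ~62 months (~5 years)

# Moore's Law doubles every ~24 months; over the same window that yields
# only a single-digit multiple, which is why the growth "far outpaces" it.
moore_growth = 2 ** (elapsed_months / 24)          # ~6x

print(f"~{doublings:.1f} doublings over ~{elapsed_months / 12:.1f} years")
print(f"Moore's Law over the same window: ~{moore_growth:.0f}x vs ~{reported_growth:,}x")
```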


Environmentally accountable AI

By its nature, deep learning is extremely compute-intensive. Deep learning is based on neural networks comprised of multiple layers, with parameters that can number in the billions. The greater the network depth, the greater the computational complexity, which requires high-performance computational power and longer training times. Canadian researchers Victor Schmidt et al. report that state-of-the-art neural architectures are frequently trained on multiple GPUs for weeks, or even months, to beat existing achievements.

At present, the vast majority of AI research is focused on achieving the highest levels of accuracy, without much concern paid to computational or energy efficiency. In fact, competition for accuracy in the AI community is robust, with numerous leaderboards tracking which AI system is performing a given AI task the best. Regardless of whether the leaderboard is tracking AI programs for image recognition or language comprehension, accuracy is by far the most important metric of success.

But as the world’s attention has shifted to climate change, the field of AI is beginning to take note of its carbon cost. Research done at the Allen Institute for AI by Roy Schwartz et al. raises the question of whether efficiency, alongside accuracy, should become an important factor in AI research, and suggests that AI scientists ought to deliberate whether the massive computational power needed for expensive processing of models, colossal amounts of training data, or huge numbers of experiments is justified by the degree of improvement in accuracy.

Research by the University of Massachusetts (Strubell et al., 2019) demonstrates the unsustainable costs of AI. It analysed the computational requirements of a neural architecture search for machine translation and language modelling. The model ran for a total of 979 million training steps, and took 10 hours to train for 300,000 steps on one TPUv2 core, equating to 274,120 hours on eight P100 GPUs. The estimated carbon cost of training the model was 626,155 lbs of carbon dioxide emissions, which is
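Carbon estimates of this kind generally rest on a simple relation: emissions are roughly the electricity consumed (in kWh) multiplied by the carbon intensity of the grid supplying it, with datacentre overheads folded into the energy term. The short sketch below is illustrative only; the 0.954 lbs CO2e per kWh emissions factor is an assumed US-average value, not a figure taken from the article, so the implied energy is a rough back-calculation rather than the study’s own accounting.

```python
# Rough back-calculation from the carbon figure quoted above.
# The grid emissions factor is an assumed US-average value; it is not taken
# from the article, so treat the result as indicative only.

co2_lbs = 626_155             # estimated emissions quoted in the article (lbs CO2e)
lbs_co2e_per_kwh = 0.954      # assumed average grid carbon intensity

implied_energy_kwh = co2_lbs / lbs_co2e_per_kwh
print(f"Implied energy consumption: ~{implied_energy_kwh:,.0f} kWh")
# -> roughly 656,000 kWh of electricity for a single architecture search
```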



