EFFICIENCY GAINS AND JEVONS PARADOX: THE HIDDEN CATCH
At first glance, improving energy efficiency seems like the golden ticket for managing AI’s growing power needs. Hardware advances such as Nvidia’s Blackwell GPUs, which promise to be 25 times more energy-efficient than their predecessors, are celebrated as breakthroughs. Yet history shows that efficiency alone rarely reduces overall consumption. This is the crux of Jevons Paradox, first articulated by the economist William Stanley Jevons in the 19th century: as efficiency increases, the cost of using a resource falls, which often leads to greater overall consumption.
This paradox has echoed across industries. During the Industrial Revolution, more efficient steam engines reduced the cost of coal use but spurred industrial expansion, increasing total coal consumption. Similarly, fuel-efficient cars enabled longer, cheaper journeys, resulting in more driving and higher aggregate fuel use. In AI, the same dynamic is unfolding. Innovations in hardware and algorithms drastically lower the energy cost per computation, but they also enable the development of larger models, faster inference, and greater adoption, driving total energy consumption still higher.
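To make the rebound logic concrete, the following is a minimal sketch of a constant-elasticity demand model; the 25x efficiency gain echoes the Blackwell figure above, while the elasticity values are illustrative assumptions, not estimates for real AI workloads.

# Jevons rebound under a constant-elasticity demand model.
# All numbers are illustrative assumptions, not measurements.

def total_energy(gain: float, elasticity: float, baseline: float = 100.0) -> float:
    """Total energy use after efficiency improves by a factor of `gain`.

    Energy per computation falls by `gain`, so the effective price of
    computation falls by `gain`; demand rises by gain ** elasticity.
    Total energy = baseline * gain ** elasticity / gain.
    """
    return baseline * gain ** (elasticity - 1.0)

for elasticity in (0.5, 1.0, 1.5):  # inelastic, unit-elastic, elastic demand
    print(f"elasticity {elasticity}: "
          f"total energy {total_energy(25.0, elasticity):.0f} (baseline 100)")

# Prints 20, 100, 500. Whenever demand is elastic (elasticity > 1),
# a 25x efficiency gain *increases* total consumption: the rebound
# more than offsets the per-unit saving, which is Jevons Paradox.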
Even as GPUs and other hardware become more energy-efficient, the construction of hyperscale data centers imposes its own physical and economic constraints. President Trump, in collaboration with OpenAI, SoftBank, and Oracle, recently announced the ambitious Stargate Initiative, a private-sector investment of up to $500 billion in artificial intelligence infrastructure that aims to build 20 data centers across the United States. These hyperscale facilities, which carry massive fixed costs for land, construction, cooling, and power infrastructure, are built with one economic imperative: maximize utilization. As GPUs become more efficient, the cost per computation falls, encouraging companies to pack more computational work into the same infrastructure. As a result, data centers often operate at or near their maximum power capacity. Efficiency gains, rather than reducing total energy consumption, intensify computation within existing infrastructure and amplify demand further: a classic example of the rebound effect described by Jevons Paradox.
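The utilization imperative follows from simple cost arithmetic, sketched below with hypothetical round numbers standing in for real facility costs.

# Why hyperscale economics favor maximum utilization.
# The dollar figures are hypothetical round numbers, not real facility costs.

FIXED_COST_PER_HOUR = 1000.0  # amortized land, construction, cooling, power ($/h)
ENERGY_COST_PER_OP = 1e-5     # variable energy cost per computation ($)

def cost_per_op(ops_per_hour: float) -> float:
    """Average cost per computation at a given level of utilization."""
    return FIXED_COST_PER_HOUR / ops_per_hour + ENERGY_COST_PER_OP

for ops in (1e7, 5e7, 1e8):   # packing more work into the same facility
    print(f"{ops:.0e} ops/h -> ${cost_per_op(ops):.6f} per computation")

# Fixed costs dominate, so average cost falls steeply with utilization;
# the rational response is to run the facility at or near full power.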
For decades, Moore’s Law described exponential improvements in computing, with transistor density doubling every two years. As Satya Nadella has pointed out, a new scaling law governs progress in the AI era: AI performance doubling roughly every six months. This rapid growth reflects not just advances in model capabilities but also a dramatic rise in computational demands, particularly for inference and test-time compute, and it highlights the accelerating resource intensity of AI adoption. OpenAI’s release of GPT-4o mini in 2024, a smaller, more efficient model, lowered costs and expanded access for businesses and consumers alike. Within a year, ChatGPT’s active user base doubled to over 200 million weekly users, driving query volumes to an estimated 3 billion per week. Despite the reduced energy cost per query, aggregate consumption continued to climb.
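The gap between the two doubling periods compounds quickly, as this short arithmetic sketch shows; the doubling times are the ones quoted above, and the multipliers follow directly from them.

# Growth implied by the two doubling periods quoted above.

def growth(years: float, doubling_period_years: float) -> float:
    """Capability multiplier after `years` given a doubling period."""
    return 2.0 ** (years / doubling_period_years)

for years in (1, 2, 4):
    moore = growth(years, 2.0)  # transistor density: doubles every 2 years
    ai = growth(years, 0.5)     # AI performance: doubles every 6 months
    print(f"after {years} yr: Moore x{moore:.1f}, AI scaling x{ai:.1f}")

# After four years Moore's Law yields 4x while the six-month cadence
# yields 256x, a 64-fold gap in implied compute (and energy) demand.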