AI’s Threat to Humanity

Geoffrey Hinton, often called “the godfather of artificial intelligence,” issued the following stark warning while accepting the 2024 Nobel Prize in physics.

The Nobel committees in physics and chemistry have recognized the dramatic progress being made in a new form of artificial intelligence that uses artificial neural networks to learn how to solve difficult computational problems. This new form of AI excels at modeling human intuition rather than human reasoning, and it will enable us to create highly intelligent and knowledgeable assistants who will increase productivity in almost all industries. If the benefits of the increased productivity can be shared equally, it will be a wonderful advance for all humanity.

Unfortunately, the rapid progress in AI comes with many short-term risks. It has already created divisive echo chambers by offering people content that makes them indignant. It is already being used by authoritarian governments for massive surveillance and by cybercriminals for phishing attacks. In the near future, AI may be used to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim. All these short-term risks require urgent and forceful attention from governments and international organizations.

There is also a longer-term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction.

It Will Be Smarter Than Us
And we won’t be able to control it.
BY KATHRYN MCKENZIE
NEWSMAX | SEPTEMBER 2025

Geoffrey Hinton, a Nobel Prize winner in physics for his pioneering work in artificial intelligence, worries that AI could one day outsmart and control humanity. Hinton believes the technology he helped create could take control of humans. He told CBS Saturday Morning that a profit-driven AI arms race may be speeding up the danger.

Hinton, one of the creators of key neural network technologies, has long warned that AI could pose an existential threat to humans. In 2023, he left Google after becoming concerned about the risks that artificial intelligence could pose to humanity. He has previously warned that AI systems might one day become smarter than humans and act in ways we cannot control, potentially posing a threat to humanity itself.

“If you consider the possibility that these things will get much smarter than us and then just take control away from us, just take over, the probability of that happening is very likely more than 1% and very likely less than 99%. Pretty much all the experts can agree on that,” he said.

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

He stressed that if the world continued to approach AI with a profit-driven mindset, there would be a greater likelihood of an AI takeover or bad actors co-opting the technology for dangerous means, such as mass surveillance.

Despite his concerns, Hinton said there was also reason to be optimistic about the impact AI will have on the world, pointing to healthcare, drug development, and education as areas where AI could improve the human experience.