
“Godfather of AI” Geoffrey Hinton warns AI could take control from humans: “People haven’t understood what’s coming”
April 26, 2025
“Godfather of AI” Geoffrey Hinton was awakened in the middle of the night last year with news he had won the Nobel Prize in physics. He said he never expected such recognition.
“I dreamt about winning one for figuring out how the brain works. But I didn’t figure out how the brain works, but I won one anyway,” Hinton said.
The 77-year-old researcher earned the award for his pioneering work on neural networks, including a 1986 method for predicting the next word in a sequence, a foundational concept behind today’s large language models.
While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he’s increasingly concerned about its rapid development.
“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”
The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.
“People haven’t got it yet, people haven’t understood what’s coming,” he warned.
His concerns echo those of industry leaders like Google CEO Sundar Pichai, xAI’s Elon Musk, and OpenAI CEO Sam Altman, who have all expressed similar worries. Yet Hinton criticizes these same companies for prioritizing profits over safety.
“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” Hinton said.
Hinton appears particularly disappointed with Google, where he previously worked, for reversing its stance on military AI applications.
According to Hinton, AI companies should dedicate significantly more resources to safety research — “like a third” of their computing power — compared to the much smaller fraction currently allocated.
CBS News asked all of the AI labs mentioned how much of their computing power goes to safety research. None provided a figure. All have said safety is important and that they support regulation in general, but they have mostly opposed the specific regulations lawmakers have put forward so far.