At MIT Technology Review, senior editor Will Douglas Heaven offers a corrective to the continuous yelping in legacy media that AGI — machines that think like people — is just around the corner:
For many, AGI is more than just a technology. In tech hubs like Silicon Valley, it’s talked about in mystical terms. Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings. And he feels it more than most: In 2024, he left OpenAI, whose stated mission is to ensure that AGI benefits all of humanity, to cofound Safe Superintelligence, a startup dedicated to figuring out how to avoid a so-called rogue AGI (or control it when it comes). Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.
“How AGI became the most consequential conspiracy theory of our time,” October 30, 2025
Mind Matters News readers who have followed the work of Gary Smith (here, for example) will certainly know better. But fame and fortune lie in spinning the tale that terrifies, not reporting the plain old facts.
Heaven is blunt:
Here’s what I think: AGI is a lot like a conspiracy theory, and it may be the most consequential one of our time.
I have been reporting on artificial intelligence for more than a decade, and I’ve watched the idea of AGI bubble up from the backwaters to become the dominant narrative shaping an entire industry. A onetime pipe dream now props up the profit lines of some of the world’s most valuable companies and thus, you could argue, the US stock market. It justifies dizzying down payments on the new power plants and data centers that we’re told are needed to make the dream come true. Fixated on this hypothetical technology, AI firms are selling us hard.
To continue reading this article, click here.