Machine Learning Times
Most Researchers Do Not Believe AGI Is Imminent. Why Do Policymakers Act Otherwise?

 

Originally published on Tech Policy Press, March 19, 2025.

AI research—these days primarily driven by corporate interests—often embraces strange priorities. Amidst multiple crises in public health, climate, and democracy, we could do better than synthetic image and text generation products and personalized chatbots as the defining technologies of our era.

But the companies hyping these products tell us every improvement demonstrates progress toward an even stranger goal: artificial general intelligence, or AGI. The splashy announcement of any new model is cast as evidence of the inevitable trajectory toward machines that learn and act as humans do. New capabilities are pitched as steps toward the goal of machines that may even outperform humans. Investment in the companies that build these systems is, of course, heavily dependent on this promise.

Around the world, policymakers appear increasingly eager to satisfy the interests of tech firms that claim they can deliver AGI. Perhaps it’s natural—if you were a politician or a head of state confronted with a complex, interconnected set of problems with no immediate solution, you might crave the answer these companies are selling. And you might be more than a little hungry for the type of transformation that such technology might create under your leadership.

However, there is danger in policymakers becoming just as invested in the promise of AGI as the tech sector's leaders are. When policymakers buy the hype, the public pays for it.

AGI is Unlikely in the Near Term

First, it’s important to establish that there is good reason for skepticism about claims that AGI is imminent, despite the speculative fever amongst industry figures and some in the press. A recent survey of 475 AI researchers by the Association for the Advancement of Artificial Intelligence (AAAI) conducted as part of its panel on the future of AI research found that “[t]he majority of respondents (76%) assert that ‘scaling up current AI approaches’ to yield AGI is ‘unlikely’ or ‘very unlikely’ to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.” The many limitations of transformer-based architectures suggest AGI is hardly right around the corner.

To continue reading this article, click here.
