First, It Makes You Money. Then, It Solves Physics.
The definition of "artificial general intelligence" has always been a moving target, a convenient fiction for marketing departments and venture capitalists. But the latest proposal from OpenAI's Sam Altman—that a future "GPT-8" could be considered AGI if it solves quantum gravity—is a new level of absurdity. It's a smokescreen, a deflection, and a perfect example of how the tech industry uses grandiose promises to distract from the mundane harms of its products.
For years, the benchmark for AGI was the Turing Test. Then it was beating a grandmaster at chess, and then at Go. As each milestone was reached and the magic wore off, the goalposts were moved. The Microsoft/OpenAI agreement offered a more honest, if cynical, definition: AGI is a system that can generate $100 billion in profit. At least that's transparent about the real goal.
Now, we have "solving quantum gravity." This is a particularly insidious benchmark. It's unfalsifiable for the foreseeable future, allowing OpenAI to claim they're on the path to some god-like intelligence without ever having to prove it. It also conflates pattern recognition with genuine understanding. A language model might be able to string together enough existing research to produce a plausible-looking theory, but that's not the same as the intuitive, creative leap of a human mind. It's a party trick, not a paradigm shift.
And while we're being distracted by these sci-fi fantasies, the real-world consequences of today's AI are piling up. Cognitive degradation, algorithmic bias, labor displacement: these are not future risks; they are present-day harms. The AGI hype is a convenient way to avoid talking about them. It's a promise of a magical future that justifies the damage being done in the present.
It's time to stop playing this game. "AGI" is a meaningless term. We should be focused on the AI that exists today, not the fairy tales being spun about the AI of tomorrow. The question isn't whether a machine can solve quantum gravity. The question is how we can mitigate the damage that these systems are already doing to our societies, our economies, and our minds.