
After GPT-5’s rocky debut drew widespread criticism, Sam Altman and OpenAI defended the model as a critical scientific tool and reaffirmed their commitment to scaling, reinforcement learning and the long-term pursuit of artificial general intelligence. (Source: Image by RR)
Altman Reaffirms OpenAI’s Confidence in Scale and Self-Improving Models
OpenAI’s August rollout of GPT-5 was anything but smooth. The livestream showcasing the new model was riddled with technical glitches, including incorrect charts and awkward pauses, while users quickly took to Reddit to complain that the model felt “less friendly” and “colder” than GPT-4. Many begged for a rollback to the previous version. Critics such as Gary Marcus called GPT-5 the most overhyped system in AI history, saying it failed to deliver the promised leaps toward artificial general intelligence (AGI) or human-level reasoning. To them, GPT-5’s debut marked a turning point—proof that OpenAI’s scaling strategy had peaked and the long-predicted “AI winter” might be beginning.
Sam Altman disagrees. A month after the shaky launch, he insists GPT-5 is every bit the game-changer he said it would be, just misunderstood. “The vibes were bad at launch, but now they’re great,” he said in an interview. OpenAI’s leadership, as Wired reports, argues that the model’s real power lies in its ability to accelerate scientific discovery. Altman claims that researchers in physics and biology are already using GPT-5 to uncover new insights, calling it the first model that can “actually help humanity discover science.” OpenAI president Greg Brockman added that the jump from GPT-4 to GPT-5 was massive, but because OpenAI rolled out smaller, significant improvements over the past year, users didn’t fully notice how far the technology had advanced.
While critics accused OpenAI of hitting the limits of scale, the company says GPT-5’s progress came from innovation, not size. The team emphasized that instead of simply increasing dataset size or compute power, GPT-5’s strength comes from reinforcement learning: fine-tuning the model through expert human feedback and self-generated data. Brockman described this as a turning point: “When the model is dumb, you make it bigger. When it’s smart, you train it on its own data.” Still, OpenAI continues to invest heavily in massive infrastructure projects, such as the $500 billion Stargate data center initiative, designed to power even more advanced systems. Altman bristled when asked whether scaling had reached its limits, insisting that GPT-6 and GPT-7 will be “significantly better,” with OpenAI maintaining an unbroken record of improvement.
Despite the controversy, OpenAI’s leaders continue to describe GPT-5 as a milestone on the road to AGI. But they’ve quietly changed how they define that goal. Instead of treating AGI as a single destination, Altman and Brockman now present it as an ongoing process—a continuous evolution that will transform the economy and human work. “We used to think of AGI as an endpoint,” Brockman said. “Now it’s a continuous transformation.” Within OpenAI’s headquarters, AGI has also become part of the company’s culture and branding. Stickers and posters reading “FEEL THE AGI” adorn the walls, while Altman continues to promise that each new model will bring the world closer to the next phase of intelligence. Whether GPT-5 marks a misstep or a quiet leap forward remains to be seen, but OpenAI’s leaders are betting billions that the journey to AGI is still very much alive.
read more at wired.com