Nvidia CEO Jensen Huang claims the company’s AI chips are advancing faster than Moore’s Law, with breakthroughs like the GB200 NVL72 promising to drastically improve AI inference performance, reduce costs, and accelerate AI model capabilities.

Huang Predicts That AI Computing Power Will Continue to Skyrocket Beyond Expectations

Nvidia CEO Jensen Huang asserts that the company’s AI chips are advancing at a rate far exceeding Moore’s Law, which historically predicted that transistor density, and with it computing power, would double roughly every two years. Huang attributes Nvidia’s rapid progress to its ability to innovate simultaneously across chip architecture, system design, software libraries, and algorithms, yielding significant performance gains in AI inference. The latest GB200 NVL72 data center superchip, for example, is reportedly 30 to 40 times faster at AI inference than its predecessor, suggesting a dramatic reduction in the cost of inference over time.

As reported by techcrunch.com, Huang dismisses claims that AI progress is slowing, arguing instead that three scaling laws (pre-training, post-training, and test-time compute) are driving continuous improvements in AI capabilities. He emphasizes that AI inference costs, currently a major concern because models like OpenAI’s o3 require extensive computation, will decline as Nvidia continues to improve chip performance. Nvidia’s focus, he says, is not only on building more powerful AI hardware but also on making AI operations increasingly cost-effective.

Despite Nvidia’s dominance in AI hardware, questions remain about whether its expensive chips will retain their leadership as companies shift focus from training AI models to inference, which demands cost-efficient processing. Test-time compute, a method that lets AI models “think” longer to produce better responses, remains expensive today, with OpenAI reportedly spending nearly $20 per task to achieve human-level scores on an intelligence benchmark. However, Huang argues that advancements in AI chips will make this approach more affordable in the long term, mirroring past reductions in AI model pricing.
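As a rough back-of-the-envelope illustration (the numbers below are assumptions extrapolated from the figures quoted in the article, not Nvidia's or OpenAI's own projections), if inference performance per dollar improves by roughly the 30x factor cited for the GB200 NVL72, the cost of running the same reasoning task would fall proportionally:

```python
# Illustrative sketch: how a fixed per-task inference cost scales down
# if hardware performance per dollar improves by a given factor each
# generation. All figures are assumptions, not vendor data.

def cost_after_generations(initial_cost, speedup_per_gen, generations):
    """Cost of the same task after N hardware generations, assuming cost
    falls in direct proportion to the per-generation performance gain."""
    return initial_cost / (speedup_per_gen ** generations)

# ~$20 per task today (the figure cited in the article for OpenAI's o3)
today = 20.0

# One generation at the ~30x gain claimed for the GB200 NVL72
print(f"${cost_after_generations(today, 30, 1):.2f} per task")  # ≈ $0.67
```

The point of the sketch is only that the cost curve is exponential in the number of generations, which is why Huang can argue that today's expensive test-time compute becomes routine within a few hardware cycles.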

Overall, Huang asserts that Nvidia’s AI chips are already 1,000 times more powerful than those from a decade ago, and he expects this accelerated trajectory to continue. As Nvidia continues to push the boundaries of AI hardware, its innovations will likely shape the future of AI applications, making once-costly AI reasoning models more accessible and practical for widespread use.
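The 1,000x-in-a-decade claim can be put in context with simple arithmetic (a sketch using Moore's Law's classic two-year doubling as the baseline; no data beyond the figures quoted above):

```python
# Compare Moore's Law's classic doubling cadence with the decade-scale
# gain Huang claims for Nvidia's chips. Purely arithmetic.

years = 10
moores_law_gain = 2 ** (years / 2)  # doubling every 2 years -> 2^5 = 32x
claimed_gain = 1000                 # Huang's stated gain over a decade

print(f"Moore's Law over {years} years: {moores_law_gain:.0f}x")
print(f"Claimed gain: {claimed_gain}x, "
      f"about {claimed_gain / moores_law_gain:.0f}x beyond Moore's Law")
```

In other words, the classic cadence would deliver roughly 32x over ten years, so the claimed 1,000x implies a pace about 31 times faster than Moore's Law alone would predict.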

Read more at techcrunch.com