Lisa Su, the president, CEO and chair of AMD, announced new releases of MI300 chips and the Ryzen AI Software Platform at a recent conference. (Source: Wikipedia/AMD Global)

AMD Ups the Ante by Revealing New AI Chip, Advanced Processor to Compete with Nvidia’s H100

When building the computer chips that run the world, you want your chip to be the fastest. It’s like having the fastest racehorse or Formula 1 racer. You want first place.

AMD thinks it has the new leader in the chip race, according to a new article from theverge.com. AMD wants people to remember that Nvidia is not the only company selling AI chips. The article is full of computing speeds and other figures that may not mean much to the average person, but they mean a great deal to chipmakers like Nvidia.

“The chipmaker unveiled the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU), which the company said works to train and run LLMs. The company said the MI300X has 1.5 times more memory capacity than the previous MI250X version. Both new products have better memory capacity and are more energy-efficient than their predecessors, said AMD.”

“LLMs continue to increase in size and complexity, requiring massive amounts of memory and compute,” AMD CEO Lisa Su said. “And we know the availability of GPUs is the single most important driver of AI adoption.”

Su called the MI300X “the highest-performing accelerator in the world,” comparable to Nvidia’s H100 chips in training LLMs. However, she claimed the MI300X performed better at inference, running up to 1.4 times faster than the H100 on Meta’s Llama 2, a 70-billion-parameter LLM.

What It Means

Su’s presentation teased some other advancements planned at AMD. The Ryzen AI Software Platform is now widely available; it lets developers building AI models on Ryzen-powered laptops offload those models onto the NPU, freeing the CPU and reducing power consumption. Users will get support for foundation models such as the speech recognition model Whisper and LLMs like Llama 2.

AMD, Nvidia and Intel are all racing to produce better AI chips, though Nvidia is far ahead of the others in market share with its H100 GPUs used to train models like OpenAI’s GPT.

As they say, the race is on.

read more at theverge.com