AMD Preps GPU to Challenge Nvidia's Grip on the Generative AI Market
pcmag.com
Nvidia has been dominating the market for chips capable of training generative AI programs, but AMD is now trying to claim its share of the pie through a new enterprise-grade GPU.
The company today announced the AMD Instinct MI300X, a so-called "accelerator" chip designed to train large language models that can power programs such as OpenAI's ChatGPT.
"AI is really the defining technology that's shaping the next generation of computing, and frankly it's AMD's largest and most strategic long-term growth opportunity," said AMD CEO Lisa Su during the product's unveiling.
The MI300X tries to beat the competition by featuring an "industry-leading" 192GB of HBM3 memory and is built on AMD's data center-focused CDNA 3 architecture, which is designed for AI workloads. Customers will be able to pack eight MI300X accelerators into a single system, enabling the GPUs to train larger AI models than competing setups allow.
"For the largest models, it actually reduces the number of GPUs you need, significantly speeding up the performance, especially for inference, as well as reducing total costs of ownership," Su said.
The MI300X shares a design with AMD's other AI-focused chip, the MI300A, which is slated to arrive in supercomputers. The difference is that the company replaced the MI300A's Zen 4 CPU chiplets with GPU chiplets, making the MI300X a pure GPU processor.
"You might see it looks very, very similar to MI300A, 'cause basically we took three chiplets off and put two (GPU) chiplets on, and we stacked more HBM3 memory," Su added. "We truly designed this product for generative AI."
In a demo, Su also showed a single MI300X equipped with 192GB of memory running the open-source large language model Falcon-40B. The model was asked to write a poem about San Francisco and produced the text within several seconds.
"What's special about this demo is it's the first time a large language model of this size can be run entirely in memory on a single GPU," she added.
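A rough capacity check helps explain the claim. This is an illustrative back-of-the-envelope calculation, not from the article, and it assumes the model's 40 billion weights are stored in 16-bit precision; activations and other runtime state add overhead on top:

```python
# Back-of-the-envelope memory estimate for fitting Falcon-40B on one GPU.
# Assumption: 2-byte (fp16/bf16) weights; runtime overhead is ignored here.
params = 40e9            # Falcon-40B parameter count
bytes_per_param = 2      # 16-bit weight storage
weight_gb = params * bytes_per_param / 1e9   # ~80 GB of weights

mi300x_memory_gb = 192   # HBM3 capacity quoted for the MI300X
fits = weight_gb < mi300x_memory_gb
print(f"Weights: ~{weight_gb:.0f} GB; fits in {mi300x_memory_gb} GB: {fits}")
```

At roughly 80GB of weights, the model leaves headroom within 192GB, whereas it would not fit in the 80GB offered by a single A100 or standard H100.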
The new hardware arrives as Nvidia expects its sales to skyrocket in the coming quarters thanks to demand for generative AI chatbots. To develop the technology, companies across the industry have been buying Nvidia's A100 GPU, which can cost around $10,000. Nvidia also sells the H100 GPU, which can now be configured with up to 188GB of HBM3 memory.