AMD's CDNA 3-based MI300X accelerator takes AI to a new level
wepc.com - 1 year, 7 months ago
AMD announced a wealth of new technology at its recent Data Center & AI Technology Premiere. Among the highlights was the MI300X, a variant of the AI-accelerated MI300A APU. Here's how AMD's CDNA 3-based MI300X accelerator takes AI to a new level.
AMD has shared new information about its Instinct MI300 Series accelerator family, introducing the AMD Instinct MI300X accelerator, which the company bills as the world's most advanced accelerator for generative AI.
If you think this is great, you should take a look at the 96-core Genoa-X CPU AMD just announced.
This accelerator is based on the next-generation AMD CDNA 3 architecture and can support up to 192 GB of HBM3 memory. The MI300X is designed to provide efficient compute and memory capabilities for training and inference of large language models in generative AI tasks.
With the AMD Instinct MI300X's ample memory, customers can run large language models such as Falcon-40B, which has 40 billion parameters, on a single MI300X accelerator. Not only that, but the MI300X can support running an 80-billion-parameter AI model using FP16 inferencing.
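The memory claim is easy to sanity-check with back-of-the-envelope math: FP16 stores each parameter in 2 bytes, so model weights alone scale linearly with parameter count. The sketch below (an assumption-laden estimate, not AMD's methodology — real deployments also need memory for the KV cache and activations, which this ignores) shows why both a 40B and an 80B model fit within 192 GB:

```python
# Rough FP16 weight-footprint estimate for LLM inference.
# Assumption: 2 bytes per parameter (FP16), weights only.
BYTES_PER_FP16_PARAM = 2
GIB = 1024**3
MI300X_HBM3_GB = 192  # capacity stated in the article

def weights_gib(num_params: float) -> float:
    """Approximate FP16 weight footprint in GiB."""
    return num_params * BYTES_PER_FP16_PARAM / GIB

for billions in (40, 80):
    gib = weights_gib(billions * 1e9)
    print(f"{billions}B params -> ~{gib:.0f} GiB of FP16 weights "
          f"(fits in {MI300X_HBM3_GB} GB HBM3: {gib <= MI300X_HBM3_GB})")
```

A 40B model needs roughly 75 GiB and an 80B model roughly 149 GiB of weights, both under the 192 GB ceiling — which is why AMD can demo Falcon-40B on a single accelerator.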
AMD has also unveiled the AMD Instinct Platform, which brings together eight MI300X accelerators in a standard design. This platform offers a comprehensive solution for AI inference and training.
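At the platform level the capacity figures simply multiply out; assuming no memory is reserved or mirrored across accelerators (an assumption on our part), the aggregate pool looks like this:

```python
# Aggregate HBM3 capacity of the eight-GPU AMD Instinct Platform.
# Assumes no memory is reserved or mirrored across accelerators.
ACCELERATORS = 8
HBM3_PER_GPU_GB = 192

total_gb = ACCELERATORS * HBM3_PER_GPU_GB
print(f"Platform HBM3 total: {total_gb} GB")  # prints 1536 GB
```

That 1.5 TB pool is what makes the platform attractive for training as well as inference, where model state is sharded across all eight accelerators.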
The MI300X accelerator is currently sampling to key customers, with availability expected in the third quarter.
Additionally, AMD has announced that the AMD Instinct MI300A, the world's first APU accelerator for high-performance computing (HPC) and AI workloads, is now sampling to customers.
AMD CEO Dr. Lisa Su wasn't afraid to talk numbers on the upcoming MI300X AI accelerator.
Compared to Nvidia's H100 AI GPU, the MI300X offers up to 2.4x the HBM density and 1.6x the HBM bandwidth. This could be AMD's chance to chip away at some of Nvidia's AI market share, since Nvidia is currently seen as leading the charge in AI hardware.
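Those multipliers can be cross-checked against the H100's published figures. The baseline numbers below are assumptions drawn from Nvidia's public H100 SXM specs (80 GB of HBM3 at roughly 3.35 TB/s), not from the article itself:

```python
# Cross-checking the "2.4x HBM density, 1.6x HBM bandwidth" claims.
# Assumed H100 SXM baseline (public specs, not from the article):
H100_HBM_GB = 80
H100_BW_TBS = 3.35

mi300x_hbm_gb = H100_HBM_GB * 2.4  # implied MI300X capacity
mi300x_bw_tbs = H100_BW_TBS * 1.6  # implied MI300X bandwidth

print(f"Implied MI300X HBM3 capacity: {mi300x_hbm_gb:.0f} GB")
print(f"Implied MI300X HBM3 bandwidth: ~{mi300x_bw_tbs:.2f} TB/s")
```

The implied capacity lands at exactly 192 GB, matching the figure AMD quoted for the MI300X, which lends some credence to the 2.4x density claim.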
While we don't have any more performance metrics for you just yet, it's exciting to see AMD dive head-first into AI acceleration and show just how much it can achieve in such a short space of time. That was how AMD's CDNA 3-based MI300X accelerator takes AI to a new level.
If you missed the AMD Data Center & AI Technology Premiere, you can watch it on YouTube.