24 Zen 4 CPU Cores, 146 Billion Transistors, 128 GB HBM3, Up To 8x Faster Than MI250X

AMD has just confirmed the specs of its Instinct MI300 ‘CDNA 3’ accelerator which uses Zen 4 CPU cores in a 5nm 3D chiplet package.

AMD Instinct MI300 ‘CDNA 3’ Specs: 5nm Chiplet Design, 146 Billion Transistors, 24 Zen 4 CPU Cores, 128GB HBM3

The latest specs for the AMD Instinct MI300 accelerator confirm that this exascale APU will be a monster of a chiplet design. The chip will combine several 5nm 3D chiplet packages to house an insane 146 billion transistors, covering various core IPs, memory interfaces, interconnects, and much more. The CDNA 3 architecture is the underlying DNA of the Instinct MI300, but the APU also packs a total of 24 Zen 4 data center CPU cores and 128 GB of next-generation HBM3 memory running across an 8192-bit-wide bus, which is really mind-blowing.
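For a sense of what an 8192-bit bus means in practice, here is a back-of-the-envelope bandwidth estimate. The per-pin data rate is our assumption (first-generation HBM3 is specified at up to 6.4 Gb/s per pin); AMD has not confirmed the MI300's actual memory speed.

```python
# Rough peak-bandwidth estimate for an 8192-bit HBM3 memory bus.
bus_width_bits = 8192
pin_rate_gbps = 6.4  # assumed HBM3 per-pin data rate in Gb/s (not confirmed by AMD)

# bits/s across the bus, divided by 8 to convert to bytes/s
bandwidth_gb_s = bus_width_bits * pin_rate_gbps / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")  # -> Peak bandwidth: 6553.6 GB/s
```

Under that assumption, the package would offer roughly 6.5 TB/s of peak memory bandwidth, shared by both the CPU and GPU tiles.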

During its Financial Analyst Day 2022, AMD confirmed that the MI300 will be a multi-chip, multi-IP Instinct accelerator that not only features the next-generation CDNA 3 GPU cores but is also equipped with next-generation Zen 4 CPU cores.

To enable more than 2 exaflops of double-precision processing power, the US Department of Energy, Lawrence Livermore National Laboratory, and HPE have teamed up with AMD to design El Capitan, which is expected to be the world’s fastest supercomputer when it is delivered in early 2023. El Capitan will use next-generation products that incorporate improvements from Frontier’s custom processor design.

Next-generation AMD EPYC processors, codenamed “Genoa,” will feature the “Zen 4” processor core and support next-generation memory and I/O subsystems for AI and HPC workloads. Next-generation AMD Instinct GPUs, based on a new compute-optimized architecture for HPC and AI workloads, will leverage next-generation high-bandwidth memory for optimal deep learning performance.

This design will leverage AI and machine learning data analysis to create models that are faster, more accurate, and able to quantify the uncertainty of their predictions.

via AMD

In the latest performance comparisons, AMD showed that the Instinct MI300 delivers an 8x increase in AI performance (TFLOPs) and a 5x increase in AI performance per watt (TFLOPs/watt) over the Instinct MI250X.
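Taken together, AMD's two claims imply something about power draw: if raw AI throughput rises 8x while throughput per watt rises only 5x, the chip must consume proportionally more power. A quick sketch of that arithmetic (using only the ratios AMD quoted, no absolute wattages):

```python
# AMD's claimed gains for MI300 over MI250X (ratios only, per the announcement).
perf_gain = 8.0        # 8x AI TFLOPs
efficiency_gain = 5.0  # 5x AI TFLOPs per watt

# perf = efficiency * power, so the implied power ratio is perf / efficiency.
power_ratio = perf_gain / efficiency_gain
print(f"Implied power increase: {power_ratio:.1f}x")  # -> Implied power increase: 1.6x
```

In other words, hitting both targets would mean the MI300 draws roughly 1.6x the power of an MI250X, assuming the quoted figures are measured on comparable workloads.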

AMD will use 5nm and 6nm process nodes for its Instinct MI300 ‘CDNA 3’ APUs. The chip will be equipped with the next-generation Infinity Cache and the 4th-generation Infinity architecture, which enables support for the CXL 3.0 ecosystem. The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a 5x increase in performance per watt over CDNA 2. AMD is also projecting over 8x the AI performance of the CDNA 2-based Instinct MI250X accelerators. CDNA 3’s unified memory APU architecture (UMAA) will connect the CPU and GPU to a unified HBM memory package, eliminating redundant memory copies while delivering a low TCO.

The AMD Instinct MI300 APU accelerators are expected to be available by the end of 2023, which coincides with the deployment of the aforementioned El Capitan supercomputer.
