
AMD Instinct MI300A

Socketed APU (SH5) · In production · Released 2023 · CDNA 3 APU
BF16: 981 TFLOP/s (vendor-claimed)
FP8: 1962 TFLOP/s (vendor-claimed)
FP4: unsupported
Memory: 128 GB (vendor-claimed)
Mem BW: 5300 GB/s (vendor-claimed)
TDP: 760 W (vendor-claimed)

Full specs

Compute

FP4 TFLOPS: unsupported
FP8 TFLOPS: 1962
BF16 TFLOPS: 981
FP16 TFLOPS: 981
INT8 TOPS: 1962

Memory

Capacity: 128 GB
Bandwidth: 5300 GB/s
Type: HBM3

Die architecture 🟢 vendor floorplan

CU count: 228
L2 cache: 256 MB
HBM stacks: 8
Process: 5 nm
Die area: 1017 mm²
Transistors: 146 B
PCIe: Gen 5 ×16

Scale-Up (intra-node)

Protocol: Infinity-Fabric
Per-link BW: 768 GB/s
World size: 4
Topology: fully-connected
Switch: none

Scale-Out (inter-node)

Per-card NIC: 400 Gbps
Protocol: RoCEv2
NIC: unspecified

Topology

4-card scale-up domain
Die-level architecture
[Die diagram: AMD Instinct MI300A. 8× HBM stacks around 228 CUs (darker block = tensor/matrix engine per CU); L1$/register file per CU; shared L2 cache + NoC. 981 TFLOPS BF16 · 1962 FP8 · 128 GB HBM3 @ 5.3 TB/s · 760 W TDP]

🟢 vendor floorplan 228 CUs · 8× HBM · 256 MB L2 · 5 nm · 146 B transistors · 1017 mm²


Cluster topology · Infinity-Fabric @ 768 GB/s
[Cluster diagram: 4 GPUs (128 GB each), all-to-all over Infinity-Fabric at 768 GB/s per link; fully-connected topology; scale-out at 400 Gbps per card]
Scale-Up (intra-domain)
Infinity-Fabric
768 GB/s · topology: fully-connected
world_size = 4
Scale-Out (inter-domain)
RoCEv2
400 Gbps NIC per card
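
The 4-card domain is fully connected over Infinity-Fabric at a claimed 768 GB/s per link. A rough, idealized model of collective time inside the domain can be sketched from that figure alone; the ring all-reduce formula, the 1 GiB message size, and the zero-latency / full-link-utilization assumptions below are illustrative, not measurements.

```python
# Idealized ring all-reduce time inside the 4-card Infinity-Fabric domain.
# Assumptions (not measured): the full 768 GB/s per link is achieved and
# launch/synchronization latency is ignored.

PER_LINK_BW = 768e9   # bytes/s, vendor-claimed per-link Infinity-Fabric bandwidth
WORLD_SIZE = 4        # cards in the scale-up domain

def ring_allreduce_seconds(message_bytes: float,
                           world_size: int = WORLD_SIZE,
                           link_bw: float = PER_LINK_BW) -> float:
    """Classic ring all-reduce moves 2*(N-1)/N of the buffer over each link."""
    traffic = 2 * (world_size - 1) / world_size * message_bytes
    return traffic / link_bw

# Example: all-reducing a 1 GiB buffer of activations/gradients
msg = 1 * 2**30
print(f"{ring_allreduce_seconds(msg) * 1e3:.2f} ms")   # ≈ 2.1 ms under these assumptions
```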

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=4 · FP8 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied (a simplified sketch follows the table)

Adjust in the calculator →
Model · Vendor · Params (active) · Decode tok/s/card · Bottleneck
DeepSeek V4 Pro · deepseek · 49B · n/a · out of memory
DeepSeek V4 Flash · deepseek · 13B · 151 · memory bandwidth
Mistral Small 4 · mistral · 22B · 69 · memory bandwidth
GLM-5 Reasoning · zhipu · 32B · 57 · memory bandwidth
GLM-5.1 · zhipu · 32B · n/a · out of memory
Qwen3.6 Plus · alibaba · 35B · 37 · memory bandwidth
Kimi K2.6 · moonshot · 32B · n/a · out of memory
MiniMax M2.7 · minimax · 46B · 25 · memory bandwidth
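
A minimal sketch of how such a quick estimate can be produced, assuming the settings above (TP=4, FP8, batch=16, prefill=1024, decode=256) and the default 0.5 efficiency. The per-model byte counts passed in are hypothetical placeholders, and the site's calculator applies per-model layouts and extra calibration, so this will not reproduce the table's exact numbers.

```python
# Simplified fit check + memory-bandwidth bound for decode on a 4x MI300A domain.
# Constants follow the table header; model-specific inputs are placeholders.

CARDS, CAP_PER_CARD, BW_PER_CARD = 4, 128e9, 5300e9   # vendor-claimed capacity/bandwidth
BATCH, CONTEXT, EFF = 16, 1024 + 256, 0.5             # batch, prefill+decode tokens, default efficiency

def quick_estimate(total_param_bytes, active_param_bytes, kv_bytes_per_token):
    """Return 'out of memory' or a decode tok/s/card upper bound (bandwidth side only)."""
    kv_total = BATCH * CONTEXT * kv_bytes_per_token
    if total_param_bytes + kv_total > CARDS * CAP_PER_CARD:
        return "out of memory"
    # Each decode step streams the active weights plus every live KV cache once.
    bytes_per_step = active_param_bytes + kv_total
    steps_per_s = EFF * CARDS * BW_PER_CARD / bytes_per_step
    return f"{steps_per_s * BATCH / CARDS:.0f} tok/s/card (bandwidth bound)"

# Hypothetical 13B-active FP8 MoE model with ~100 KB of KV cache per token:
print(quick_estimate(total_param_bytes=60e9, active_param_bytes=13e9,
                     kv_bytes_per_token=100e3))
```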

Operator-level fit · per-model bottleneck + upper bound

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 981 TFLOPS / 5,300 GB/s · ridge point ≈ 185 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied (a simplified sketch follows the table)
Model · Domain · Dominant operator · AI (FLOPs/byte) · Bottleneck · tok/s upper bound
DeepSeek V4 Pro · llm · matmul · 245.5 · 🔥 compute · 163k
GraphCast · scientific · graph-message-passing · 0.9 · 💾 memory bandwidth · 9779
AlphaFold 3 · scientific · pair-bias-attention · 2.3 · 💾 memory bandwidth · 2938
GPT-OSS · llm · matmul · 0.7 · 💾 memory bandwidth · 428
Gemma 4 26B · llm · matmul · 0.7 · 💾 memory bandwidth · 318
DeepSeek V4 Flash · llm · matmul · 0.8 · 💾 memory bandwidth · 301
Mistral Small 4 · llm · matmul · 0.6 · 💾 memory bandwidth · 137
Llama 4 Maverick · llm · matmul · 0.8 · 💾 memory bandwidth · 136
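
A minimal sketch of this per-token roofline, using the card's vendor-claimed BF16 throughput and memory bandwidth. The example per-token FLOP and byte counts are hypothetical and not taken from any model's operator_decomposition.

```python
# Per-token roofline behind the table above: an operator whose arithmetic
# intensity (AI) is below the ridge point is memory-bandwidth bound, above it
# compute bound. No efficiency factor is applied here.

PEAK_FLOPS = 981e12            # BF16 FLOP/s, vendor-claimed
PEAK_BW = 5300e9               # bytes/s, vendor-claimed
RIDGE = PEAK_FLOPS / PEAK_BW   # ≈ 185 FLOPs/byte

def roofline(flops_per_token: float, bytes_per_token: float):
    """Classify the bottleneck and return the tok/s upper bound."""
    ai = flops_per_token / bytes_per_token
    bottleneck = "compute" if ai > RIDGE else "memory bandwidth"
    tok_s = min(PEAK_FLOPS / flops_per_token, PEAK_BW / bytes_per_token)
    return ai, bottleneck, tok_s

# Hypothetical per-token numbers for a matmul-dominated decode step:
ai, bn, tok_s = roofline(flops_per_token=2 * 13e9, bytes_per_token=13e9 * 2)
print(f"AI={ai:.1f} FLOPs/byte, bottleneck={bn}, upper bound={tok_s:,.0f} tok/s")
```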
Needs efficiency calibration + concurrency sweep + TCO estimation → evaluate in the calculator →

Operator support & optimization headroom

Operator support & headroom

Per-operator support is derived from software_support.engines plus the scale-up topology. Optimization headroom is derived from the measured efficiency factor.

Optimization headroom: +50 pp (moderate)

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.
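
A small sketch of how that headroom figure can be read: with no measured cases the efficiency factor defaults to 0.5, so the nominal gap to the roofline ceiling is 50 percentage points (the 1.0 ceiling is an idealization; real kernels rarely reach it).

```python
# Headroom = gap between the efficiency factor in use and the roofline ceiling.
# With no measured cases, a default of 0.5 is assumed; 1.0 is the idealized ceiling.
default_efficiency = 0.5
ceiling = 1.0
headroom_pp = (ceiling - default_efficiency) * 100
print(f"+{headroom_pp:.0f} pp")   # +50 pp, matching the figure above
```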

Communication (collective)
All-to-All 🟡 partial
small scale-up domain; expert-parallel needs careful sharding
AllReduce 🟢 mature
Infinity-Fabric ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🟢 mature
FA-3 on modern engine + tensor cores
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software-stack support

Engine · Status · dtype support (BF16 / FP16 / FP4 / FP8 E4M3 / FP8 E5M2 / INT4 AWQ)
HanGuangAI · unconfirmed
LMDeploy · unconfirmed
MindIE · unconfirmed
MoRI · unconfirmed
SGLang · unconfirmed
TensorRT-LLM (Dynamo) · unconfirmed
vLLM · official
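
vLLM is the only engine listed with official support here. A minimal offline-inference sketch using its Python API is below; it assumes a ROCm build of vLLM running on the MI300A node, the model name is a placeholder, and tensor_parallel_size=4 simply matches the 4-card scale-up domain.

```python
# Minimal vLLM offline-inference sketch for a 4x MI300A node (ROCm build assumed).
# The model name is a placeholder; dtype/quantization should match the checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="some-org/some-model",   # placeholder, not a recommendation
    tensor_parallel_size=4,        # one shard per MI300A in the scale-up domain
    dtype="bfloat16",              # BF16 is the card's headline precision here
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain the MI300A memory hierarchy."], params)
print(outputs[0].outputs[0].text)
```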

Existing deployment cases (0)

No measured cases yet for this card. Be the first contributor?

Citations

  1. AMD MI300A APU product page (24× Zen-4 cores + 228 CDNA-3 CUs, unified memory) — https://www.amd.com/en/products/accelerators/instinct/mi300/mi300a.html · accessed 2026-04-28 · vendor-claimed
  2. AMD CDNA 3 white paper: 228 CUs (vs 304 in MI300X) + 24 Zen-4 cores in the same package, 256 MB Infinity Cache, 8× HBM3 ⇒ 128 GB unified, 146 B transistors, TSMC 5 nm + 6 nm chiplets — https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf · accessed 2026-04-28 · vendor-claimed
⚠ MI300A is an APU (integrated CPU + GPU); used in the El Capitan supercomputer and in emerging AI training systems.
⚠ The smaller CU count vs MI300X (228 vs 304) is traded for 24 embedded Zen-4 cores and a unified memory model.
⚠ All performance figures are vendor-claimed unless tier=measured.