AMD Instinct MI300A
OAM · Available · Released 2023 · cdna3-apu
| Metric | Value | Source |
|---|---|---|
| BF16 | 981 TFLOP/s | vendor-claimed |
| FP8 | 1962 TFLOP/s | vendor-claimed |
| FP4 | Not supported | — |
| Memory | 128 GB | vendor-claimed |
| Mem BW | 5300 GB/s | vendor-claimed |
| TDP | 760 W | vendor-claimed |
Full specifications

Compute
- FP4: Not supported
- FP8: 1962 TFLOPS
- BF16: 981 TFLOPS
- FP16: 981 TFLOPS
- INT8: 1962 TOPS

Memory
- Capacity: 128 GB
- Bandwidth: 5300 GB/s
- Type: HBM3
Die architecture 🟢 vendor floorplan
- CU count: 228
- L2 cache: 256 MB
- HBM stacks: 8
- Process node: 5 nm
- Die area: 1017 mm²
- Transistors: 146 B
- PCIe: Gen 5 ×16
Scale-Up (intra-node)
- Protocol: Infinity-Fabric
- Per-link bandwidth: 768 GB/s
- World size: 4
- Topology: fully-connected
- Switch: —
Scale-Out (inter-node)
- Per-card egress: 400 Gbps
- Protocol: RoCEv2
- NIC: —
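These interconnect figures are what a collective-communication cost model would consume. Below is a minimal sketch of a bandwidth-only ring all-reduce estimate over the 4-way Infinity-Fabric domain (the operator table further down lists "Infinity-Fabric ring all-reduce" as the mature AllReduce path). The 0.5 efficiency factor is a placeholder assumption, latency terms are ignored, and the function name is illustrative, not from this page.

```python
def ring_allreduce_time_s(payload_bytes: float,
                          world_size: int = 4,          # scale-up domain size from the spec above
                          link_bw_gbs: float = 768.0,   # Infinity-Fabric per-link bandwidth (GB/s)
                          efficiency: float = 0.5) -> float:  # placeholder; no measured efficiency yet
    """Bandwidth-only ring all-reduce estimate: 2*(N-1)/N * bytes / effective_bw."""
    effective_bw = link_bw_gbs * 1e9 * efficiency
    traffic = 2 * (world_size - 1) / world_size * payload_bytes
    return traffic / effective_bw

# Example: all-reducing a 1 GiB buffer across the 4-APU domain
t = ring_allreduce_time_s(1 * 2**30)
print(f"{t * 1e3:.2f} ms")   # ≈ 4.2 ms at the assumed 50% efficiency
```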
Topology

4-GPU scale-up domain

Die-level architecture
🟢 vendor floorplan 228 CUs · 8× HBM · 256 MB L2 · 5 nm · 146 B transistors · 1017 mm²

Cluster topology · Infinity-Fabric @ 768 GB/s

Scale-Up · intra-domain
Infinity-Fabric
768 GB/s · topology: fully-connected
world_size = 4

Scale-Out · inter-domain
RoCEv2
400 Gbps per-card NIC
Which models can it run?
Quick estimates · upper bound on decode tok/s per card
TP=4 · FP8 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied (a rough formula sketch follows the table)
| Model | Params (active) | Decode tok/s/card | Bottleneck |
|---|---|---|---|
| DeepSeek V4 Pro deepseek | 49B | — | Insufficient memory |
| DeepSeek V4 Flash deepseek | 13B | 151 | Memory bandwidth |
| Mistral Small 4 mistral | 22B | 69 | Memory bandwidth |
| GLM-5 Reasoning zhipu | 32B | 57 | Memory bandwidth |
| GLM-5.1 zhipu | 32B | — | Insufficient memory |
| Qwen3.6 Plus alibaba | 35B | 37 | Memory bandwidth |
| Kimi K2.6 moonshot | 32B | — | Insufficient memory |
| MiniMax M2.7 minimax | 46B | 25 | Memory bandwidth |
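The exact calibrated formula behind this table is not published on the page. The sketch below is one plausible bandwidth-bound estimate, assuming each decode step streams the resident weights (total parameters, not just active) plus the KV cache, with the batch's tokens amortizing that read; the example weight and KV figures are hypothetical. It illustrates the shape of the estimate and the memory-fit check, not the table's exact numbers.

```python
def fits_in_hbm(resident_gb: float, tp: int = 4, hbm_gb: float = 128.0) -> bool:
    """Memory-fit check: weights plus allocated KV cache must fit the TP group's HBM.
    Rows marked 'Insufficient memory' above fail a check of this kind."""
    return resident_gb <= tp * hbm_gb

def decode_tok_s_per_card(resident_weight_gb: float,     # weights held in HBM (total, not active, params)
                          kv_gb_per_step: float = 0.0,   # KV-cache bytes read per decode step (GB)
                          mem_bw_gbs: float = 5300.0,    # per-card HBM3 bandwidth from the spec
                          tp: int = 4,                   # TP=4, as in the table header
                          batch: int = 16,
                          efficiency: float = 0.5) -> float:  # default 0.5, see headroom section
    """Bandwidth-bound decode estimate: each step streams the per-card weight shard
    plus KV cache once; the batch's tokens amortize that read."""
    per_card_gb_per_step = (resident_weight_gb + kv_gb_per_step) / tp
    steps_per_s = mem_bw_gbs * efficiency / per_card_gb_per_step
    return steps_per_s * batch / tp

# Hypothetical MoE model with ~200 GB of resident weights and ~20 GB of KV reads per step:
print(fits_in_hbm(200.0 + 20.0))                  # True: fits in 4 × 128 GB
print(round(decode_tok_s_per_card(200.0, 20.0)))  # ≈ 193 tok/s/card under these assumptions
```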
Operator-level fit · bottleneck type + upper bound for any model
Operator-level fit (per-token roofline)
Computed from each model's operator_decomposition and this card's BF16 981 TFLOPS / 5,300 GB/s · ridge point ≈ 185 FLOPs/byte (a worked sketch follows the table)
| Model | Domain | Dominant operator | AI (FLOPs/byte) | Bottleneck | tok/s upper bound |
|---|---|---|---|---|---|
| DeepSeek V4 Pro | llm | matmul | 245.5 | 🔥 Compute | 163k |
| GraphCast | scientific | graph-message-passing | 0.9 | 💾 Memory BW | 9779 |
| AlphaFold 3 | scientific | pair-bias-attention | 2.3 | 💾 Memory BW | 2938 |
| GPT-OSS | llm | matmul | 0.7 | 💾 Memory BW | 428 |
| Gemma 4 26B | llm | matmul | 0.7 | 💾 Memory BW | 318 |
| DeepSeek V4 Flash | llm | matmul | 0.8 | 💾 Memory BW | 301 |
| Mistral Small 4 | llm | matmul | 0.6 | 💾 Memory BW | 137 |
| Llama 4 Maverick | llm | matmul | 0.8 | 💾 Memory BW | 136 |
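The bottleneck column follows directly from the roofline model: an operator whose arithmetic intensity (FLOPs per byte moved) falls below the card's ridge point is memory-bandwidth-bound, and above it compute-bound. A minimal sketch of that classification and the per-token upper bound, using the BF16 peak and bandwidth quoted above; the per-model FLOPs/bytes inputs come from each model's operator_decomposition and the example values here are hypothetical.

```python
PEAK_FLOPS = 981e12          # BF16 peak, vendor-claimed
MEM_BW = 5300e9              # bytes/s, vendor-claimed
RIDGE = PEAK_FLOPS / MEM_BW  # ≈ 185 FLOPs/byte, matching the figure quoted above

def roofline(flops_per_token: float, bytes_per_token: float):
    """Classify the bottleneck and give the per-token throughput upper bound."""
    ai = flops_per_token / bytes_per_token
    bound = "compute" if ai >= RIDGE else "memory bandwidth"
    tok_s = min(PEAK_FLOPS / flops_per_token, MEM_BW / bytes_per_token)
    return ai, bound, tok_s

# Hypothetical per-token figures, for illustration only:
ai, bound, tok_s = roofline(flops_per_token=2e9, bytes_per_token=2.5e9)
print(ai, bound, f"{tok_s:.0f} tok/s")   # 0.8, memory bandwidth, ~2120 tok/s
```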
Need efficiency calibration + concurrency sweep + TCO estimation → evaluate in the calculator →
Operator support & optimization headroom
Per-operator support derived from software_support.engines + scale-up topology. Optimization headroom from measured efficiency factor.
Optimization headroom
+50 pp
moderate
No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.
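The "+50 pp" figure is presumably just the gap between the default 0.5 efficiency and the theoretical 1.0 roofline peak, expressed in percentage points. A minimal sketch of that reading; the function name is illustrative, not the site's.

```python
def optimization_headroom_pp(measured_efficiency=None) -> float:
    """Headroom in percentage points between achieved efficiency and the roofline peak.
    With no measured cases yet, fall back to the default 0.5 efficiency noted above."""
    eff = 0.5 if measured_efficiency is None else measured_efficiency
    return (1.0 - eff) * 100

print(optimization_headroom_pp())   # 50.0 -> the "+50 pp" figure above
```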
Communication (collective)
All-to-All 🟡 partial
small scale-up domain; expert-parallel needs careful sharding
AllReduce 🟢 mature
Infinity-Fabric ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🟢 mature
FA-3 on modern engine + tensor cores
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
Rotary Position Embedding 🟢 mature
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels
Closest alternative cards (by spec similarity)
Based on a weighted Euclidean distance over BF16 compute, memory capacity, memory bandwidth, and FP8 compute. For reference when selecting hardware (a sketch of the distance metric follows).
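The ranking is described only as a weighted Euclidean distance over those four spec axes; the actual weights and normalization are not published here. A minimal sketch under assumed equal weights and normalization by the reference card's values (both are assumptions), with a hypothetical comparison card for illustration.

```python
import math

# Spec vector: (BF16 TFLOPS, memory GB, memory bandwidth GB/s, FP8 TFLOPS)
MI300A = (981, 128, 5300, 1962)

def spec_distance(card, ref=MI300A, weights=(1, 1, 1, 1)) -> float:
    """Weighted Euclidean distance between spec vectors, each axis normalized by the
    reference card's value so the four axes are comparable. Equal weights and this
    normalization are assumptions, not the site's exact metric."""
    return math.sqrt(sum(w * ((c - r) / r) ** 2
                         for w, c, r in zip(weights, card, ref)))

other = (1300, 192, 5300, 2600)      # hypothetical comparison card
print(f"{spec_distance(other):.3f}") # smaller distance = closer alternative
```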
Software stack support
| Engine | Status | BF16 | FP16 | FP4 | FP8 E4M3 | FP8 E5M2 | INT4 AWQ |
|---|---|---|---|---|---|---|---|
| HanGuangAI | Unconfirmed | — | — | — | — | — | — |
| LMDeploy | Unconfirmed | — | — | — | — | — | — |
| MindIE | Unconfirmed | — | — | — | — | — | — |
| MoRI | Unconfirmed | — | — | — | — | — | — |
| SGLang | Unconfirmed | — | — | — | — | — | — |
| TensorRT-LLM (Dynamo) | Unconfirmed | — | — | — | — | — | — |
| vLLM | Official | ✓ | ✓ | — | ✓ | — | — |
Deployment cases (0)
No measured cases for this hardware yet.
Be the first contributor?
Citations
- [1] AMD MI300A APU product page (24× Zen-4 cores + 228 CDNA-3 CUs, unified memory) — https://www.amd.com/en/products/accelerators/instinct/mi300/mi300a.html · accessed 2026-04-28 · vendor-claimed
- [2] AMD CDNA 3 white paper: 228 CUs (vs 304 in MI300X) + 24× Zen-4 cores in the same package, 256 MB Infinity Cache, 8× HBM3 ⇒ 128 GB unified, 146 B transistors on TSMC 5 nm + 6 nm chiplets — https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf · accessed 2026-04-28 · vendor-claimed
⚠ MI300A is an APU (integrated CPU + GPU); used in the El Capitan supercomputer and emerging AI training systems.
⚠ The smaller GPU CU count vs MI300X (228 vs 304) is traded for the embedded Zen-4 cores and the unified memory model.
⚠ All performance figures are vendor-claimed unless tier=measured.