AWS Annapurna Labs

AWS Inferentia 2

Proprietary · In production · Released 2023 · EC2 Inf2
BF16: 190 TFLOP/s · vendor claim
FP8: unsupported
FP4: unsupported
Memory: 32 GB · vendor claim
Mem BW: 820 GB/s · vendor claim
TDP: 175 W · vendor claim

Full specs

Compute

FP4 TFLOPS: unsupported
FP8 TFLOPS: unsupported
BF16 TFLOPS: 190
FP16 TFLOPS: 190
INT8 TOPS: 380

Memory

Capacity: 32 GB
Bandwidth: 820 GB/s
Type: HBM2e
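
From the compute and memory figures above, the card's roofline ridge point follows directly; the operator-level fit section below cites it as ≈ 232 FLOPs/byte. A minimal sketch using only the vendor-claimed numbers:

```python
# Roofline ridge point for Inferentia 2, from the vendor-claimed specs above.
# Operators with arithmetic intensity below the ridge are memory-bandwidth-bound;
# above it, compute-bound.
PEAK_BF16_FLOPS = 190e12   # BF16 peak, vendor claim
MEM_BW_BYTES_S = 820e9     # HBM2e bandwidth, vendor claim

ridge = PEAK_BF16_FLOPS / MEM_BW_BYTES_S
print(f"ridge point = {ridge:.0f} FLOPs/byte")  # ~232
```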

Die architecture 🟢 vendor floorplan

XPU count: 2
HBM stacks: 2
Process: 7 nm

Scale-Up (intra-node)

Protocol: NeuronLink
Per-link BW: 384 GB/s
World size: 12
Topology: ring
Switch: —
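
For the ring above, a bandwidth-optimal all-reduce moves 2(n-1)/n of the payload per rank over the NeuronLink links. A rough lower-bound sketch, assuming the full 384 GB/s per-link bandwidth is usable and ignoring startup latency:

```python
def ring_allreduce_seconds(payload_bytes: float,
                           world_size: int = 12,     # NeuronLink domain above
                           link_bw: float = 384e9):  # bytes/s, vendor claim
    """Lower bound for a bandwidth-optimal ring all-reduce: each rank
    sends/receives 2*(n-1)/n of the payload; latency terms are ignored."""
    traffic = 2 * (world_size - 1) / world_size * payload_bytes
    return traffic / link_bw

# e.g. all-reducing 64 MiB of BF16 activations across the 12-card ring:
print(f"{ring_allreduce_seconds(64 * 2**20) * 1e6:.0f} us")  # ~320 us
```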

Scale-Out (inter-node)

Per-card NIC: 100 Gbps
Protocol: EFA
NIC: —

Topology

[Figure: Die-level architecture · 🟢 vendor floorplan. 2 XPUs (darker block = tensor/matrix engine), per-XPU L1$/register file, shared L2 cache + NoC, 2× HBM stacks, 7 nm. 190 TFLOPS BF16 · 32 GB HBM2e @ 0.8 TB/s · 175 W TDP.]

[Figure: Cluster topology · NeuronLink @ 384 GB/s. Scale-up (intra-domain): NeuronLink ring, 384 GB/s, world_size = 12. Scale-out (inter-domain): EFA, 100 Gbps NIC per card.]

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · BF16 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied

Model · Params (active) · Decode tok/s/card · Bottleneck
DeepSeek V4 Pro (deepseek) · 49B · — · out of memory
DeepSeek V4 Flash (deepseek) · 13B · — · out of memory
Mistral Small 4 (mistral) · 22B · — · out of memory
GLM-5 Reasoning (zhipu) · 32B · 9 · memory bandwidth
GLM-5.1 (zhipu) · 32B · — · out of memory
Qwen3.6 Plus (alibaba) · 35B · — · out of memory
Kimi K2.6 (moonshot) · 32B · — · out of memory
MiniMax M2.7 (minimax) · 46B · — · out of memory
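
The "out of memory" rows fall out of a simple capacity check: at TP=8 the domain offers 8 × 32 GB = 256 GB, and MoE checkpoints are sized by their total parameters, not the active count shown above. A minimal sketch of that check; the 200B total-parameter figure and the KV/overhead budgets below are illustrative placeholders, not numbers from this table:

```python
CARD_HBM_GB = 32  # HBM capacity per Inferentia 2 card, vendor claim

def fits_in_memory(total_params_billion: float, tp: int = 8,
                   dtype_bytes: int = 2,            # BF16 weights
                   kv_cache_gb: float = 8.0,        # illustrative KV-cache budget
                   overhead: float = 1.1) -> bool:  # runtime/activation slack
    """Crude capacity check: weights + KV cache vs. the TP domain's HBM.
    MoE models are sized by TOTAL parameters, not the active count."""
    weights_gb = total_params_billion * dtype_bytes  # 1e9 params * bytes -> GB
    return (weights_gb + kv_cache_gb) * overhead <= tp * CARD_HBM_GB

# Hypothetical MoE with 200B total parameters at TP=8 (256 GB of HBM):
print(fits_in_memory(total_params_billion=200))  # False -> "out of memory"
```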

Operator-level fit · per-model bottleneck + upper bound (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 190 TFLOPS / 820 GB/s · ridge point ≈ 232 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied
Model · Domain · Dominant operator · AI (FLOPs/byte) · Bottleneck · tok/s upper bound
DeepSeek V4 Pro · llm · matmul · 245.5 · 🔥 compute · 32k
GraphCast · scientific · graph-message-passing · 0.9 · 💾 memory bandwidth · 1513
AlphaFold 3 · scientific · pair-bias-attention · 2.3 · 💾 memory bandwidth · 455
GPT-OSS · llm · matmul · 0.7 · 💾 memory bandwidth · 66
Gemma 4 26B · llm · matmul · 0.7 · 💾 memory bandwidth · 49
DeepSeek V4 Flash · llm · matmul · 0.8 · 💾 memory bandwidth · 47
Mistral Small 4 · llm · matmul · 0.6 · 💾 memory bandwidth · 21
Llama 4 Maverick · llm · matmul · 0.8 · 💾 memory bandwidth · 21
For efficiency calibration, concurrency sweeps, and TCO estimates, evaluate in the calculator →
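
The bottleneck column and tok/s upper bound follow from the standard roofline: achievable FLOP/s = min(peak compute, AI × bandwidth), and tok/s = achievable FLOP/s ÷ FLOPs per token. A sketch under those definitions; the ~6 GFLOPs/token decode cost is a hypothetical input chosen to reproduce the 32k row, not a figure from the table:

```python
PEAK_FLOPS = 190e12  # BF16 peak, vendor claim
MEM_BW = 820e9       # bytes/s, vendor claim

def roofline_tok_s(ai_flops_per_byte: float, flops_per_token: float):
    """Bottleneck class and decode tok/s upper bound for one operator mix;
    no efficiency factor is applied, matching the table above."""
    achievable = min(PEAK_FLOPS, ai_flops_per_byte * MEM_BW)
    kind = "compute" if achievable >= PEAK_FLOPS else "memory bandwidth"
    return kind, achievable / flops_per_token

# DeepSeek V4 Pro row: AI = 245.5 FLOPs/byte sits above the ~232 ridge,
# so it is compute-bound; ~6 GFLOPs/token (hypothetical) yields ~32k tok/s:
print(roofline_tok_s(245.5, 6e9))  # ('compute', ~31667)
```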

Operator support & optimization headroom

Per-operator support is derived from software_support.engines plus the scale-up topology; optimization headroom comes from the measured efficiency factor.

Optimization headroom
+50 pp
moderate

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.

Communication (collective)
All-to-All 🟢 mature
all-to-all via NeuronLink world_size=12
AllReduce 🟢 mature
NeuronLink ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🔴 gap
No FA-3 path; falls back to FA-2 / vanilla SDPA
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software-stack support

Engine · Status · BF16 · FP16 · FP4 · FP8 E4M3 · FP8 E5M2 · INT4 AWQ
HanGuangAI unconfirmed
LMDeploy unconfirmed
MindIE unconfirmed
MoRI unconfirmed
SGLang unconfirmed
TensorRT-LLM (Dynamo) unconfirmed
vLLM community

Existing deployment cases (0)

No measured cases yet for this card. Be the first contributor?

Citations

[1] AWS Inferentia 2 product page · https://aws.amazon.com/ai/machine-learning/inferentia/ · accessed 2026-04-28 · vendor claim
[2] Inferentia 2: 2 NeuronCore-v2 engines (each with tensor + scalar + vector + GPSIMD engines), 2× HBM2e ⇒ 32 GB; TSMC 7 nm-class · https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-hardware/inferentia2.html · accessed 2026-04-28 · vendor claim
⚠ Inferentia2 is only available via EC2 Inf2 instances.