
NVIDIA B200 SXM 180GB

SXM · Available · Released 2024 · blackwell-gen1
BF16: 2250 TFLOP/s (vendor claimed)
FP8: 4500 TFLOP/s (vendor claimed)
FP4: 9000 TFLOP/s (vendor claimed)
Memory: 180 GB (vendor claimed)
Mem BW: 8000 GB/s (vendor claimed)
TDP: 1000 W (vendor claimed)

Full specifications

Compute

FP4: 9000 TFLOPS
FP8: 4500 TFLOPS
BF16: 2250 TFLOPS
FP16: 2250 TFLOPS
INT8: 4500 TOPS

Memory

Capacity: 180 GB
Bandwidth: 8000 GB/s
Type: HBM3e

Chip architecture 🟢 vendor floorplan

SM count: 160
Tensor cores / SM: 4
L2 cache: 100 MB
HBM stacks: 8
Process: 4 nm
Die area: 1600 mm²
Transistors: 208 B
PCIe: Gen 5 ×16

Scale-Up (intra-node)

Protocol: NVLink-5.0
Bandwidth (per GPU, aggregate): 1800 GB/s
World size: 8
Topology: switched
Switch: nvswitch-gen4

Scale-Out (inter-node)

Per-card egress: 800 Gbps
Protocol: InfiniBand-XDR
NIC: ConnectX-8
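
For scale, a minimal sketch comparing the three bandwidth tiers a card's data can cross (HBM, NVLink scale-up, InfiniBand scale-out), using only the vendor figures above and assuming ideal links with no protocol overhead:

```python
# Bandwidth hierarchy per card, from the vendor figures above.
# Assumes ideal links and no protocol overhead -- a rough orders-of-magnitude comparison.

hbm_gb_s       = 8000.0        # HBM3e memory bandwidth, GB/s
nvlink_gb_s    = 1800.0        # NVLink-5.0 scale-up bandwidth, GB/s per GPU
scale_out_gb_s = 800.0 / 8     # 800 Gbps InfiniBand-XDR egress -> 100 GB/s per card

print(f"HBM : NVLink : scale-out ≈ "
      f"{hbm_gb_s / scale_out_gb_s:.0f} : {nvlink_gb_s / scale_out_gb_s:.0f} : 1")
# -> 80 : 18 : 1  (each hop outward in the topology costs roughly an order of magnitude)
```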

Topology diagrams

Topology · 8-card scale-up domain

[Die-level architecture diagram: NVIDIA B200 SXM 180GB surrounded by 8× HBM stacks; L2 / shared cache + NoC; L1$ / register file per SM; 160 SMs, darker block = tensor/matrix engine — 2250 TFLOPS BF16 · 4500 FP8 · 180 GB HBM3e @ 8.0 TB/s · 1000 W TDP]
🟢 vendor floorplan: 160 SMs · 8× HBM · 100 MB L2 · 4 nm · 208 B transistors · 1600 mm²

[Cluster topology diagram: GPU 0–7 (180 GB each) in one 8-card scale-up domain, all-to-all via nvswitch-gen4 over NVLink-5.0 at 1800 GB/s; scale-out at 800 Gbps per card]
Scale-Up (intra-domain): NVLink-5.0 · 1800 GB/s · topology: switched · world_size = 8
Scale-Out (cross-domain): InfiniBand-XDR · 800 Gbps per card · NIC: ConnectX-8

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · FP4 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied (a simplified estimation sketch follows the table)

Adjust in the calculator →
Model · Vendor · Params (active) · Decode tok/s/card · Bottleneck
DeepSeek V4 Pro · deepseek · 49B · 163,265 · memory bandwidth
DeepSeek V4 Flash · deepseek · 13B · 227 · memory bandwidth
Mistral Small 4 · mistral · 22B · 104 · memory bandwidth
GLM-5 Reasoning · zhipu · 32B · 86 · memory bandwidth
GLM-5.1 · zhipu · 32B · 58 · memory bandwidth
Qwen3.6 Plus · alibaba · 35B · 56 · memory bandwidth
Kimi K2.6 · moonshot · 32B · 48 · memory bandwidth
MiniMax M2.7 · minimax · 46B · 38 · memory bandwidth
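
As a rough guide to where numbers like these come from, here is a minimal sketch of a memory-bandwidth-bound decode estimate. It is a simplification (weight reads only, sharded across TP, with the default 0.5 efficiency applied); the table above is produced from a fuller operator decomposition that also counts KV-cache and activation traffic, so its figures will not match this formula.

```python
# Simplified decode roofline: tok/s/card <= memory bandwidth / bytes moved per token per card.
# Counts only weight reads (sharded across TP); KV-cache and activation traffic are ignored,
# so real estimates (like the table above) come out lower.

def decode_tok_s_per_card(active_params_b: float,
                          bytes_per_param: float = 0.5,   # FP4
                          mem_bw_gb_s: float = 8000.0,    # B200 HBM3e, vendor-claimed
                          tp: int = 8,
                          efficiency: float = 0.5) -> float:
    bytes_per_token_per_card = active_params_b * 1e9 * bytes_per_param / tp
    roofline = mem_bw_gb_s * 1e9 / bytes_per_token_per_card
    return roofline * efficiency

# Example with a hypothetical 13B-active MoE model at FP4, TP=8:
print(f"{decode_tok_s_per_card(13):,.0f} tok/s/card (weights-only upper bound)")
```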

Operator-level fit · bottleneck type + upper bound for any model

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 2,250 TFLOPS / 8,000 GB/s · ridge point ≈ 281 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied (a roofline sketch follows the table)
Model · Domain · Dominant operator · AI (FLOPs/byte) · Bottleneck · tok/s upper bound
DeepSeek V4 Pro · llm · matmul · 245.5 · 💾 memory bandwidth · 327k
GraphCast · scientific · graph-message-passing · 0.9 · 💾 memory bandwidth · 15k
AlphaFold 3 · scientific · pair-bias-attention · 2.3 · 💾 memory bandwidth · 4435
GPT-OSS · llm · matmul · 0.7 · 💾 memory bandwidth · 647
Gemma 4 26B · llm · matmul · 0.7 · 💾 memory bandwidth · 481
DeepSeek V4 Flash · llm · matmul · 0.8 · 💾 memory bandwidth · 455
Mistral Small 4 · llm · matmul · 0.6 · 💾 memory bandwidth · 207
Llama 4 Maverick · llm · matmul · 0.8 · 💾 memory bandwidth · 205
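
A minimal sketch of the per-token roofline behind this table, assuming only the card figures quoted above (2,250 BF16 TFLOPS, 8,000 GB/s); the per-operator FLOPs and bytes below are illustrative placeholders, not the site's operator_decomposition data:

```python
# Per-token roofline: the dominant operator is compute-bound if its arithmetic
# intensity (AI, FLOPs per byte) exceeds the ridge point, otherwise bandwidth-bound.
# Upper bound tok/s = min(compute roof, memory-bandwidth roof). No efficiency applied.

PEAK_FLOP_S = 2250e12                 # BF16, vendor-claimed
PEAK_BYTES_S = 8000e9                 # HBM3e bandwidth
RIDGE = PEAK_FLOP_S / PEAK_BYTES_S    # ≈ 281 FLOPs/byte

def roofline_bound(flops_per_token: float, bytes_per_token: float) -> tuple[str, float]:
    ai = flops_per_token / bytes_per_token
    compute_roof = PEAK_FLOP_S / flops_per_token
    bandwidth_roof = PEAK_BYTES_S / bytes_per_token
    bottleneck = "compute" if ai >= RIDGE else "memory bandwidth"
    return bottleneck, min(compute_roof, bandwidth_roof)

print(f"ridge point ≈ {RIDGE:.0f} FLOPs/byte")
# Illustrative operator with AI = 0.8 (far below the ridge, hence bandwidth-bound):
print(roofline_bound(flops_per_token=8e9, bytes_per_token=1e10))
```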
Need efficiency calibration + concurrency sweeps + TCO estimates? Evaluate in the calculator →

Operator support & optimization headroom


Per-operator support is derived from software_support.engines and the scale-up topology. Optimization headroom comes from the measured efficiency factor.

Optimization headroom: +50 pp (moderate)

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.
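
For reference, a minimal sketch of how the headroom figure can be derived, under the assumption that efficiency is measured throughput divided by the roofline bound and that headroom is counted up to an ideal efficiency of 1.0 (both assumptions; no measured cases exist for this card yet):

```python
# Headroom sketch (assumption: efficiency = measured tok/s / roofline tok/s,
# with an ideal ceiling of 1.0). With no measured cases the default 0.5 applies,
# giving (1.0 - 0.5) * 100 = 50 percentage points of nominal headroom.

def headroom_pp(measured_tok_s: float | None, roofline_tok_s: float,
                default_efficiency: float = 0.5, ceiling: float = 1.0) -> float:
    eff = default_efficiency if measured_tok_s is None else measured_tok_s / roofline_tok_s
    return (ceiling - eff) * 100.0

print(f"{headroom_pp(None, roofline_tok_s=455):.0f} pp")   # -> 50 pp (default, no cases yet)
```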

Communication (collective)
All-to-All 🟢 mature
all-to-all via NVLink-5.0 world_size=8
AllReduce 🟢 mature
NVLink-5.0 ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🟢 mature
FA-3 on modern engine + tensor cores
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software stack support

Engine · Status · BF16 / FP16 / FP4 / FP8 E4M3 / FP8 E5M2 / INT4 AWQ
HanGuangAI · Unconfirmed
LMDeploy · Unconfirmed
MindIE · Unconfirmed
MoRI · Unconfirmed
SGLang · Official
TensorRT-LLM (Dynamo) · Official
vLLM · Official

Deployment cases (0)

No measured cases for this hardware yet. Be the first contributor?

Citations

  [1] NVIDIA Blackwell B200 product specifications — https://www.nvidia.com/en-us/data-center/dgx-b200/ · accessed 2026-04-28 · vendor claimed
  [2] Blackwell B200 architecture: dual-die package (2× 104 SMs ⇒ 160 enabled), 100 MB L2, 8× HBM3e stacks (180 GB), 208 B transistors @ TSMC 4NP — https://resources.nvidia.com/en-us-blackwell-architecture · accessed 2026-04-28 · vendor claimed
⚠ All performance figures are vendor-claimed unless tier=measured.