Cerebras Systems

Cerebras WSE-3

WAFER-SCALE · Available · Released 2024 · cerebras-wse-gen3
BF16: 62,500 TFLOP/s (vendor claim)
FP8: 125,000 TFLOP/s (vendor claim)
FP4: not supported
Memory: 44 GB (vendor claim)
Mem BW: 21,000,000 GB/s (vendor claim)
TDP: 23,000 W (vendor claim)

Full specifications

Compute

FP4 TFLOPS: not supported
FP8 TFLOPS: 125,000
BF16 TFLOPS: 62,500
FP16 TFLOPS: 62,500
INT8 TOPS: 125,000

Memory

Capacity: 44 GB
Bandwidth: 21,000,000 GB/s
Type: on-die-sram

Chip architecture 🟢 vendor floorplan

Tile count: 900,000
Process node: 5 nm
Die area: 46,225 mm²
Transistors: 4,000 B (4 trillion)
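
For scale, the wafer-level figures divide down to per-tile budgets. A minimal sketch, pure arithmetic on the vendor numbers above; the actual per-tile partitioning of SRAM and bandwidth is not published on this page:

```python
# Back-of-envelope per-tile budgets, assuming uniform distribution of the
# vendor wafer-level figures across all tiles (illustrative only).
TILES = 900_000
SRAM_GB = 44
MEM_BW_GBS = 21_000_000      # 21 PB/s aggregate on-die SRAM bandwidth
DIE_AREA_MM2 = 46_225

sram_per_tile_kb = SRAM_GB * 1e6 / TILES    # 44 GB -> ~48.9 KB per tile
bw_per_tile_gbs = MEM_BW_GBS / TILES        # ~23.3 GB/s per tile
area_per_tile_mm2 = DIE_AREA_MM2 / TILES    # ~0.051 mm² per tile

print(f"SRAM/tile ≈ {sram_per_tile_kb:.1f} KB")
print(f"BW/tile   ≈ {bw_per_tile_gbs:.1f} GB/s")
print(f"Area/tile ≈ {area_per_tile_mm2:.3f} mm²")
```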

Scale-Up (intra-node)

Protocol: SwarmX
Per-link bandwidth: 16,000 GB/s
World size: 16
Topology: full-mesh
Switch: —

Scale-Out (inter-node)

Per-card egress: 1200 Gbps
Protocol: SwarmX-IB
NIC: —
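
To get a feel for collective cost at these link speeds, below is a back-of-envelope ring all-reduce estimate using the standard 2(N−1)/N cost model and the per-link bandwidth above; real SwarmX scheduling, link latency, and the full-mesh topology are not modeled:

```python
# Idealized ring all-reduce: each rank moves 2*(N-1)/N of the payload over
# its link. Latency, congestion, and SwarmX specifics are ignored.
def ring_allreduce_seconds(payload_gb: float, world_size: int = 16,
                           link_gbs: float = 16_000) -> float:
    traffic_gb = 2 * (world_size - 1) / world_size * payload_gb
    return traffic_gb / link_gbs

# Example: a 1 GB buffer all-reduced across the 16-card domain.
t = ring_allreduce_seconds(1.0)
print(f"~{t * 1e6:.0f} µs per 1 GB all-reduce")   # ≈ 117 µs
```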

Topology diagrams

Topology · 16-card scale-up domain
Die-level architecture
[Die diagram: 900,000 Tiles (capped at 256 for rendering), darker blocks = tensor/matrix engines; L1$/register file per Tile; L2/shared cache · NoC; on-die SRAM 44 GB @ 21,000 TB/s]
62,500 TFLOPS BF16 · 125,000 TFLOPS FP8 · 44 GB on-die-sram @ 21,000 TB/s · 23,000 W TDP

🟢 vendor floorplan · 900,000 Tiles · 44 GB on-die SRAM · 5 nm · 4,000 B transistors · 46,225 mm²

🌐 wafer-scale (no host-device transfer) 📦 on-die SRAM only (capacity ≪ HBM, bandwidth ≫ HBM)

Cluster topology · SwarmX @ 16,000 GB/s
[Cluster diagram: ToR (SwarmX) linking Node 1 and Node 2 · 2 nodes × 8 cards = 16 cards · intra-node 16,000 GB/s · inter-node 1200 Gbps RoCE/IB]
Scale-Up · intra-domain
SwarmX
16,000 GB/s · topology: full-mesh
world_size = 16
Scale-Out · cross-domain
SwarmX-IB
1200 Gbps/card NIC

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · FP8 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied · a sketch of the underlying bandwidth bound follows the table

Model · Params (active) · Decode tok/s/card · Bottleneck
DeepSeek V4 Pro (deepseek) · 49B · — · out of memory
DeepSeek V4 Flash (deepseek) · 13B · 597,166 · memory bandwidth
Mistral Small 4 (mistral) · 22B · 272,102 · memory bandwidth
GLM-5 Reasoning (zhipu) · 32B · 225,221 · memory bandwidth
GLM-5.1 (zhipu) · 32B · — · out of memory
Qwen3.6 Plus (alibaba) · 35B · — · out of memory
Kimi K2.6 (moonshot) · 32B · — · out of memory
MiniMax M2.7 (minimax) · 46B · — · out of memory
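
The table values come from this page's calculator; the sketch below shows only the bandwidth-bound starting point they share. The 13B example and the single-pass weight-streaming assumption are illustrative; the calculator's exact TP/batch/KV-cache accounting is not reproduced here:

```python
# Bandwidth-roofline decode bound: each generated token streams the active
# weights once, so tok/s <= efficiency * mem_bw / weight_bytes. KV-cache
# and attention traffic (which the calculator does model) are ignored.
MEM_BW_GBS = 21_000_000      # aggregate on-die SRAM bandwidth
EFFICIENCY = 0.5             # default calibration used on this page

def decode_tok_s_upper(active_params_billions: float,
                       bytes_per_param: float = 1.0) -> float:  # FP8 = 1 B
    weight_gb = active_params_billions * bytes_per_param
    return EFFICIENCY * MEM_BW_GBS / weight_gb

# e.g. a 13B-active FP8 model (illustrative; same order as the table row):
print(f"{decode_tok_s_upper(13):,.0f} tok/s")     # ≈ 807,692
```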

Operator-level fit · bottleneck type + upper bound for any model

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 62,500 TFLOPS / 21,000,000 GB/s · ridge point ≈ 3 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied · see the sketch after the table
Model · Domain · Dominant operator · AI (FLOPs/byte) · Bottleneck · tok/s upper bound
GraphCast · scientific · graph-message-passing · 0.9 · 💾 memory bandwidth · 38,745k
AlphaFold 3 · scientific · pair-bias-attention · 2.3 · 💾 memory bandwidth · 11,641k
DeepSeek V4 Pro · llm · matmul · 245.5 · 🔥 compute · 10,391k
GPT-OSS · llm · matmul · 0.7 · 💾 memory bandwidth · 1,698k
Gemma 4 26B · llm · matmul · 0.7 · 💾 memory bandwidth · 1,262k
DeepSeek V4 Flash · llm · matmul · 0.8 · 💾 memory bandwidth · 1,194k
Mistral Small 4 · llm · matmul · 0.6 · 💾 memory bandwidth · 544k
Llama 4 Maverick · llm · matmul · 0.8 · 💾 memory bandwidth · 538k
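
A minimal sketch of the per-token roofline formula above, reproducing the ridge point and the min() of the two roofs; the per-token FLOPs/bytes figures would come from each model's operator_decomposition and are hypothetical here:

```python
# Per-token roofline: achievable rate = min(compute roof, bandwidth roof).
# No efficiency factor applied, matching the table above.
PEAK_FLOPS = 62_500e12       # BF16
PEAK_BYTES = 21_000_000e9    # 21 PB/s

print(f"ridge point ≈ {PEAK_FLOPS / PEAK_BYTES:.1f} FLOPs/byte")  # ≈ 3.0

def tok_s_upper(flops_per_token: float, bytes_per_token: float) -> float:
    compute_roof = PEAK_FLOPS / flops_per_token
    bandwidth_roof = PEAK_BYTES / bytes_per_token
    return min(compute_roof, bandwidth_roof)

# Hypothetical operator at AI = 0.8 FLOPs/byte (below the ridge, so
# bandwidth-bound): 26 GFLOPs and 32.5 GB moved per token.
print(f"{tok_s_upper(26e9, 32.5e9):,.0f} tok/s")  # ≈ 646,154
```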
Need efficiency calibration + concurrency sweeps + TCO estimates → evaluate in the calculator →

Operator support & optimization headroom

Operator support & headroom

Per-operator support is derived from software_support.engines plus the scale-up topology. Optimization headroom is derived from the measured efficiency factor.

Optimization headroom
+50 pp
moderate

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.
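
A small sketch of how the headroom badge is derived, assuming headroom is measured in percentage points against the theoretical roof (1.0); the 0.62 "first measurement" is a made-up illustration:

```python
# Headroom in percentage points: gap between the efficiency in use and the
# theoretical roof (1.0). With no measured cases, the default 0.5 gives
# the "+50 pp" badge shown above.
def headroom_pp(efficiency: float) -> float:
    return (1.0 - efficiency) * 100

print(f"{headroom_pp(0.5):.0f} pp")   # 50 pp (default, no cases yet)
print(f"{headroom_pp(0.62):.0f} pp")  # 38 pp, hypothetical first measurement
```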

Communication (collective)
All-to-All 🟢 mature
all-to-all via SwarmX world_size=16
AllReduce 🟢 mature
SwarmX ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🟢 mature
FA-3 on modern engine + tensor cores
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software stack support

Engine · Status · BF16 · FP16 · FP4 · FP8 E4M3 · FP8 E5M2 · INT4 AWQ
HanGuangAI · unconfirmed
LMDeploy · unconfirmed
MindIE · unconfirmed
MoRI · unconfirmed
SGLang · unconfirmed
TensorRT-LLM (Dynamo) · unconfirmed
vLLM · community

Deployment cases (0)

No measured cases for this hardware yet. Be the first to contribute one?

Citations

[1] Cerebras WSE-3 product page: 4 trillion transistors, 900,000 cores, 44 GB on-chip SRAM, 21 PB/s memory bandwidth, 5nm TSMC, 46,225 mm² die — https://www.cerebras.ai/product-chip/ · accessed 2026-04-29 · vendor claim
[2] FP64 estimate from CS-3 SC23 paper showing ~7.8 TFLOPS sustained on HPL-AI workloads (derived from system-level measurements) — https://www.cerebras.ai/blog/wse3-architecture · accessed 2026-04-29 · community estimate
⚠ Wafer-scale architecture: throughput scales nonlinearly with model size because there is no host-device transfer.
⚠ Memory bandwidth quoted is aggregate on-die SRAM bandwidth, not HBM-equivalent.
⚠ World-size for scale-up is at the CS-3 system level (1 system = 1 wafer); SwarmX clusters connect multiple systems.