Biren Technology (壁仞科技) · Domestic (China)

Biren BR100

OAM form factor · discontinued · released 2022 · biren-gen1
BF16:   256 TFLOP/s (vendor-claimed)
FP8:    not supported
FP4:    not supported
Memory: 64 GB (vendor-claimed)
Mem BW: 2300 GB/s (vendor-claimed)
TDP:    550 W (vendor-claimed)

Full specifications

Compute

FP4 TFLOPS:  not supported
FP8 TFLOPS:  not supported
BF16 TFLOPS: 256
FP16 TFLOPS: 256
INT8 TOPS:   1024

Memory

Capacity:  64 GB
Bandwidth: 2300 GB/s
Type:      HBM2e

Die architecture 🟢 vendor floorplan

SM count:    64
HBM stacks:  4
Process:     7 nm
Transistors: 77 B

Scale-Up (intra-node)

Protocol:           BLink
Per-link bandwidth: 512 GB/s
World size:         8
Topology:           switched
Switch:             —

Scale-Out (inter-node)

Per-card egress: 200 Gbps
Protocol:        RoCEv2
NIC:             —

Topology diagrams

Topology · 8-card scale-up domain

[Diagram: die-level architecture] 4× HBM stacks surround the BR100 die; L2/shared cache sits on the NoC; each SM has its own L1$ and register file; 64 SMs, with the darker blocks marking the tensor/matrix engines. 256 TFLOPS BF16 · 64 GB HBM2e @ 2.3 TB/s · 550 W TDP.

🟢 vendor floorplan · 64 SMs · 4× HBM · 7 nm · 77 B transistors

[Diagram: cluster topology · BLink @ 512 GB/s] BLink switch at 512 GB/s per link, all-to-all across 8 GPUs (64 GB each) · 8 cards, switched topology · scale-out: 200 Gbps/card.

Scale-Up · intra-domain: BLink · 512 GB/s · topology: switched · world_size = 8
Scale-Out · cross-domain: RoCEv2 · 200 Gbps/card NIC

A bandwidth-only cost sketch for collectives on this domain follows.
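To get a rough sense of what the vendor-claimed link rate implies, here is a minimal sketch of the per-step cost of a ring all-reduce over the 8-card BLink domain. It assumes the textbook ring schedule and a bandwidth-only model (latency and protocol overhead ignored); real BLink collectives may route through the switch's all-to-all path instead, and the 16 MB message size in the example is hypothetical.

```python
# Bandwidth-only lower bound for a ring all-reduce on the BLink domain.
# Assumes the textbook ring schedule: each card forwards the payload
# 2*(N-1)/N times over its 512 GB/s link (vendor-claimed figure).

def ring_allreduce_seconds(message_bytes: float,
                           world_size: int = 8,
                           link_bw: float = 512e9) -> float:
    """Seconds per all-reduce, ignoring latency and protocol overhead."""
    traversals = 2 * (world_size - 1) / world_size
    return traversals * message_bytes / link_bw

# Example: all-reducing a hypothetical 16 MB tensor-parallel activation slice.
t = ring_allreduce_seconds(16e6)
print(f"{t * 1e6:.1f} us per all-reduce")   # ~54.7 us, bandwidth-only bound
```

At decode batch sizes this per-step cost is typically small next to the time spent streaming weights from HBM, which is consistent with the memory-bandwidth bottlenecks in the quick estimates below.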

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · FP16 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied (a simplified sketch follows the table)

Adjust in calculator →
Model             | Vendor   | Params (active) | Decode tok/s/card | Bottleneck
DeepSeek V4 Pro   | deepseek | 49B             | —                 | insufficient VRAM
DeepSeek V4 Flash | deepseek | 13B             | —                 | insufficient VRAM
Mistral Small 4   | mistral  | 22B             | 30                | memory bandwidth
GLM-5 Reasoning   | zhipu    | 32B             | 25                | memory bandwidth
GLM-5.1           | zhipu    | 32B             | —                 | insufficient VRAM
Qwen3.6 Plus      | alibaba  | 35B             | —                 | insufficient VRAM
Kimi K2.6         | moonshot | 32B             | —                 | insufficient VRAM
MiniMax M2.7      | minimax  | 46B             | —                 | insufficient VRAM
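The sketch below reproduces the shape of this estimate: a capacity check (do the weights fit in the 8-card, 512 GB scale-up domain?) followed by a memory-bandwidth ceiling on decode throughput. KV-cache traffic, activations, batching effects, and the per-case efficiency calibration are all simplified away, so it will not match the calibrated numbers above; the parameter counts in the example call are illustrative, not any real model's spec.

```python
# Simplified quick-estimate: VRAM fit check, then a bandwidth-bound
# decode ceiling. All constants are the vendor-claimed BR100 figures.

GB = 1e9
CARD_MEM_GB, CARDS, MEM_BW = 64, 8, 2300e9   # 64 GB/card, 8 cards, 2.3 TB/s
EFFICIENCY = 0.5                             # site default until cases land

def quick_decode_estimate(total_params: float, active_params: float,
                          bytes_per_param: int = 2,   # FP16 weights
                          tp: int = CARDS):
    weights_gb = total_params * bytes_per_param / GB
    if weights_gb > CARD_MEM_GB * CARDS:     # no KV-cache headroom modeled
        return "insufficient VRAM"
    # Each decode step streams this card's 1/tp weight shard from HBM:
    bytes_per_token = active_params * bytes_per_param / tp
    return EFFICIENCY * MEM_BW / bytes_per_token   # tok/s/card, BW-bound

# Hypothetical dense 22B model, all parameters active:
print(quick_decode_estimate(total_params=22e9, active_params=22e9))  # ~209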

Operator-level fit · bottleneck type + upper bound for any model

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition plus this card's BF16 256 TFLOPS and 2,300 GB/s · ridge point ≈ 111 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied
Model             | Domain     | Dominant operator     | AI (FLOPs/B) | Bottleneck          | tok/s upper bound
DeepSeek V4 Pro   | llm        | matmul                | 245.5        | 🔥 compute          | 43k
GraphCast         | scientific | graph-message-passing | 0.9          | 💾 memory bandwidth | 4244
AlphaFold 3       | scientific | pair-bias-attention   | 2.3          | 💾 memory bandwidth | 1275
GPT-OSS           | llm        | matmul                | 0.7          | 💾 memory bandwidth | 186
Gemma 4 26B       | llm        | matmul                | 0.7          | 💾 memory bandwidth | 138
DeepSeek V4 Flash | llm        | matmul                | 0.8          | 💾 memory bandwidth | 131
Mistral Small 4   | llm        | matmul                | 0.6          | 💾 memory bandwidth | 60
Llama 4 Maverick  | llm        | matmul                | 0.8          | 💾 memory bandwidth | 59
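The table is a direct application of the roofline model; a minimal sketch follows. Peak compute and bandwidth are the vendor-claimed card figures; the per-token FLOP and byte counts come from each model's operator_decomposition, so the values in the example call are placeholders only.

```python
# Per-token roofline: arithmetic intensity vs. the card's ridge point
# decides the bottleneck; the tok/s upper bound is the lower of the
# compute roof and the memory-bandwidth roof. Efficiency NOT applied.

PEAK_BF16 = 256e12              # FLOP/s, vendor-claimed
MEM_BW    = 2300e9              # B/s, vendor-claimed
RIDGE     = PEAK_BF16 / MEM_BW  # ~111 FLOPs/byte

def roofline(flops_per_token: float, bytes_per_token: float):
    ai = flops_per_token / bytes_per_token      # arithmetic intensity
    bottleneck = "compute" if ai >= RIDGE else "memory bandwidth"
    tok_s = min(PEAK_BF16 / flops_per_token,    # compute roof
                MEM_BW / bytes_per_token)       # bandwidth roof
    return ai, bottleneck, tok_s

# Placeholder per-token totals, not a real operator_decomposition:
ai, bn, tok_s = roofline(flops_per_token=6e9, bytes_per_token=8e9)
print(f"AI={ai:.2f} FLOPs/B · {bn} · upper bound {tok_s:,.0f} tok/s")
```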
For efficiency calibration + concurrency sweeps + TCO estimates → evaluate in calculator →

Operator support & optimization headroom

Per-operator support is derived from software_support.engines plus the scale-up topology; optimization headroom comes from the measured efficiency factor.

Optimization headroom: +50 pp (moderate)

No cases yet — using the default 0.5 efficiency. Real headroom is unknown until the first measurement lands. (A sketch of the mapping follows.)
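For concreteness, a small sketch of how a headroom figure like "+50 pp (moderate)" can fall out of an efficiency factor: the percentage-point gap to the roofline ceiling is straightforward, but the severity thresholds here are assumptions for illustration, not the catalog's actual binning.

```python
# Headroom = gap between the measured (or default) efficiency factor and
# the roofline ceiling (1.0), in percentage points. The severity labels
# below are assumed thresholds, not the catalog's real binning rules.

def headroom_pp(efficiency: float = 0.5) -> tuple[float, str]:
    pp = (1.0 - efficiency) * 100   # percentage points left on the table
    label = ("low" if pp < 25 else
             "moderate" if pp < 60 else
             "high")
    return pp, label

print(headroom_pp())   # (50.0, 'moderate') with the default 0.5 efficiency
```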

Communication (collective)
  All-to-All 🟢 mature · all-to-all via BLink, world_size=8
  AllReduce 🟢 mature · BLink ring all-reduce

Attention
  Multi-Head Attention 🟡 partial · no production attention engine
  FlashAttention-3 🔴 gap · no FA-3 path; falls back to FA-2 / vanilla SDPA

Matrix multiply (GEMM)
  Matrix Multiplication 🟢 mature · GEMM supported on all inference engines

MoE routing
  MoE Routing 🔴 gap · no MoE-aware engine

Normalization
  RMSNorm 🟢 mature · fused into engine kernels

Embedding
  fused into engine kernels

Activation
  SiLU / Swish 🟢 mature · fused into engine kernels
  Softmax 🟢 mature · fused into engine kernels

Software stack support

Engine status (per-dtype support for BF16 / FP16 / FP4 / FP8 E4M3 / FP8 E5M2 / INT4 AWQ not yet recorded):

HanGuangAI             unconfirmed
LMDeploy               unconfirmed
MindIE                 unconfirmed
MoRI                   unconfirmed
SGLang                 unconfirmed
TensorRT-LLM (Dynamo)  unconfirmed
vLLM                   unconfirmed

Deployment case studies (0)

No measured cases for this hardware yet. Be the first contributor?

Citations

  [1] Biren BR100 launch announcement (export-control affected) — https://www.birentech.com/ · accessed 2026-04-28 · vendor-claimed
  [2] BR100: 64 SPCs (Streaming Processor Clusters), 4× HBM2e ⇒ 64 GB, 77B transistors @ TSMC 7nm chiplet — https://www.birentech.com/ · accessed 2026-04-28 · community estimate
⚠ BR100 affected by US export controls; production discontinued.
⚠ All performance figures are vendor-claimed unless tier=measured.