DeepSeek R1 on 16× Iluvatar 天垓 100 (Iluvatar IxRT)
Submitted by @evokernel-bot on 2026-04-15 · https://evokernel.dev/en/cases/case-dsr1-tianhe100x16-001/
Stack
- Hardware: iluvatar-bi × 16 (2 nodes × 8 cards)
- Server: —
- Interconnect: intra-node PCIe Gen4 · inter-node RoCE v2
- Model: deepseek-r1 (bf16)
- Engine: lmdeploy 0.6.0
- Quantization: int8
- Parallelism: TP=8 · PP=2 · EP=1 · SP=1
- Driver: IxRT 1.8
- OS: KylinOS 10
Scenario
- Prefill seq: 1024
- Decode seq: 256
- Batch: 16
- Max concurrent: 64
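The scenario parameters imply a few workload-sizing figures. A minimal sketch of the arithmetic (the per-request token count and worst-case in-flight KV token count below are derived values, not numbers reported by the case):

```python
# Workload sizing derived from the Scenario section.
# These are derived values, not reported by the submission itself.

PREFILL_SEQ = 1024     # prompt tokens per request
DECODE_SEQ = 256       # generated tokens per request
BATCH = 16             # decode batch size
MAX_CONCURRENT = 64    # max concurrent requests

tokens_per_request = PREFILL_SEQ + DECODE_SEQ                 # 1280
# Worst case: every concurrent request holds a full-length KV cache.
max_inflight_kv_tokens = MAX_CONCURRENT * tokens_per_request  # 81920

print(tokens_per_request)      # 1280
print(max_inflight_kv_tokens)  # 81920
```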
Results
- Decode throughput: 220 tok/s
- Prefill throughput: 3200 tok/s
- TTFT p50: 980 ms
- TBT p50: 152 ms
- Memory/card: 28 GB
- Power/card: 290 W
- Compute util: 18%
- Memory BW util: 42%
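The reported results can be combined into a few derived figures such as per-card throughput and tokens per joule. A quick sketch of that arithmetic (the derived quantities are mine, not part of the submission):

```python
# Derived metrics from the Results section (my arithmetic, not reported).

DECODE_TOK_S = 220       # aggregate decode throughput, tok/s
CARDS = 16
POWER_PER_CARD_W = 290
MEM_PER_CARD_GB = 28

decode_tok_s_per_card = DECODE_TOK_S / CARDS         # 13.75 tok/s per card
total_power_w = POWER_PER_CARD_W * CARDS             # 4640 W across the rig
decode_tok_per_joule = DECODE_TOK_S / total_power_w  # ~0.047 tok/J at decode
total_mem_used_gb = MEM_PER_CARD_GB * CARDS          # 448 GB in use overall

print(decode_tok_s_per_card)  # 13.75
print(total_power_w)          # 4640
print(total_mem_used_gb)      # 448
```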
Same-model side-by-side
Throughput comparison between this case and other cases running the same model
Bottleneck — software
Compute 18% · Memory BW 42% · Other 40%
Reproduction
Serve: lmdeploy serve api_server deepseek-ai/DeepSeek-R1 --tp 8 --pp 2 --backend ixrt
Benchmark tool: lmdeploy bench
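LMDeploy's `api_server` exposes an OpenAI-compatible HTTP API, so a smoke test against the served model could look like the sketch below. The base URL (lmdeploy's usual default port 23333), the model name string, and the helper names are assumptions for illustration, not part of the submitted reproduction:

```python
# Minimal smoke-test client for an OpenAI-compatible chat endpoint.
# Base URL/port and model name are assumptions, not from the case report.
import json
from urllib.request import Request, urlopen


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat payload (max_tokens matches the decode seq)."""
    return {
        "model": "deepseek-ai/DeepSeek-R1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.0,
    }


def send(payload: dict, base_url: str = "http://127.0.0.1:23333") -> dict:
    """POST the payload to the server's chat completions endpoint."""
    req = Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Only builds and prints the payload; sending requires a live server.
    print(json.dumps(build_chat_request("Say hello.", max_tokens=16), indent=2))
```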
Issues encountered
- PCIe Gen4 cross-card communication became the bottleneck; intra-TP communication accounted for roughly 35% of step time
- IxRT 1.8 does not yet support FP8
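The "~35% of step time in TP communication" figure can be turned into an implied per-collective cost. The sketch below assumes DeepSeek-R1's 61 transformer layers split evenly across PP=2 and the usual two tensor-parallel all-reduces per layer (after attention and after the MLP); both are assumptions about the setup, only the 152 ms TBT and the 35% share come from the case:

```python
# Back-of-envelope: implied cost per TP all-reduce during decode.
# Layer count and all-reduce-per-layer pattern are assumptions,
# not numbers reported by the case.

TBT_MS = 152                    # reported p50 time between tokens
COMM_FRACTION = 0.35            # reported share of step time in TP comm
LAYERS_PER_PP_STAGE = 61 // 2   # assumed: 61 layers total, PP=2 -> ~30/stage
ALLREDUCES_PER_LAYER = 2        # assumed: one after attention, one after MLP

comm_ms = TBT_MS * COMM_FRACTION                          # ~53.2 ms per step
n_allreduce = LAYERS_PER_PP_STAGE * ALLREDUCES_PER_LAYER  # 60 collectives
implied_ms_per_allreduce = comm_ms / n_allreduce          # ~0.89 ms each

print(n_allreduce)  # 60
```

At decode batch sizes the payload of each all-reduce is tiny, so a per-collective cost near a millisecond points at latency over PCIe Gen4 rather than bandwidth, consistent with the reported bottleneck.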
Optimization patterns
Citations
[1] Iluvatar 天垓 100 + DeepSeek R1 community port testing — https://www.iluvatar.com/ · verified by on-site measurement 2026-04-28
Attestation: Numbers extracted from the Iluvatar community port; not independently re-run.