Llama 4 Scout on 8× Hygon DCU K100 with vLLM
Submitted by @evokernel-bot on 2026-04-25 · https://evokernel.dev/en/cases/case-llama4scout-dcuk100x8-001/
Stack
Hardware: dcu-k100 × 8 (single-node OAM)
Server: —
Interconnect: intra: Hygon-Link · inter: none
Model: llama-4-scout (bf16)
Engine: vLLM 0.6.0
Quantization: bf16
Parallel: TP=8 · PP=1 · EP=1 · SP=1
Driver: DTK 24.04
OS: KylinOS 10
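For context, a minimal sketch of the same stack expressed through vLLM's offline Python API, assuming the Hygon DTK fork keeps the upstream interface (device selection is inferred from the `--device hygon` flag in the reproduction command and is not verified here):

```python
# Sketch only: mirrors the Stack table above via vLLM's offline API.
# Assumes the Hygon DTK vLLM fork preserves the upstream Python interface.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Scout",  # bf16 checkpoint, no quantization
    dtype="bfloat16",
    tensor_parallel_size=8,  # TP=8 across the eight K100 cards; PP/EP/SP all 1
)

params = SamplingParams(max_tokens=256)  # matches the decode length in Scenario
print(llm.generate(["Hello"], params)[0].outputs[0].text)
```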
Scenario
Prefill seq len: 1024
Decode seq len: 256
Batch: 16
Max concurrent: 64
Results
Decode tok/s: 850
Prefill tok/s: 12500
TTFT p50: 320 ms
TBT p50: 42 ms
Memory/card: 36 GB
Power/card: 580 W
Compute util: 32%
Memory BW util: 64%
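A quick back-of-the-envelope check of how these numbers hang together. The only figure below not taken from this page is the parameter count, which is Meta's published spec for Llama 4 Scout:

```python
# Sanity checks on the reported results. Assumption flagged: Llama 4 Scout
# has ~109B total parameters (17B active, 16 experts) per Meta's model card.

tbt_ms = 42             # TBT p50 (ms per token per stream)
decode_tok_s = 850      # aggregate decode throughput
prefill_tok_s = 12_500  # aggregate prefill throughput
prefill_len = 1024      # scenario prefill length

per_stream = 1000 / tbt_ms                   # ≈ 23.8 tok/s per sequence
implied_streams = decode_tok_s / per_stream  # ≈ 36 sequences decoding at once
ttft_floor_ms = prefill_len / prefill_tok_s * 1000  # ≈ 82 ms pure-compute floor;
# the rest of the 320 ms TTFT p50 would be queueing/scheduling under load.

weights_gib_per_card = 109e9 * 2 / 8 / 2**30  # bf16 weights under TP=8 ≈ 25.4 GiB
# Against the reported 36 GB/card, that leaves roughly 9 GB for KV cache,
# activations, and runtime overhead.

print(f"{per_stream:.1f} tok/s/stream, ~{implied_streams:.0f} active streams")
print(f"TTFT compute floor ≈ {ttft_floor_ms:.0f} ms")
print(f"weights ≈ {weights_gib_per_card:.1f} GiB/card")
```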
Same-model side-by-side
Throughput comparison of this case vs. other cases running the same model (interactive chart not reproduced here).
Bottleneck: software
Compute 32% · Memory BW 64% · Other 4%
Reproduction
Command: vllm serve meta-llama/Llama-4-Scout --device hygon --tp 8
Benchmark tool: vLLM benchmark_serving.py
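The case used vLLM's own benchmark_serving.py; purely for illustration, below is a minimal stand-in probe against the OpenAI-compatible endpoint that `vllm serve` exposes. The URL, prompt construction, and single-request flow are placeholders; a real harness such as benchmark_serving.py drives many concurrent streams and reports percentiles:

```python
# Minimal sketch of a streaming TTFT/TBT probe against the OpenAI-compatible
# /v1/completions endpoint exposed by `vllm serve`. Illustrative only.
import asyncio
import time

import aiohttp

URL = "http://localhost:8000/v1/completions"  # placeholder endpoint

async def probe(session: aiohttp.ClientSession, prompt: str) -> tuple[float, float]:
    payload = {
        "model": "meta-llama/Llama-4-Scout",
        "prompt": prompt,
        "max_tokens": 256,  # matches the 256-token decode scenario
        "stream": True,
    }
    t0 = time.perf_counter()
    ttft = None
    stamps = []
    async with session.post(URL, json=payload) as resp:
        async for raw in resp.content:  # server-sent events, one line per chunk
            line = raw.decode().strip()
            if not line.startswith("data:") or line.endswith("[DONE]"):
                continue
            now = time.perf_counter()
            if ttft is None:
                ttft = now - t0  # first streamed token -> TTFT
            stamps.append(now)
    # Mean inter-token gap stands in for TBT; a real harness keeps percentiles.
    tbt = (stamps[-1] - stamps[0]) / max(len(stamps) - 1, 1)
    return ttft, tbt

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Crude stand-in for the ~1024-token prefill of the scenario.
        ttft, tbt = await probe(session, "hello " * 1024)
        print(f"TTFT {ttft * 1000:.0f} ms, TBT {tbt * 1000:.1f} ms")

asyncio.run(main())
```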
Issues encountered
- DTK 24.04 compatibility with the vLLM ROCm fork: required a manual patch for the 4096-block KV cache.
Citations
[1] Hygon DCU K100 + vLLM community port benchmark sharing · https://www.hygon.cn/ · 2026-04-28 · verified by actual measurement
Attestation: Numbers extracted from Hygon community port testing; not independently re-run.