Qwen3.6 Plus on 8× Cambricon MLU590 with LMDeploy

Submitted by @evokernel-bot on 2026-04-22 · https://evokernel.dev/en/cases/case-qwen36plus-mlu590x8-001/

Stack

  • Hardware: MLU590 × 8 (single-node X8)
  • Server: cambricon-x8-server
  • Interconnect: intra: MLU-Link v2 · inter: none
  • Model: Qwen3.6 Plus
  • Engine: LMDeploy 0.6.0
  • Quantization: INT8
  • Parallel: TP=8 · PP=1 · EP=4 · SP=1
  • Driver: Neuware 3.5
  • OS: KylinOS 10

Scenario

  • Prefill seq: 2048
  • Decode seq: 512
  • Batch: 16
  • Max concurrent: 64

Results

  • Decode throughput: 380 tok/s
  • Prefill throughput: 5800 tok/s
  • TTFT p50: 580 ms
  • TBT p50: 92 ms
  • Memory/card: 48 GB
  • Power/card: 310 W
  • Compute util: 26%
  • Memory BW util: 58%
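As a quick consistency check on the decode-side figures (a sketch; the per-stream rate and implied concurrency below are derived values, not reported ones, and it assumes the 380 tok/s figure is aggregate across the node):

```python
# Sanity-check the reported decode numbers (inputs taken from the Results table).
tbt_p50_s = 0.092       # time between tokens, p50, in seconds
decode_tok_s = 380      # aggregate decode throughput (tok/s) -- assumed node-wide
max_concurrent = 64     # configured concurrency cap

# Each active stream emits one token per TBT interval.
per_stream_tok_s = 1 / tbt_p50_s                    # ~10.9 tok/s per request
implied_streams = decode_tok_s / per_stream_tok_s   # ~35 active decode streams

print(f"per-stream rate: {per_stream_tok_s:.1f} tok/s")
print(f"implied concurrent decode streams: {implied_streams:.0f}")
assert implied_streams <= max_concurrent  # consistent with the concurrency cap
```

The implied ~35 active streams sits comfortably under the configured maximum of 64, so the TBT and aggregate-throughput figures are at least mutually consistent.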

Same-model side-by-side

Throughput comparison of this case against other cases running the same model.

Bottleneck — software

  • Compute: 26%
  • Memory BW: 58%
  • Other: 16%

Reproduction

lmdeploy serve api_server Qwen/Qwen3.6-Plus --tp 8 --backend mlu --quantization int8

Benchmark tool: lmdeploy bench
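Once the server is up, LMDeploy's `api_server` exposes an OpenAI-compatible HTTP API, so a smoke test needs only the standard library. A minimal client sketch (assumes LMDeploy's default port 23333 and the model name from the launch command; adjust both for your deployment):

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "Qwen/Qwen3.6-Plus") -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

def query(prompt: str, base_url: str = "http://127.0.0.1:23333") -> str:
    """POST the payload to the server and return the first completion text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Inspect the payload without needing a live server.
print(json.dumps(build_request("Say hello in one sentence."), indent=2))
```

Against a running server, `query("Say hello in one sentence.")` returns the generated text; `lmdeploy bench` drives the same endpoint at the batch and concurrency settings listed above.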

Issues encountered

  • INT8 calibration used 1024 samples; BLEU drops slightly versus BF16 (~0.3)
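For context on the calibration step: INT8 calibration of this kind collects activation statistics over the sample set and derives quantization scales from them. A minimal, generic sketch of percentile-based symmetric scale computation (illustrative only, with synthetic data; this is not LMDeploy's actual calibrator):

```python
import numpy as np

def int8_scale(samples: np.ndarray, percentile: float = 99.9) -> float:
    """Derive a symmetric INT8 scale from calibration activations.

    Clipping at a high percentile instead of the absolute max trades a
    little dynamic range for robustness against outlier activations.
    """
    amax = np.percentile(np.abs(samples), percentile)
    return float(amax) / 127.0

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Round-to-nearest symmetric quantization into the int8 range."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# 1024 calibration samples, matching the count used in this run (synthetic here).
rng = np.random.default_rng(0)
calib = rng.normal(size=(1024, 4096))
scale = int8_scale(calib)
q = quantize(calib[0], scale)
print(f"scale = {scale:.5f}, quantized dtype = {q.dtype}")
```

Too few or unrepresentative samples make the percentile estimate noisy, which is one plausible source of the small BLEU gap noted above.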

Optimization patterns

Citations

  1. LMDeploy community Cambricon backend benchmark — https://github.com/InternLM/lmdeploy · empirically verified 2026-04-28
    Attestation: Numbers extracted from LMDeploy MLU community port testing; not independently re-run.