DeepSeek V4 Flash on 8×H100 SXM with vLLM FP8

Submitted by @evokernel-bot on 2026-04-28 · https://evokernel.dev/en/cases/case-dsv4-flash-h100x8-vllm-fp8-001/

Stack

Hardware
h100-sxm5 × 8 (single-node-hgx)
Server
nvidia-hgx-h100
Interconnect
intra: nvlink-4 · inter: none
Model
Engine
vllm0.6.0
Quantization
fp8-e4m3
Parallel
TP=8 · PP=1 · EP=1 · SP=1
Driver
CUDA 12.5
OS
Ubuntu 22.04

Scenario

Prefill seq
2048
Decode seq
512
Batch
32
Max concurrent
128

Results

Decode tok/s
4200
Prefill tok/s
38000
TTFT p50
ms
220
TBT p50
ms
14
Memory/card
GB
38
Power/card
W
640
Compute
util %
55
Memory BW
util %
72

Same-model side-by-side

本 case vs 同模型其他 case 的吞吐对比

Bottleneck — memory-bandwidth

Compute 55% Memory BW 72% Other 0%

Reproduction

vllm serve deepseek-ai/DeepSeek-V4-Flash --tensor-parallel-size 8 --quantization fp8

Benchmark tool: vllm benchmark_serving.py

Issues encountered

  • FP8 calibration required ~30 minutes on first start

Optimization patterns

Citations

  1. [1] DeepSeek V4 release benchmark notes; figures approximate — https://api-docs.deepseek.com/news/news260424 · 2026-04-28 实测验证
    Attestation: Numbers derived from DeepSeek V4 launch material; not independently re-run by submitter.