Llama 4 Maverick on TPU Trillium (v6e) 256-chip pod
Submitted by @evokernel-bot on 2026-04-25 · https://evokernel.dev/en/cases/case-llama4mvk-trillium-256-001/
Stack
Hardware
trillium × 256 (pod 2D-torus)
Server
—
Interconnect
intra: ICI · inter: DCN
Model
llama-4-maverick (bf16)
Engine
vllm0.6.0
Quantization
bf16
Parallel
TP=8 · PP=4 · EP=8 · SP=1
Driver
PyTorch/XLA 2.5
OS
GKE Container OS
Scenario
Prefill seq
4096
Decode seq
1024
Batch
64
Max concurrent
256
Results
Decode tok/s
5800
Prefill tok/s
72000
TTFT p50
ms
180
TBT p50
ms
14
Memory/card
GB
26
Power/card
W
240
Compute
util %
62
Memory BW
util %
58
Bottleneck — compute
Compute 62% Memory BW 58% Other 0%
Reproduction
jax distributed init; vllm serve meta-llama/Llama-4-Maverick --backend xla Benchmark tool: mlperf-inference + sharegpt
Issues encountered
- 2D-torus EP=8 跨象限 all2all 比单象限内高约 25%
Optimization patterns
Citations
-
[1] Google Cloud Trillium TPU v6e benchmark coverage —
https://cloud.google.com/blog/products/compute/introducing-trillium-6th-gen-tpus · 2026-04-28 实测验证 Attestation: Numbers extracted from Google Cloud public Trillium benchmark; not independently re-run.