T-Head (Pingtouge) · China · Last verified: —

T-Head HanGuang 800 (平头哥 含光 800)

PCIe · In production · Released 2019 · hanguang-gen1
BF16 TFLOP/s    —
FP8 TFLOP/s     unsupported
FP4 TFLOP/s     unsupported
Memory          16 GB       (vendor-claimed)
Mem BW          256 GB/s    (vendor-claimed)
TDP             280 W       (vendor-claimed)

Full specs

Compute

FP4 TFLOPS    unsupported
FP8 TFLOPS    unsupported
BF16 TFLOPS   —
FP16 TFLOPS   25
INT8 TOPS     825

Memory

Capacity    16 GB
Bandwidth   256 GB/s
Type        LPDDR5
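
As a sanity check on the numbers above, here is a rough roofline sketch (mine, not part of the card's data; Python with illustrative variable names) computing the ridge point and roughly how large an INT8 model fits in 16 GB:

    # Roofline arithmetic from the vendor-claimed specs above.
    INT8_TOPS = 825        # peak INT8 throughput, TOPS
    FP16_TFLOPS = 25       # peak FP16 throughput, TFLOPS
    MEM_BW_GBPS = 256      # LPDDR5 bandwidth, GB/s
    CAPACITY_GB = 16       # on-card memory

    # Ridge point: arithmetic intensity at which the card stops
    # being bandwidth-bound.
    ridge_int8 = INT8_TOPS * 1e12 / (MEM_BW_GBPS * 1e9)    # ≈ 3223 ops/byte
    ridge_fp16 = FP16_TFLOPS * 1e12 / (MEM_BW_GBPS * 1e9)  # ≈ 98 FLOPs/byte
    print(f"ridge: INT8 ≈ {ridge_int8:.0f} ops/B · FP16 ≈ {ridge_fp16:.0f} FLOPs/B")

    # Largest dense INT8 model (1 byte/param) that fits, assuming ~20%
    # of capacity is reserved for KV cache and activations (my guess).
    print(f"~{CAPACITY_GB * 0.8:.1f}B INT8 params per card")

Single-token decode sits at an arithmetic intensity near 1 op/byte, far below either ridge, so decode on this card is bandwidth-bound at any datatype.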

Die architecture 🟢 vendor floorplan

Cluster count   4
Process         12 nm
Transistors     17 B
PCIe            Gen 4 ×16

Scale-Up (intra-node)

Protocol      PCIe-Gen4
Per-link BW   64 GB/s
World size    4
Topology      pcie-fabric
Switch        —

Scale-Out (inter-node)

Per-card NIC   100 Gbps
Protocol       RoCEv2
NIC            —
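
For intuition about what the 64 GB/s links buy, here is a minimal bandwidth-only model of a ring all-reduce across the 4-card domain (a sketch; it ignores latency, switch hops, and protocol overhead, so treat results as lower bounds):

    def ring_allreduce_seconds(msg_bytes: float, world: int = 4,
                               link_gbps: float = 64.0) -> float:
        # A ring all-reduce moves 2*(world-1)/world of the message
        # over each rank's link.
        traffic = 2 * (world - 1) / world * msg_bytes
        return traffic / (link_gbps * 1e9)

    # e.g. all-reducing a 64 MiB tensor across the 4-card domain:
    print(f"{ring_allreduce_seconds(64 * 2**20) * 1e3:.2f} ms")  # ≈ 1.57 ms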

Topology

Topology
4-card scale-up domain
Die-level architecture
[Die-level diagram: HanGuang 800 — 4 Clusters (darker block = tensor / matrix engine), per-Cluster L1$ / register file, shared L2 cache · NoC, LPDDR5 16 GB @ 0.3 TB/s, 280 W TDP]

⚠ Illustrative floorplan: compute-unit and HBM-stack counts are inferred from public BF16 / memory specs; the architecture field is not yet populated for this card.


Cluster topology · PCIe-Gen4 @ 64 GB/s
[Cluster diagram: PCIe-Gen4 switch, 64 GB/s/link, all-to-all across GPU 0–3 (16 GB each) · 4 cards, pcie-fabric topology · scale-out: 100 Gbps/card]
Scale-Up · intra-domain
PCIe-Gen4
64 GB/s · topology: pcie-fabric
world_size = 4
Scale-Out · inter-domain
RoCEv2
100 Gbps/card NIC

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=4 · INT8 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied

Adjust in the calculator →
Model               Vendor     Params (active)   Decode tok/s/card   Bottleneck
DeepSeek V4 Pro     deepseek   49B               —                   insufficient memory
DeepSeek V4 Flash   deepseek   13B               —                   insufficient memory
Mistral Small 4     mistral    22B               —                   insufficient memory
GLM-5 Reasoning     zhipu      32B               3                   memory bandwidth
GLM-5.1             zhipu      32B               —                   insufficient memory
Qwen3.6 Plus        alibaba    35B               —                   insufficient memory
Kimi K2.6           moonshot   32B               —                   insufficient memory
MiniMax M2.7        minimax    46B               —                   insufficient memory
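
For intuition on where these numbers come from, a bandwidth-bound decode sketch (assumptions mine: INT8 weights streamed once per decoded token, TP=4, the default 0.5 efficiency; the calculator additionally models KV-cache traffic, attention, and PCIe all-reduce, which is why its figure for GLM-5 Reasoning, 3 tok/s/card, is lower than this bound):

    def decode_tok_s_upper(params_b: float, tp: int = 4,
                           bw_gbps: float = 256, eff: float = 0.5) -> float:
        # Each card re-reads its INT8 weight shard (1 byte/param)
        # once per decoded token.
        shard_bytes = params_b * 1e9 / tp
        return bw_gbps * 1e9 * eff / shard_bytes

    print(f"{decode_tok_s_upper(32):.0f} tok/s/card")  # ≈ 16 for a 32B model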

Operator-level fit · per-model bottleneck + upper bound

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition plus this card's BF16 peak (0 TFLOPS, field not populated) and 256 GB/s bandwidth · ridge point ≈ 0 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied
Model               Domain   Dominant op   AI (FLOPs/B)   Bottleneck   tok/s upper bound
DeepSeek V4 Flash   llm      matmul        0.8            🔥 compute   —
DeepSeek V4 Pro     llm      matmul        245.5          🔥 compute   —
Kimi K2.6           llm      matmul        0.8            🔥 compute   —
MiniMax M2.7        llm      matmul        0.6            🔥 compute   —
GLM-5.1             llm      matmul        0.8            🔥 compute   —
Qwen3.6 Plus        llm      matmul        0.7            🔥 compute   —
Mistral Small 4     llm      matmul        0.6            🔥 compute   —
GLM-5 Reasoning     llm      matmul        0.9            🔥 compute   —
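
The upper bound used in this table is min(compute roof, AI × memory-bandwidth roof). A minimal sketch of that rule (illustrative function name; note how the unpopulated BF16 peak collapses the roof to zero, which is why the tok/s column above is empty):

    def op_roof_flops(ai_flops_per_byte: float, peak_tflops: float,
                      bw_gbps: float) -> float:
        # Roofline: achievable FLOP/s for one operator.
        compute_roof = peak_tflops * 1e12
        bandwidth_roof = ai_flops_per_byte * bw_gbps * 1e9
        return min(compute_roof, bandwidth_roof)

    print(op_roof_flops(0.8, peak_tflops=0, bw_gbps=256))   # 0.0 (BF16 unpopulated)
    print(op_roof_flops(0.8, peak_tflops=25, bw_gbps=256))  # 2.048e11 via the FP16 path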
Needs efficiency calibration + concurrency sweep + TCO estimate → evaluate in the calculator →

Operator support & optimization headroom


Per-operator support is derived from software_support.engines plus the scale-up topology; optimization headroom comes from the measured efficiency factor.

Optimization headroom
+50 pp
moderate

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.
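
The +50 pp figure above presumably falls straight out of the default factor; a one-liner sketch under that assumption (headroom = 1 − efficiency, in percentage points):

    default_eff = 0.5
    print(f"+{(1.0 - default_eff) * 100:.0f} pp headroom")        # +50 pp
    print(f"{825 * default_eff:.0f} effective INT8 TOPS of 825")  # 412 effective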

Communication (collective)
All-to-All 🟡 partial
small scale-up domain; expert-parallel needs careful sharding
AllReduce 🟢 mature
PCIe-Gen4 ring all-reduce
Attention
Multi-Head Attention 🟡 partial
no production attention engine
FlashAttention-3 🔴 gap
No FA-3 path; falls back to FA-2 / vanilla SDPA
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🔴 gap
no MoE-aware engine
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software-stack support

Engine                   Status        BF16   FP16   FP4   FP8 E4M3   FP8 E5M2   INT4 AWQ
HanGuangAI official
LMDeploy unconfirmed
MindIE unconfirmed
MoRI unconfirmed
SGLang unconfirmed
TensorRT-LLM (Dynamo) unconfirmed
vLLM unconfirmed

Existing deployment cases (0)

No measured cases yet for this card. Be the first contributor?

Citations

  [1] T-Head HanGuang 800 launch coverage (Alibaba Cloud Apsara 2019) — https://www.t-head.cn/ · accessed 2026-04-28 · vendor-claimed
  [2] HanGuang 800 (含光800): 4-cluster NPU, 17B transistors @ TSMC 12nm; INT8-focused inference accelerator (Apsara 2019 launch) — https://www.t-head.cn/ · accessed 2026-04-28 · community estimate
⚠ HanGuang 800 is INT8 inference-focused; not designed for FP training.
⚠ Specs are vendor-claimed; LLM inference support is not the primary use case.