CHINA HUB

Chinese AI inference hardware

Coverage: 9 vendors · 14 accelerators · 2 super-pod blueprints (incl. CloudMatrix 384). All numbers carry evidence citations; missing fields are honestly flagged.

Chinese silicon × frontier models matrix

🟢 measured · 🟡 vendor-claimed (unverified) · — unsupported / unknown.

Generation genealogy

Year-by-year evolution per vendor.

Vendor  2019  2020  2021  2022  2023  2024  2025  2026
Biren Technology · · · BR100 BR104 · · ·
Cambricon · · · MLU370-X8 590 · · ·
Enflame · · · · T21 · · ·
Huawei Ascend · · · · 910B 910C · 950
Hygon · · · Z100 · K100 · ·
Iluvatar CoreX · · · · 100 · · ·
MetaX · · · · · C500 · ·
Moore Threads · · · · · S4000 · ·
T-Head (Pingtouge) 800 · · · · · · ·

Software ecosystem comparison

Programming model · operator library · inference engine · model zoo.

Vendor | Programming model | Operator library | Inference engines | Model zoo
Biren Technology | BIRENSUPA | BIRENSUPA op library | — | —
Cambricon | BANG / Neuware | CNNL | LMDeploy | —
Enflame | TopsRider | Enflame SDK | vLLM | —
Huawei Ascend | CANN / Ascend C | AscendCL | MindIE · vLLM · LMDeploy | link ↗
Hygon | DTK / HIP | DCU op library | vLLM | —
Iluvatar CoreX | IxRT / CoreX | CoreX op library | — | —
MetaX | MACA / MetaX SDK | MetaX op library | — | —
Moore Threads | MUSA | MUSA Toolkit | vLLM | —
T-Head (Pingtouge) | HanGuangAI | HanGuang op library | HanGuangAI | —
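The ecosystem table above can be held as a small lookup structure for programmatic queries, e.g. "which vendors have a vLLM backend?". This is a minimal sketch; the `ecosystem` dict and its key names are illustrative, not an official API, and the engine lists reproduce only what the table shows.

```python
# Flattened view of the ecosystem table; empty engine lists mean the
# table showed no inference engine for that vendor.
ecosystem = {
    "Biren Technology": {"programming_model": "BIRENSUPA", "inference_engines": []},
    "Cambricon": {"programming_model": "BANG / Neuware", "inference_engines": ["LMDeploy"]},
    "Enflame": {"programming_model": "TopsRider", "inference_engines": ["vLLM"]},
    "Huawei Ascend": {"programming_model": "CANN / Ascend C",
                      "inference_engines": ["MindIE", "vLLM", "LMDeploy"]},
    "Hygon": {"programming_model": "DTK / HIP", "inference_engines": ["vLLM"]},
    "Iluvatar CoreX": {"programming_model": "IxRT / CoreX", "inference_engines": []},
    "MetaX": {"programming_model": "MACA / MetaX SDK", "inference_engines": []},
    "Moore Threads": {"programming_model": "MUSA", "inference_engines": ["vLLM"]},
    "T-Head (Pingtouge)": {"programming_model": "HanGuangAI",
                           "inference_engines": ["HanGuangAI"]},
}

# Vendors that list a vLLM backend in the table.
vllm_vendors = sorted(v for v, e in ecosystem.items()
                      if "vLLM" in e["inference_engines"])
```

Per the table, `vllm_vendors` comes out to Enflame, Huawei Ascend, Hygon, and Moore Threads.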

Chinese super-pods

Rack-scale scale-up systems, benchmarked against NVIDIA's GB200 NVL72.

Huawei Atlas 900 SuperPoD A2

Cards: 256
Scale-up domain: 256 cards
Total memory: 32.0 TB
Rack power: 200 kW
Scale-up fabric: HCCS-v2
Cooling: liquid

Huawei CloudMatrix 384 (Ascend super-node, 昇腾超节点)

Cards: 384
Scale-up domain: 384 cards
Total memory: 48.0 TB
Rack power: 600 kW
Scale-up fabric: LingQu
Cooling: liquid
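As a sanity check on the spec cards above, the pod-level numbers can be reduced to per-card figures. A minimal sketch, assuming decimal units (1 TB = 1000 GB, 1 kW = 1000 W); note the rack-power figure likely includes fabric and cooling overhead, so per-card watts are an upper bound rather than accelerator TDP:

```python
def per_card(pods):
    """Derive per-card memory (GB) and power (W) from pod-level specs."""
    return {
        name: {
            "mem_gb": p["memory_tb"] * 1000 / p["cards"],
            "power_w": p["power_kw"] * 1000 / p["cards"],
        }
        for name, p in pods.items()
    }

# Numbers taken directly from the two spec cards above.
pods = {
    "Atlas 900 SuperPoD A2": {"cards": 256, "memory_tb": 32.0, "power_kw": 200},
    "CloudMatrix 384": {"cards": 384, "memory_tb": 48.0, "power_kw": 600},
}
derived = per_card(pods)
```

Both pods work out to 125 GB of memory per card, while per-card power differs by a factor of two (781 W vs. 1 562 W), consistent with the CloudMatrix rack drawing three times the power for 1.5× the cards.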