Moore Threads · Domestic (China)

Moore Threads MTT S4000

PCIe · On sale · Released 2024 · kuae-s4000
BF16     100 TFLOP/s    vendor-claimed
FP8      not supported
FP4      not supported
Memory   48 GB          vendor-claimed
Mem BW   768 GB/s       vendor-claimed
TDP      450 W          vendor-claimed

Full specifications

Compute

FP4    not supported
FP8    not supported
BF16   100 TFLOPS
FP16   100 TFLOPS
INT8   200 TOPS

Memory

Capacity    48 GB
Bandwidth   768 GB/s
Type        GDDR6

Chip architecture 🟢 vendor floorplan

Cluster count   48
Process node    7 nm
PCIe            Gen 5 ×16

Scale-Up (intra-node)

Protocol             MTLink
Per-link bandwidth   240 GB/s
World size           8
Topology             ring
Switch               —

Scale-Out (inter-node)

Per-card egress   200 Gbps
Protocol          RoCEv2
NIC               —

Topology diagrams

Topology · 8-card scale-up domain

Die-level architecture
[Figure: Moore Threads MTT S4000 die (illustrative): 48 Clusters (darker block = tensor / matrix engine), per-Cluster L1$ / register file, shared L2 cache over the NoC, GDDR6 48 GB @ 0.8 TB/s. Caption: 100 TFLOPS BF16 · 48 GB GDDR6 @ 0.8 TB/s · 450 W TDP]

⚠ Illustrative floorplan: compute-unit and memory-chip counts are inferred from public BF16 / memory specs; the architecture field is not yet populated for this card.


Cluster topology · MTLink @ 240 GB/s
[Figure: 8-card cluster: GPU 0–7, 48 GB each, connected by an MTLink switch at 240 GB/s per link (all-to-all); ring topology; scale-out at 200 Gbps per card]
Scale-Up · intra-domain
MTLink
240 GB/s · topology: ring
world_size = 8
Scale-Out · inter-domain
RoCEv2
200 Gbps/card NIC
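
The two fabrics are quoted in different units (GB/s vs Gbps), which hides how lopsided they are. A two-line unit-conversion check, using only the figures above:

```python
mtlink_gb_s = 240                 # per MTLink link, GB/s
roce_gb_s = 200 / 8               # 200 Gbps NIC ≈ 25 GB/s
print(mtlink_gb_s / roce_gb_s)    # ≈ 9.6× more bandwidth inside the 8-card domain
```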

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · FP16 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied

Model               Vendor     Params (active)   Decode tok/s/card   Bottleneck
DeepSeek V4 Pro     deepseek   49B               —                   insufficient VRAM
DeepSeek V4 Flash   deepseek   13B               —                   insufficient VRAM
Mistral Small 4     mistral    22B               9                   memory bandwidth
GLM-5 Reasoning     zhipu      32B               8                   memory bandwidth
GLM-5.1             zhipu      32B               —                   insufficient VRAM
Qwen3.6 Plus        alibaba    35B               —                   insufficient VRAM
Kimi K2.6           moonshot   32B               —                   insufficient VRAM
MiniMax M2.7        minimax    46B               —                   insufficient VRAM
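
The "insufficient VRAM" verdicts are a capacity check, not a speed check: FP16 weights plus KV cache must fit in the pooled memory of the TP=8 group (8 × 48 GB). Note the table lists active parameters; for MoE models the total weights that must be resident can be several times larger, which is why even modest active counts can fail to fit. A minimal sketch of such a check, assuming the standard per-token KV-cache formula; the function name and the model shape below are hypothetical, not this site's database values:

```python
def fits_in_vram(total_params: float, n_layers: int, n_kv_heads: int,
                 head_dim: int, batch: int, seq_len: int,
                 tp: int = 8, vram_per_card_gb: float = 48.0) -> bool:
    """Capacity check: FP16 weights + FP16 KV cache vs. pooled TP-group VRAM."""
    weight_bytes = total_params * 2                       # FP16: 2 bytes/param
    # KV cache: 2 tensors (K, V) x layers x kv_heads x head_dim x 2 bytes/elem
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * 2 * batch * seq_len
    budget = tp * vram_per_card_gb * 1e9 * 0.9            # ~10% reserved for activations
    return weight_bytes + kv_bytes <= budget

# Hypothetical MoE with 46B active but ~460B total params, batch=16,
# prefill 1024 + decode 256 -> does not fit in 8 x 48 GB:
print(fits_in_vram(460e9, n_layers=60, n_kv_heads=8, head_dim=128,
                   batch=16, seq_len=1024 + 256))         # False
```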

Operator-level fit · bottleneck type + upper bound for any model

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 100 TFLOPS / 768 GB/s · ridge point ≈ 130 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied
Model               Domain       Dominant operator       AI (FLOPs/byte)   Bottleneck            tok/s upper bound
DeepSeek V4 Pro     llm          matmul                  245.5             🔥 compute            17k
GraphCast           scientific   graph-message-passing   0.9               💾 memory bandwidth   1417
AlphaFold 3         scientific   pair-bias-attention     2.3               💾 memory bandwidth   426
GPT-OSS             llm          matmul                  0.7               💾 memory bandwidth   62
Gemma 4 26B         llm          matmul                  0.7               💾 memory bandwidth   46
DeepSeek V4 Flash   llm          matmul                  0.8               💾 memory bandwidth   44
Mistral Small 4     llm          matmul                  0.6               💾 memory bandwidth   20
Llama 4 Maverick    llm          matmul                  0.8               💾 memory bandwidth   20
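
Each verdict above is the textbook roofline rule: an operator whose arithmetic intensity (AI, FLOPs per byte) falls below the ridge point (100 TFLOPS ÷ 768 GB/s ≈ 130 FLOPs/byte) hits the memory roof first; above it, the compute roof binds. A minimal sketch under those definitions; the function names and the example model are ours, not the site's:

```python
PEAK_FLOPS = 100e12            # BF16, vendor-claimed
PEAK_BW = 768e9                # GDDR6 bandwidth, bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW   # ≈ 130 FLOPs/byte

def roofline(flops_per_token: float, bytes_per_token: float):
    """Per-token roofline: bottleneck verdict + tok/s ceiling (no efficiency)."""
    ai = flops_per_token / bytes_per_token        # arithmetic intensity
    verdict = "compute" if ai >= RIDGE else "memory"
    ceiling = min(PEAK_FLOPS / flops_per_token,   # compute roof
                  PEAK_BW / bytes_per_token)      # memory-bandwidth roof
    return ai, verdict, ceiling

# Hypothetical dense 13B FP16 model at batch=1: ~2N FLOPs and ~2N bytes
# per decoded token, so AI ≈ 1 and the memory roof binds.
ai, verdict, ceiling = roofline(2 * 13e9, 2 * 13e9)
print(f"AI={ai:.1f} FLOPs/byte · {verdict}-bound · <= {ceiling:.0f} tok/s")
```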
For efficiency calibration, concurrency sweeps, and TCO estimates, evaluate in the calculator.

Operator support & headroom

Per-operator support is derived from software_support.engines plus the scale-up topology; optimization headroom comes from the measured efficiency factor.

Optimization headroom: +54 pp (large)

Currently reaching 46% of the theoretical roofline. That leaves large kernel-tuning headroom: throughput scales linearly with efficiency, so each +0.05 in efficiency is roughly +10% effective throughput (0.05 / 0.46 ≈ 0.11).

Communication (collective)
All-to-All 🟢 mature
all-to-all via MTLink world_size=8
AllReduce 🟢 mature
MTLink ring all-reduce (cost-model sketch after this list)
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🔴 gap
No FA-3 path; falls back to FA-2 / vanilla SDPA
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels
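
For the AllReduce row above, the standard ring all-reduce cost model gives a bandwidth-only lower bound on the 8-card MTLink domain: each card moves 2·(N−1)/N of the buffer over a 240 GB/s link. A sketch under that textbook model; real MTLink efficiency will sit below it:

```python
def ring_allreduce_seconds(buffer_bytes: float, n_cards: int = 8,
                           link_bw: float = 240e9) -> float:
    """Bandwidth-only lower bound for ring all-reduce (ignores latency/overlap)."""
    return 2 * (n_cards - 1) / n_cards * buffer_bytes / link_bw

# Hypothetical: all-reducing a 1 GiB activation/gradient buffer across 8 cards
t = ring_allreduce_seconds(2**30)
print(f">= {t * 1e3:.2f} ms per 1 GiB all-reduce over MTLink @ 240 GB/s")  # ≈ 7.8 ms
```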

Software stack support

Engine                  Status
HanGuangAI              unconfirmed
LMDeploy                unconfirmed
MindIE                  unconfirmed
MoRI                    unconfirmed
SGLang                  unconfirmed
TensorRT-LLM (Dynamo)   unconfirmed
vLLM                    community

(Per-dtype support columns BF16 / FP16 / FP4 / FP8 E4M3 / FP8 E5M2 / INT4 AWQ are not populated for this card.)
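
vLLM is the only engine listed with community status, and the quick-estimate table above assumes TP=8. A sketch of what that configuration looks like through vLLM's offline Python API; whether the community MUSA build accepts these exact arguments on this card is an assumption, and the model id is a placeholder:

```python
from vllm import LLM, SamplingParams

# TP=8 shards the weights across the 8-card MTLink domain; bfloat16 matches
# the card's best-supported dense datatype (FP8/FP4 are unsupported here).
llm = LLM(model="some-org/some-13b-model",   # placeholder model id
          tensor_parallel_size=8,
          dtype="bfloat16")

out = llm.generate(["Hello"], SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)
```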
Measured efficiency factor (calibration)

Computed from 1 measured deployment case on this hardware; the calculator uses this value in place of the default 0.5.

0.46
measured / theoretical (n=1)
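
The calibrated estimate is simply efficiency × roofline ceiling. As a cross-check against the tables above, the Mistral Small 4 roofline row (20 tok/s) times 0.46 gives ≈ 9 tok/s, matching its quick-estimate row. A small sketch of that step (names are ours):

```python
EFFICIENCY = 0.46   # measured / theoretical, n=1 (default would be 0.5)

def effective_tok_per_s(roofline_ceiling: float,
                        efficiency: float = EFFICIENCY) -> float:
    """Calibrated estimate: scale the roofline ceiling by measured efficiency."""
    return efficiency * roofline_ceiling

print(effective_tok_per_s(20))   # ≈ 9.2 tok/s from a 20 tok/s ceiling
# Headroom sensitivity: +0.05 efficiency ≈ +10% relative throughput
print(0.05 / EFFICIENCY)         # ≈ 0.109
```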

Known deployment cases (1)

Citations

[1] Moore Threads MTT S4000 product page · https://www.mthreads.com/product/S4000 · accessed 2026-04-28 · vendor-claimed
[2] KUAE S4000 (MUSA architecture): 48 compute clusters; PCIe Gen5 ×16; SMIC 7 nm-class fabrication · https://www.mthreads.com/product/S4000 · accessed 2026-04-28 · community estimate
⚠ All performance figures are vendor-claimed unless tier=measured.
⚠ MUSA programming model is CUDA-compatible at source level.