Moore Threads MTT S4000
Full specs
Compute
Memory
Die architecture ⚠ illustrative floorplan
Scale-Up (intra-node)
Scale-Out (inter-node)
Topology
⚠ Illustrative floorplan: compute-unit and HBM-stack counts are inferred from the public BF16 and memory specs; the architecture field is not yet populated for this card.
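The inference behind that banner is simple ratio arithmetic. A minimal sketch, using only the peak BF16 figure from this page and the cluster count from citation [2]; the resulting split is illustrative, exactly as the banner warns:

```python
# Back-of-envelope check behind the "inferred" floorplan: divide the
# card's public peak number by the cluster count from citation [2].
PEAK_BF16_TFLOPS = 100.0   # vendor-claimed BF16 peak (this page)
COMPUTE_CLUSTERS = 48      # community estimate, citation [2]

per_cluster_tflops = PEAK_BF16_TFLOPS / COMPUTE_CLUSTERS
print(f"~{per_cluster_tflops:.2f} BF16 TFLOPS per compute cluster")  # ~2.08
# The floorplan renderer only needs such ratios, which is why the
# diagram is marked illustrative rather than measured.
```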
Which models can it run?
Quick estimates · decode tok/s/card upper bound
TP=8 · FP16 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied (a sketch of the estimate follows the table)
| Model | Active params | Decode tok/s/card | Bottleneck |
|---|---|---|---|
| DeepSeek V4 Pro (DeepSeek) | 49B | — | insufficient VRAM |
| DeepSeek V4 Flash (DeepSeek) | 13B | — | insufficient VRAM |
| Mistral Small 4 (Mistral) | 22B | 9 | memory bandwidth |
| GLM-5 Reasoning (Zhipu) | 32B | 8 | memory bandwidth |
| GLM-5.1 (Zhipu) | 32B | — | insufficient VRAM |
| Qwen3.6 Plus (Alibaba) | 35B | — | insufficient VRAM |
| Kimi K2.6 (Moonshot) | 32B | — | insufficient VRAM |
| MiniMax M2.7 (MiniMax) | 46B | — | insufficient VRAM |
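A minimal sketch of the kind of bandwidth-bound estimate behind this table, assuming decode is limited by streaming each card's weight shard once per token. The function name and simplifications are illustrative, not the site's exact calculator: KV-cache reads, activation traffic, and TP all-reduce overhead are ignored here, so the calibrated table numbers land well below this ceiling.

```python
# Illustrative weights-streaming decode bound for this card.
def decode_tok_s_per_card(active_params_b: float,
                          bytes_per_param: float = 2.0,  # FP16
                          bw_gbps: float = 768.0,        # MTT S4000 peak
                          tp: int = 8,
                          efficiency: float = 0.46) -> float:
    """Every decoded token streams this card's weight shard once."""
    shard_bytes = active_params_b * 1e9 * bytes_per_param / tp
    return efficiency * bw_gbps * 1e9 / shard_bytes

# Mistral Small 4 (22B active), FP16, TP=8:
print(f"{decode_tok_s_per_card(22):.0f} tok/s/card (weights-only ceiling)")
```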
Operator-level fit · per-model bottleneck + upper bound (per-token roofline)
Computed from each model's operator_decomposition and this card's BF16 100 TFLOPS / 768 GB/s · ridge point ≈ 130 FLOPs/byte
| Model | Domain | Dominant operator | AI (FLOPs/byte) | Bottleneck | tok/s upper bound |
|---|---|---|---|---|---|
| DeepSeek V4 Pro | llm | matmul | 245.5 | 🔥 compute | 17k |
| GraphCast | scientific | graph-message-passing | 0.9 | 💾 memory bandwidth | 1417 |
| AlphaFold 3 | scientific | pair-bias-attention | 2.3 | 💾 memory bandwidth | 426 |
| GPT-OSS | llm | matmul | 0.7 | 💾 memory bandwidth | 62 |
| Gemma 4 26B | llm | matmul | 0.7 | 💾 memory bandwidth | 46 |
| DeepSeek V4 Flash | llm | matmul | 0.8 | 💾 memory bandwidth | 44 |
| Mistral Small 4 | llm | matmul | 0.6 | 💾 memory bandwidth | 20 |
| Llama 4 Maverick | llm | matmul | 0.8 | 💾 memory bandwidth | 20 |
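The bound in the last column follows the standard roofline rule: compare each model's arithmetic intensity against the ridge point, then divide the matching peak by the per-token cost. A minimal sketch with this card's numbers (the per-model FLOP and byte counts come from each model's operator_decomposition and are not reproduced here):

```python
# Per-token roofline rule used by the table above.
PEAK_FLOPS = 100e12   # BF16 peak, this page
PEAK_BYTES = 768e9    # memory bandwidth, this page
RIDGE = PEAK_FLOPS / PEAK_BYTES   # ≈ 130 FLOPs/byte

def upper_bound(flops_per_token: float, bytes_per_token: float):
    ai = flops_per_token / bytes_per_token  # arithmetic intensity (F/B)
    if ai >= RIDGE:
        return "compute", PEAK_FLOPS / flops_per_token
    return "memory bandwidth", PEAK_BYTES / bytes_per_token

print(f"ridge point ≈ {RIDGE:.0f} FLOPs/byte")
# AI ≥ 130 ⇒ compute-bound, as for DeepSeek V4 Pro (245.5) above;
# everything else in the table sits far below the ridge.
```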
Operator support & optimization headroom
Per-operator support derived from software_support.engines + scale-up topology. Optimization headroom from measured efficiency factor.
Currently reaching 46% of the theoretical roofline, which leaves large kernel-tuning headroom: every +0.05 in efficiency adds roughly 10% effective throughput.
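The headroom claim is plain arithmetic on the measured efficiency factor:

```python
# Worked check of the +0.05 efficiency claim (measured efficiency 0.46
# per the deployment-case note further down this page).
eff = 0.46
gain = (eff + 0.05) / eff - 1
print(f"+{gain:.1%} effective throughput")  # +10.9%
```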
- Communication (collective)
- Attention
- Matrix multiply (GEMM)
- MoE routing
- Normalization
- Embedding
- Activation
Closest alternative cards (by spec similarity)
Weighted Euclidean distance over BF16 compute, VRAM capacity, memory bandwidth, and FP8 compute. For reference when selecting hardware; a sketch of the metric follows.
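A sketch of that metric, assuming equal weights and a simple relative normalization; the page does not publish the actual coefficients, and the FP8 entry below is an assumption:

```python
# Sketch of the spec-similarity metric described above.
import math

S4000 = {"bf16_tflops": 100.0,  # this page
         "bw_gbps": 768.0,      # this page
         "vram_gb": 48.0,       # vendor product page [1]
         "fp8_tflops": 0.0}     # assumed: FP8 not exposed on this card

def spec_distance(a: dict, b: dict, weights: dict | None = None) -> float:
    """Weighted Euclidean distance over relative spec differences."""
    weights = weights or {k: 1.0 for k in a}  # assumed equal weights
    return math.sqrt(sum(
        w * ((a[k] - b[k]) / max(a[k], b[k], 1e-9)) ** 2
        for k, w in weights.items()))

# Rank candidate cards by spec_distance(S4000, candidate); smallest wins.
```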
Software-stack support
| Engine | Status | BF16 | FP16 | FP4 | FP8 E4M3 | FP8 E5M2 | INT4 AWQ |
|---|---|---|---|---|---|---|---|
| HanGuangAI | unconfirmed | — | — | — | — | — | — |
| LMDeploy | unconfirmed | — | — | — | — | — | — |
| MindIE | unconfirmed | — | — | — | — | — | — |
| MoRI | unconfirmed | — | — | — | — | — | — |
| SGLang | unconfirmed | — | — | — | — | — | — |
| TensorRT-LLM (Dynamo) | unconfirmed | — | — | — | — | — | — |
| vLLM | community | — | ✓ | — | — | — | — |
Computed from 1 measured deployment case for this card. The calculator uses this measured efficiency (0.46) in place of the default 0.5.
Existing deployment cases (1)
Citations
- [1] Moore Threads MTT S4000 product page — https://www.mthreads.com/product/S4000 · accessed 2026-04-28 · vendor claim
- [2] KUAE S4000 (MUSA architecture): 48 compute clusters; PCIe Gen5 x16; SMIC 7nm-class fabrication — https://www.mthreads.com/product/S4000 · accessed 2026-04-28 · community estimate