RMSNorm (norm)

Root mean square normalization; cheaper than LayerNorm and common in modern LLMs.
Formula

- FLOPs: 5 * batch * seq * hidden
- Bytes: 2 * batch * seq * hidden

Hardware fitness for this operator
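The formulas above can be sketched as code. This is a minimal NumPy illustration, not the page's own tooling: `rmsnorm` applies the operator, and `rmsnorm_cost` evaluates the FLOPs and Bytes formulas as stated above (the Bytes formula appears to already fold in the BF16 element size), giving the 2.5 FLOP/byte arithmetic intensity used in the table below.

```python
import numpy as np

def rmsnorm(x, weight, eps=1e-6):
    """Root mean square normalization over the last (hidden) axis:
    x / sqrt(mean(x^2) + eps) * weight."""
    inv_rms = 1.0 / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x * inv_rms * weight

def rmsnorm_cost(batch, seq, hidden):
    """FLOPs and Bytes per the operator-card formulas above.
    Returns (flops, bytes, arithmetic intensity in FLOP/byte)."""
    flops = 5 * batch * seq * hidden   # square, reduce, rsqrt-scale, weight
    byts = 2 * batch * seq * hidden    # as given by the card's Bytes formula
    return flops, byts, flops / byts
```

For any shape, the intensity comes out to 5/2 = 2.5 FLOP/byte, matching the per-token figures quoted for the representative models below.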
Based on this operator's figures for the representative model Llama 4 Scout (FLOPs/token = 2.46e+6, Bytes/token = 9.83e+5, arithmetic intensity = 2.5 FLOP/byte). Each row shows the hardware's ridge point at BF16 precision, the bottleneck type, and the achievable throughput (TFLOP/s) for this operator.
| Hardware | Ridge point (FLOP/byte) | Bottleneck | Achievable TFLOP/s | Peak utilization |
|---|---|---|---|---|
| Cerebras WSE-3 peak BF16 62500 TF · BW 21000 TB/s | 3.0 | mem-bw | 52500 | 84% |
| Groq LPU (TSP v1) peak BF16 188 TF · BW 80 TB/s | 2.4 | compute | 188 | 100% |
| NVIDIA R200 SXM (Vera Rubin) peak BF16 7500 TF · BW 13 TB/s | 576.9 | mem-bw | 33 | 0% |
| AMD Instinct MI355X peak BF16 2300 TF · BW 8 TB/s | 287.5 | mem-bw | 20 | 1% |
| NVIDIA B200 SXM 180GB peak BF16 2250 TF · BW 8 TB/s | 281.3 | mem-bw | 20 | 1% |
| NVIDIA B300 SXM 288GB peak BF16 3750 TF · BW 8 TB/s | 468.8 | mem-bw | 20 | 1% |
| NVIDIA GB200 NVL72 peak BF16 2250 TF · BW 8 TB/s | 281.3 | mem-bw | 20 | 1% |
| NVIDIA GB300 NVL72 peak BF16 3750 TF · BW 8 TB/s | 468.8 | mem-bw | 20 | 1% |
| Ascend 950 🇨🇳 peak BF16 1500 TF · BW 6.4 TB/s | 234.4 | mem-bw | 16 | 1% |
| SambaNova SN40L peak BF16 638 TF · BW 6.4 TB/s | 99.7 | mem-bw | 16 | 3% |
| AMD Instinct MI325X peak BF16 1307 TF · BW 6 TB/s | 217.8 | mem-bw | 15 | 1% |
| Etched Sohu peak BF16 1125 TF · BW 5.76 TB/s | 195.3 | mem-bw | 14 | 1% |
Showing top 12 (by achievable TFLOP/s, descending) · 38 comparable accelerators in total
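The table's columns follow from a simple roofline model. As a sketch (my own illustration, not the page's calculator): the ridge point is peak compute divided by memory bandwidth; an operator whose arithmetic intensity falls below it is memory-bandwidth bound and achieves intensity × bandwidth, otherwise it is compute bound and achieves peak.

```python
def roofline(peak_tflops, bw_tbps, intensity):
    """Simple roofline model.
    peak_tflops: peak BF16 throughput (TFLOP/s)
    bw_tbps:     memory bandwidth (TB/s)
    intensity:   operator arithmetic intensity (FLOP/byte)
    Returns (ridge point, bottleneck, achievable TFLOP/s)."""
    ridge = peak_tflops / bw_tbps  # intensity where compute and memory roofs meet
    if intensity < ridge:
        return ridge, "mem-bw", intensity * bw_tbps
    return ridge, "compute", peak_tflops

# Reproducing two rows above at this operator's 2.5 FLOP/byte:
# Cerebras WSE-3: roofline(62500, 21000, 2.5) -> ridge ~3.0, mem-bw, 52500
# Groq LPU:       roofline(188, 80, 2.5)      -> ridge ~2.4, compute, 188
```

Peak utilization is then just achievable divided by peak, which is why the high-ridge-point GPUs sit at ~1% on this low-intensity operator.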
Models using this operator (18)
- DeepSeek V4 Flash (FLOPs/token: 1.31e+6 · Bytes/token: 5.24e+5)
- DeepSeek V4 Pro (FLOPs/token: 5.00e+6 · Bytes/token: 1.00e+6)
- Kimi K2.6 (FLOPs/token: 4.30e+6 · Bytes/token: 1.72e+6)
- MiniMax M2.7 (FLOPs/token: 4.92e+6 · Bytes/token: 1.97e+6)
- GLM-5.1 (FLOPs/token: 3.93e+6 · Bytes/token: 1.57e+6)
- Qwen3.6 Plus (FLOPs/token: 3.93e+6 · Bytes/token: 1.57e+6)
- Mistral Small 4 (FLOPs/token: 2.05e+6 · Bytes/token: 8.19e+5)
- GLM-5 Reasoning (FLOPs/token: 3.07e+6 · Bytes/token: 1.23e+6)
- Qwen3.5 397B Reasoning (FLOPs/token: 3.28e+6 · Bytes/token: 1.31e+6)
- Gemma 4 26B (FLOPs/token: 1.31e+6 · Bytes/token: 5.24e+5)
- Mistral Large 3 (FLOPs/token: 1.47e+7 · Bytes/token: 1.97e+6)
- GPT-OSS (FLOPs/token: 1.04e+6 · Bytes/token: 4.15e+5)
- Llama 4 Maverick (FLOPs/token: 2.46e+6 · Bytes/token: 9.83e+5)
- Llama 4 Scout (FLOPs/token: 2.46e+6 · Bytes/token: 9.83e+5)
- DeepSeek R1 (FLOPs/token: 4.37e+6 · Bytes/token: 1.75e+6)
- Llama 3.3 70B Instruct (FLOPs/token: 8.40e+6 · Bytes/token: 1.31e+6)
- Qwen2.5-Coder 32B Instruct (FLOPs/token: 6.55e+6 · Bytes/token: 8.19e+5)
- AlphaFold 3 (FLOPs/token: 8.00e+6 · Bytes/token: 4.00e+6)