Multi-Head Attention
Tags: attention, flash-attention-2, flash-attention-3, mla, csa, hca
Standard multi-head attention; FLOPs and bytes are per layer per token
Formulas

FLOPs

4 * batch * seq * hidden^2 + 2 * batch * heads * seq^2 * head_dim

Bytes

2 * batch * seq * hidden * (1 + 2/heads)
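A minimal sketch of how the two formulas above turn into per-token numbers and arithmetic intensity. The dimension values are illustrative placeholders, not taken from any particular model card:

```python
# Sketch of the MHA cost formulas above; batch/seq/hidden/heads/head_dim
# are assumed placeholder values, not real model parameters.
batch, seq = 1, 4096
hidden, heads, head_dim = 4096, 32, 128

# FLOPs and Bytes exactly as given in the formulas above
flops = 4 * batch * seq * hidden**2 + 2 * batch * heads * seq**2 * head_dim
bytes_moved = 2 * batch * seq * hidden * (1 + 2 / heads)

# Divide out batch * seq to get per-token figures
flops_per_token = flops / (batch * seq)
bytes_per_token = bytes_moved / (batch * seq)

print(f"FLOPs/token: {flops_per_token:.3e}")
print(f"Bytes/token: {bytes_per_token:.3e}")
print(f"Arithmetic intensity: {flops_per_token / bytes_per_token:.2f} FLOP/byte")
```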
Hardware fitness for this operator

Based on this operator's parameters for the representative model Llama 4 Scout (FLOPs/token = 7.05e+9, Bytes/token = 1.09e+10, arithmetic intensity = 0.6 FLOP/byte). Each row shows the hardware's ridge point at BF16 precision, the bottleneck type, and the achievable throughput for this operator (TFLOP/s).
| Hardware | Peak BF16 (TFLOP/s) | BW (TB/s) | Ridge point (FLOP/byte) | Bottleneck | Achievable TFLOP/s | Peak utilization |
|---|---|---|---|---|---|---|
| Cerebras WSE-3 | 62500 | 21000 | 3.0 | Memory bandwidth (mem-bw) | 13611 | 22% |
| Groq LPU (TSP v1) | 188 | 80 | 2.4 | Memory bandwidth (mem-bw) | 52 | 28% |
| NVIDIA R200 SXM (Vera Rubin) | 7500 | 13 | 576.9 | Memory bandwidth (mem-bw) | 8 | 0% |
| AMD Instinct MI355X | 2300 | 8 | 287.5 | Memory bandwidth (mem-bw) | 5 | 0% |
| NVIDIA B200 SXM 180GB | 2250 | 8 | 281.3 | Memory bandwidth (mem-bw) | 5 | 0% |
| NVIDIA B300 SXM 288GB | 3750 | 8 | 468.8 | Memory bandwidth (mem-bw) | 5 | 0% |
| NVIDIA GB200 NVL72 | 2250 | 8 | 281.3 | Memory bandwidth (mem-bw) | 5 | 0% |
| NVIDIA GB300 NVL72 | 3750 | 8 | 468.8 | Memory bandwidth (mem-bw) | 5 | 0% |
| Ascend 950 🇨🇳 | 1500 | 6.4 | 234.4 | Memory bandwidth (mem-bw) | 4 | 0% |
| SambaNova SN40L | 638 | 6.4 | 99.7 | Memory bandwidth (mem-bw) | 4 | 1% |
| AMD Instinct MI325X | 1307 | 6 | 217.8 | Memory bandwidth (mem-bw) | 4 | 0% |
| Etched Sohu | 1125 | 5.76 | 195.3 | Memory bandwidth (mem-bw) | 4 | 0% |
Showing top 12 (sorted by achievable TFLOP/s, descending) · 38 hardware entries comparable in total. A sketch of the roofline calculation behind these columns follows below.
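The ridge point, bottleneck, and achievable-throughput columns follow from a standard roofline calculation. A small sketch, using the peak BF16 throughput and memory bandwidth quoted in a few of the table rows and the Llama 4 Scout intensity quoted above the table:

```python
# Roofline sketch for the table above. Peak numbers are copied from the
# table rows; the operator intensity comes from the Llama 4 Scout figures
# quoted above (7.05e+9 FLOPs/token over 1.09e+10 Bytes/token).
OP_INTENSITY = 7.05e9 / 1.09e10  # ~0.65 FLOP/byte

hardware = {
    # name: (peak BF16 TFLOP/s, memory bandwidth TB/s)
    "Cerebras WSE-3": (62500, 21000),
    "Groq LPU (TSP v1)": (188, 80),
    "NVIDIA B200 SXM 180GB": (2250, 8),
}

for name, (peak_tflops, bw_tbps) in hardware.items():
    ridge = peak_tflops / bw_tbps  # FLOP/byte where compute and memory balance
    bottleneck = "mem-bw" if OP_INTENSITY < ridge else "compute"
    achievable = min(peak_tflops, bw_tbps * OP_INTENSITY)  # TB/s * FLOP/byte = TFLOP/s
    utilization = achievable / peak_tflops
    print(f"{name}: ridge {ridge:.1f}, {bottleneck}, "
          f"{achievable:.0f} TFLOP/s ({utilization:.0%} of peak)")
```

Because this operator's intensity (~0.6 FLOP/byte) sits far below every ridge point in the table, achievable throughput is set by memory bandwidth alone, which is why the peak-utilization column is near zero for the HBM-class parts.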
Models using this operator (17)
- DeepSeek V4 Flash · FLOPs/token: 3.22e+9 · Bytes/token: 4.83e+9
- DeepSeek V4 Pro · FLOPs/token: 1.20e+9 · Bytes/token: 4.50e+6
- Kimi K2.6 · FLOPs/token: 1.59e+10 · Bytes/token: 2.55e+10
- MiniMax M2.7 · FLOPs/token: 1.61e+10 · Bytes/token: 2.52e+10
- GLM-5.1 · FLOPs/token: 1.29e+10 · Bytes/token: 2.04e+10
- Qwen3.6 Plus · FLOPs/token: 1.29e+10 · Bytes/token: 2.01e+10
- Mistral Small 4 · FLOPs/token: 5.87e+9 · Bytes/token: 9.23e+9
- GLM-5 Reasoning · FLOPs/token: 8.81e+9 · Bytes/token: 1.36e+10
- Qwen3.5 397B Reasoning · FLOPs/token: 9.40e+9 · Bytes/token: 1.45e+10
- Gemma 4 26B · FLOPs/token: 3.22e+9 · Bytes/token: 4.83e+9
- Mistral Large 3 · FLOPs/token: 1.25e+10 · Bytes/token: 1.25e+10
- GPT-OSS · FLOPs/token: 2.04e+9 · Bytes/token: 2.81e+9
- Llama 4 Maverick · FLOPs/token: 7.05e+9 · Bytes/token: 1.09e+10
- Llama 4 Scout · FLOPs/token: 7.05e+9 · Bytes/token: 1.09e+10
- DeepSeek R1 · FLOPs/token: 1.61e+10 · Bytes/token: 3.22e+10
- Llama 3.3 70B Instruct · FLOPs/token: 7.50e+9 · Bytes/token: 7.50e+9
- Qwen2.5-Coder 32B Instruct · FLOPs/token: 3.50e+9 · Bytes/token: 3.50e+9
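Each model's arithmetic intensity for this operator is simply the ratio of its two per-token numbers; a short sketch using a few rows copied from the list above:

```python
# Arithmetic intensity for a few models, with FLOPs/token and Bytes/token
# copied verbatim from the list above.
models = {
    "Llama 4 Scout": (7.05e9, 1.09e10),
    "DeepSeek R1": (1.61e10, 3.22e10),
    "Llama 3.3 70B Instruct": (7.50e9, 7.50e9),
}

for name, (flops_per_token, bytes_per_token) in models.items():
    print(f"{name}: {flops_per_token / bytes_per_token:.2f} FLOP/byte")
```

All of these land between roughly 0.5 and 1.0 FLOP/byte, well under the ridge points in the hardware table, consistent with the memory-bandwidth bottleneck shown there.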