SiLU / Swish
activation
Sigmoid-weighted Linear Unit; default activation in modern LLMs (Llama, Qwen)
Formula
silu(x) = x * sigmoid(x)
FLOPs
4 * batch * seq * hidden
Bytes
2 * batch * seq * hidden
Hardware fitness for this operator
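The definition and cost model above can be sketched in Python. This is a minimal sketch, not a production kernel; the 4-FLOPs-per-element and 2-bytes-per-element accounting simply follows the formulas above, and the helper names are illustrative:

```python
import math

def silu(x: float) -> float:
    # SiLU / Swish: x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def silu_cost(batch: int, seq: int, hidden: int):
    """Per the cost formulas above: 4 FLOPs and 2 bytes per activation element."""
    flops = 4 * batch * seq * hidden
    bytes_moved = 2 * batch * seq * hidden
    intensity = flops / bytes_moved  # arithmetic intensity in FLOP/byte
    return flops, bytes_moved, intensity
```

Note that the arithmetic intensity is a constant 2.0 FLOP/byte regardless of shape, which is why the table below reports the same intensity for any batch or sequence length.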
Based on this operator's parameters for the representative model Mistral Large 3 (FLOPs/token = 2.53e+8, Bytes/token = 1.26e+8, arithmetic intensity = 2.0 FLOP/byte). Each row shows that hardware's ridge point at BF16 precision, the bottleneck type, and the achievable throughput (TFLOP/s) for this operator.
| Hardware | Ridge point | Bottleneck | Achievable TFLOP/s | Peak utilization |
|---|---|---|---|---|
| Cerebras WSE-3 peak BF16 62500 TF · BW 21000 TB/s | 3.0 | mem-bw | 42000 | 67% |
| Groq LPU (TSP v1) peak BF16 188 TF · BW 80 TB/s | 2.4 | mem-bw | 160 | 85% |
| NVIDIA R200 SXM (Vera Rubin) peak BF16 7500 TF · BW 13 TB/s | 576.9 | mem-bw | 26 | 0% |
| AMD Instinct MI355X peak BF16 2300 TF · BW 8 TB/s | 287.5 | mem-bw | 16 | 1% |
| NVIDIA B200 SXM 180GB peak BF16 2250 TF · BW 8 TB/s | 281.3 | mem-bw | 16 | 1% |
| NVIDIA B300 SXM 288GB peak BF16 3750 TF · BW 8 TB/s | 468.8 | mem-bw | 16 | 0% |
| NVIDIA GB200 NVL72 peak BF16 2250 TF · BW 8 TB/s | 281.3 | mem-bw | 16 | 1% |
| NVIDIA GB300 NVL72 peak BF16 3750 TF · BW 8 TB/s | 468.8 | mem-bw | 16 | 0% |
| Ascend 950 🇨🇳 peak BF16 1500 TF · BW 6.4 TB/s | 234.4 | mem-bw | 13 | 1% |
| SambaNova SN40L peak BF16 638 TF · BW 6.4 TB/s | 99.7 | mem-bw | 13 | 2% |
| AMD Instinct MI325X peak BF16 1307 TF · BW 6 TB/s | 217.8 | mem-bw | 12 | 1% |
| Etched Sohu peak BF16 1125 TF · BW 5.76 TB/s | 195.3 | mem-bw | 12 | 1% |
Showing top 12 (sorted by achievable TFLOP/s, descending) · 38 comparable devices in total
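Every column in the table follows the standard roofline model: an operator whose arithmetic intensity falls below a device's ridge point (peak / bandwidth) is bandwidth-bound, and its throughput caps at bandwidth × intensity. A sketch reproducing the Groq LPU row from its peak and bandwidth values (the function name is illustrative):

```python
def roofline(peak_tflops: float, bw_tbs: float, intensity: float):
    """Roofline model: achievable = min(peak, bandwidth * arithmetic intensity)."""
    ridge = peak_tflops / bw_tbs  # ridge point in FLOP/byte
    achievable = min(peak_tflops, bw_tbs * intensity)
    bottleneck = "mem-bw" if intensity < ridge else "compute"
    utilization = achievable / peak_tflops
    return ridge, bottleneck, achievable, utilization

# Groq LPU (TSP v1) row: peak 188 TF, BW 80 TB/s, operator intensity 2.0 FLOP/byte
ridge, bottleneck, achievable, util = roofline(188, 80, 2.0)
# ridge ≈ 2.4, bottleneck "mem-bw", achievable 160 TFLOP/s, utilization ≈ 85%
```

At intensity 2.0, every device in the table is bandwidth-bound, so the achievable column is simply 2 × bandwidth, and the huge-bandwidth parts (Cerebras, Groq) post the highest utilization despite far lower peak compute.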