Etched Sohu

PCIe · announced · 2025 · etched-sohu-gen1
BF16: 1125 TFLOP/s (vendor-claimed)
FP8: 2250 TFLOP/s (vendor-claimed)
FP4: 4500 TFLOP/s (vendor-claimed)
Memory: 144 GB (vendor-claimed)
Mem BW: 5760 GB/s (vendor-claimed)
TDP: 700 W (vendor-claimed)

Full specifications

Compute

FP4: 4500 TFLOPS
FP8: 2250 TFLOPS
BF16: 1125 TFLOPS
FP16: 1125 TFLOPS
INT8: 2250 TOPS

Memory

Capacity: 144 GB
Bandwidth: 5760 GB/s
Type: HBM3e
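
To give the 144 GB figure some context, here is a minimal capacity sketch: how many dense parameters fit on one card at each supported precision. The 20% reserve for KV cache, activations, and runtime overhead is an assumption for illustration, not a vendor number.

```python
# Rough weight-capacity estimate for one 144 GB Sohu card (illustrative).
# Assumption (not a vendor figure): 20% of HBM is reserved for KV cache,
# activations, and runtime overhead; dense weights only, no offload.

HBM_GB = 144
RESERVE_FRACTION = 0.20  # assumed headroom, tune to your serving setup

BYTES_PER_PARAM = {"BF16": 2.0, "FP8": 1.0, "FP4": 0.5}

usable_bytes = HBM_GB * 1e9 * (1 - RESERVE_FRACTION)

for dtype, bytes_per_param in BYTES_PER_PARAM.items():
    max_params_b = usable_bytes / bytes_per_param / 1e9
    print(f"{dtype}: ~{max_params_b:.0f}B params fit on one card")

# Prints roughly: BF16 ~58B, FP8 ~115B, FP4 ~230B -- before long-context
# KV cache or large batches eat further into the 144 GB.
```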

Chip architecture 🟢 vendor floorplan

Tile count: 144
Process node: 4 nm
PCIe: Gen 5 ×16

Scale-Up (intra-node)

Protocol: Etched-Mesh
Per-link bandwidth: 800 GB/s
World size: 8
Topology: full-mesh
Switch: —

Scale-Out (inter-node)

Per-card egress: 400 Gbps
Protocol: Ethernet-RoCE
NIC: —

Topology diagrams

Topology
8-card scale-up domain
Die-level architecture
[Die-level diagram: 144 Tiles, each with an L1$/register file and a tensor/matrix engine, shared L2 cache and NoC, HBM stacks · 1125 TFLOPS BF16 · 2250 FP8 · 144 GB HBM3e @ 5.8 TB/s · 700 W TDP]

⚠ Illustrative floorplan: compute-unit and HBM-stack counts are inferred from public BF16 / memory specs. The architecture field is not populated for this card yet. Contribute floorplan data →


Cluster topology · Etched-Mesh @ 800 GB/s
[Cluster diagram: 8 cards (GPU 0–7, 144 GB each) connected through an Etched-Mesh switch at 800 GB/s per link, all-to-all full-mesh topology · scale-out: 400 Gbps/card]
Scale-Up · intra-domain
Etched-Mesh
800 GB/s · topology: full-mesh
world_size = 8
Scale-Out · cross-domain
Ethernet-RoCE
400 Gbps/card NIC
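
For a feel of what the 800 GB/s full-mesh links buy over the 400 Gbps Ethernet path, the sketch below applies the standard bandwidth-only ring all-reduce cost model, 2·(N−1)/N · tensor bytes over the slowest link. Latency, protocol overhead, and any Etched-Mesh-specific behaviour are ignored, so treat the numbers as illustrative lower bounds rather than measured performance.

```python
# Bandwidth-only ring all-reduce estimate (latency and protocol overhead ignored).
# A ring all-reduce moves 2*(N-1)/N * tensor_bytes over each link.

def allreduce_ms(tensor_gb: float, world_size: int, link_gb_per_s: float) -> float:
    """Lower-bound all-reduce time in milliseconds for a bandwidth-bound ring."""
    traffic_gb = 2 * (world_size - 1) / world_size * tensor_gb
    return traffic_gb / link_gb_per_s * 1e3

WORLD_SIZE = 8
SCALE_UP_GBPS = 800        # Etched-Mesh per-link bandwidth, GB/s (vendor-claimed)
SCALE_OUT_GBPS = 400 / 8   # 400 Gbps NIC is roughly 50 GB/s per card

tensor_gb = 1.0            # e.g. ~0.5B parameters of BF16 activations/gradients
print(f"scale-up : {allreduce_ms(tensor_gb, WORLD_SIZE, SCALE_UP_GBPS):.1f} ms")
print(f"scale-out: {allreduce_ms(tensor_gb, WORLD_SIZE, SCALE_OUT_GBPS):.1f} ms")
# Roughly 2.2 ms inside the Etched-Mesh domain vs ~35 ms over the Ethernet path.
```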

Which models can it run?

Quick estimates · decode tok/s/card upper bound

TP=8 · FP4 · batch=16 · prefill=1024 · decode=256 · efficiency calibration applied

Adjust in the calculator →
Model · Params (active) · Decode tok/s/card · Bottleneck
DeepSeek V4 Pro (deepseek) · 49B · 117,551 · memory bandwidth
DeepSeek V4 Flash (deepseek) · 13B · 164 · memory bandwidth
Mistral Small 4 (mistral) · 22B · 75 · memory bandwidth
GLM-5 Reasoning (zhipu) · 32B · 62 · memory bandwidth
GLM-5.1 (zhipu) · 32B · 42 · memory bandwidth
Qwen3.6 Plus (alibaba) · 35B · 40 · memory bandwidth
Kimi K2.6 (moonshot) · 32B · 34 · memory bandwidth
MiniMax M2.7 (minimax) · 46B · 27 · memory bandwidth
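
The figures above come from the site's calculator, which accounts for prefill, KV cache, and batching. As a much simpler mental model, the sketch below assumes decode is purely weight-streaming-bound: every generated token reads all active weights from HBM once, spread across the TP group. It deliberately ignores KV-cache traffic and communication, so its output will not match the table; it only shows where the bandwidth ceiling sits. The 0.5 efficiency factor mirrors the default used elsewhere on this page.

```python
# Weight-streaming decode ceiling for an 8-way tensor-parallel group (illustrative).
# Assumption: each decoded token reads all active weights from HBM exactly once;
# KV-cache traffic, prefill and inter-card communication are ignored.

MEM_BW_GBS = 5760     # per-card HBM bandwidth, GB/s (vendor-claimed)
TP = 8                # tensor-parallel world size used in the table above
EFFICIENCY = 0.5      # default calibration factor used on this page

def decode_ceiling_tok_s(active_params_b: float, bytes_per_param: float) -> float:
    """Bandwidth-bound decode tokens/s for the whole TP group."""
    weight_bytes = active_params_b * 1e9 * bytes_per_param
    aggregate_bw = TP * MEM_BW_GBS * 1e9 * EFFICIENCY
    return aggregate_bw / weight_bytes

# Example: a hypothetical 22B-active model quantised to FP4 (0.5 bytes/param)
print(f"~{decode_ceiling_tok_s(22, 0.5):,.0f} tok/s ceiling across the 8-card group")
```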

Operator-level fit · bottleneck type + upper bound for any model

Operator-level fit (per-token roofline)

Computed from each model's operator_decomposition and this card's BF16 1,125 TFLOPS / 5,760 GB/s · ridge point ≈ 195 FLOPs/byte

Upper bound = min(compute roof, memory-bandwidth roof) · efficiency not applied (a worked sketch follows the table)
Model · Domain · Dominant operator · AI (FLOPs/byte) · Bottleneck · tok/s upper bound
DeepSeek V4 Pro · llm · matmul · 245.5 · 🔥 compute · 187k
GraphCast · scientific · graph-message-passing · 0.9 · 💾 memory bandwidth · 11k
AlphaFold 3 · scientific · pair-bias-attention · 2.3 · 💾 memory bandwidth · 3193
GPT-OSS · llm · matmul · 0.7 · 💾 memory bandwidth · 466
Gemma 4 26B · llm · matmul · 0.7 · 💾 memory bandwidth · 346
DeepSeek V4 Flash · llm · matmul · 0.8 · 💾 memory bandwidth · 328
Mistral Small 4 · llm · matmul · 0.6 · 💾 memory bandwidth · 149
Llama 4 Maverick · llm · matmul · 0.8 · 💾 memory bandwidth · 147
For efficiency calibration + concurrency sweep + TCO estimation → evaluate in the calculator →
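
The table above boils down to two numbers per operator: its arithmetic intensity (AI) and the card's ridge point. Below is a minimal sketch of that classification against the BF16 roof quoted above; the example AI values are illustrative inputs in the style of the table, not measurements.

```python
# Per-operator roofline classification against this card's BF16 roof.
# Ridge point = peak FLOP/s / peak bytes/s; below it an operator is
# memory-bandwidth-bound, above it compute-bound.

PEAK_FLOPS = 1125e12   # BF16 TFLOP/s (vendor-claimed)
PEAK_BW = 5760e9       # HBM bandwidth, bytes/s (vendor-claimed)
RIDGE = PEAK_FLOPS / PEAK_BW   # ~195 FLOPs/byte, matching the figure above

def attainable_tflops(ai: float) -> float:
    """Attainable throughput = min(compute roof, AI * bandwidth roof)."""
    return min(PEAK_FLOPS, ai * PEAK_BW) / 1e12

def bottleneck(ai: float) -> str:
    return "compute" if ai >= RIDGE else "memory bandwidth"

# Illustrative arithmetic intensities in the style of the table above
for name, ai in [("matmul-heavy decode", 245.5), ("bandwidth-bound matmul", 0.8)]:
    print(f"{name}: AI={ai} FLOPs/byte -> {bottleneck(ai)}, "
          f"{attainable_tflops(ai):.1f} TFLOP/s attainable")
```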

Operator support & optimization headroom


Per-operator support derived from software_support.engines + scale-up topology. Optimization headroom from measured efficiency factor.

Optimization headroom
+50 pp
moderate

No cases yet — using default 0.5 efficiency. Real headroom unknown until first measurement lands.

Communication (collective)
All-to-All 🟢 mature
all-to-all via Etched-Mesh world_size=8
AllReduce 🟢 mature
Etched-Mesh ring all-reduce
Attention
Multi-Head Attention 🟢 mature
paged-attention via vLLM/SGLang/MindIE
FlashAttention-3 🟢 mature
FA-3 on modern engines + tensor cores
Matrix multiply (GEMM)
Matrix Multiplication 🟢 mature
GEMM supported on all inference engines
MoE routing
MoE Routing 🟢 mature
MoE gating supported via vLLM ≥0.4 / SGLang
Normalization
RMSNorm 🟢 mature
fused into engine kernels
Embedding
fused into engine kernels
Activation
SiLU / Swish 🟢 mature
fused into engine kernels
Softmax 🟢 mature
fused into engine kernels

Software stack support

Engine · Status · BF16 · FP16 · FP4 · FP8 E4M3 · FP8 E5M2 · INT4 AWQ
HanGuangAI · unconfirmed
LMDeploy · unconfirmed
MindIE · unconfirmed
MoRI · unconfirmed
SGLang · unconfirmed
TensorRT-LLM (Dynamo) · unconfirmed
vLLM · community

Deployment cases (0)

No measured cases for this hardware yet. Be the first contributor?

Citations

  [1] Etched Sohu (announced June 2024): Transformer-only ASIC, claims 100,000+ tokens/sec on Llama 70B (8-card system), 144 GB HBM3e per chip. Status: announced, GA targeted late 2025. — https://www.etched.com/announcing-etched · accessed 2026-04-29 · vendor-claimed
  [2] Sohu architecture estimate: ~144 specialized Tiles optimized for transformer attention + MLP only. Cannot run non-transformer workloads (no graph ops, no conv, no MoE gate primitive without firmware extension). — https://www.semianalysis.com/p/sohu-asic-deep-dive · accessed 2026-04-29 · community estimate
⚠ Sohu is TRANSFORMER-ONLY: cannot run scientific (AlphaFold), graph (GraphCast), or vision (SAM/DINO) workloads. Domain restriction is the entire bet.
⚠ Status: announced (June 2024); GA pushed to late 2025 / 2026 — specs subject to revision at GA.
⚠ Vendor-claimed throughput numbers (100k+ tok/s on Llama 70B) imply ~10x H100 efficiency — independent verification pending.