About EvoKernel·Spec

Open-source knowledge base for AI inference deployment across hardware (incl. 9 Chinese vendors) and frontier open-source models, with a transparent calculator.

Core thesis

  • Knowledge base first, calculator second
  • 9 Chinese hardware vendors (Ascend / Cambricon / Hygon / Moore-Threads / Enflame / Biren / MetaX / Iluvatar / Pingtouge) + all major overseas accelerators (NVIDIA / AMD / Intel / AWS / Google)
  • Every number carries a 3-tier evidence tag (official / measured / estimated)
  • Code-as-data: full git repo, PR-only contribution, zero backend, fully static deployment

Data confidence model

Each numeric field is backed by an evidence_ref citing an official whitepaper, a third-party measurement, or a community estimate. The UI defaults to a 3-tier collapsed view:

  • 📄 Vendor-claimed — official whitepaper, datasheet, product page
  • Measured — third-party or community-contributed real measurements
  • ⚠️ Community-estimated — derived from public information
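As a hedged sketch of the data model (the `evidence_ref` field name follows the text above; the exact record schema, field names, and values here are assumptions for illustration), a record and a filter by confidence tier might look like:

```python
import json

# Hypothetical record shape: each numeric field pairs its value with an
# evidence_ref carrying one of the three confidence tiers described above.
RECORD = json.loads("""
{
  "vendor": "ExampleVendor",
  "chip": "X100",
  "fp16_tflops": {
    "value": 320,
    "evidence_ref": {
      "tier": "vendor-claimed",
      "source": "official datasheet (illustrative)"
    }
  },
  "decode_tokens_per_sec": {
    "value": 1850,
    "evidence_ref": {
      "tier": "measured",
      "source": "community benchmark log (illustrative)"
    }
  }
}
""")

def fields_by_tier(record: dict, tier: str) -> dict:
    """Return only the numeric fields whose evidence matches the given tier."""
    return {
        key: field["value"]
        for key, field in record.items()
        if isinstance(field, dict)
        and field.get("evidence_ref", {}).get("tier") == tier
    }

print(fields_by_tier(RECORD, "vendor-claimed"))  # {'fp16_tflops': 320}
```

A consumer can thus collapse or expand tiers client-side, matching the zero-backend, fully static design.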

License

Disclaimer

All vendor-claimed values are unverified and do not constitute investment or procurement advice. The project is independent and not affiliated with any stakeholder.

Contributing

Three contribution paths:

  1. New hardware / model: open issue → maintainer review → PR
  2. New deployment case: direct PR; must include reproduction steps, raw logs, and a personal attestation
  3. Optimization patterns: maintainer-only in V1, distilled from cases; opens to community PRs in V1.5

Contribution guide →

Open Data API

All data is exposed via static JSON endpoints (CC-BY-SA 4.0):
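No concrete endpoint URLs are listed here, so the following is only a sketch under assumed paths (the `example.org` host and `/data/hardware.json` path are hypothetical); it shows how a static JSON endpoint could be consumed with the Python standard library alone:

```python
import json
from urllib.request import urlopen  # stdlib; a static endpoint needs no SDK

# Hypothetical endpoint -- the project does not publish this exact URL.
HARDWARE_ENDPOINT = "https://example.org/data/hardware.json"

def load_hardware(raw: bytes) -> list:
    """Parse a static JSON payload into a list of hardware records.

    Accepts either a bare JSON array or an object wrapping the list
    under an assumed "hardware" key.
    """
    data = json.loads(raw)
    return data if isinstance(data, list) else data.get("hardware", [])

# In a real client:
#   with urlopen(HARDWARE_ENDPOINT) as resp:
#       records = load_hardware(resp.read())

# Offline demonstration with an inline payload:
sample = b'{"hardware": [{"vendor": "Ascend", "chip": "910B"}]}'
print(load_hardware(sample))  # [{'vendor': 'Ascend', 'chip': '910B'}]
```

Because the endpoints are plain static files, any HTTP client or even `curl` works the same way; no authentication or backend is involved.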

Acknowledgements

Inspired by SemiAnalysis InferenceX; differentiated by its coverage of the Chinese hardware ecosystem and evidence-backed transparency.