---
title: README
emoji: 💻
colorFrom: pink
colorTo: gray
sdk: static
pinned: false
license: mit
---
# Trillim
We're building local AI that runs on the hardware you already have.
Trillim builds infrastructure for running models on consumer CPUs and edge devices – no GPU required. We train and fine-tune ternary ({-1, 0, 1}) models designed to run efficiently on commodity hardware, and build the tooling to deploy them.
## What we believe
GPUs are powerful but expensive, power-hungry, and scarce. Ternary quantization changes the equation: models with {-1, 0, 1} weights don't need floating-point multipliers at all. The right software can make CPUs fast enough for real-time inference. AI should run anywhere – laptops, Raspberry Pis, edge devices – not just in datacenters.
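A minimal NumPy sketch of why this works (illustrative only – it shows the arithmetic trick, not DarkNet's hand-tuned SIMD kernels): with weights restricted to {-1, 0, 1}, a matrix–vector product reduces to sums and differences of the inputs, so no multiplies are needed.

```python
import numpy as np

def ternary_matvec(W, x):
    """Multiply a ternary weight matrix W (entries in {-1, 0, 1}) by x
    using only additions and subtractions -- no multiplications."""
    pos = (W == 1)   # mask of +1 weights
    neg = (W == -1)  # mask of -1 weights
    # Per output row: add inputs under +1 weights, subtract inputs
    # under -1 weights; zero weights contribute nothing.
    return (np.where(pos, x, 0.0).sum(axis=1)
            - np.where(neg, x, 0.0).sum(axis=1))

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))  # random ternary weights
x = rng.standard_normal(8)
assert np.allclose(ternary_matvec(W, x), W @ x)
```

Real kernels go further – packing ternary weights into a few bits each and processing many lanes per SIMD instruction – but the add/subtract structure is the core of the speedup.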
## What we're building
- **DarkNet** – our proprietary high-performance CPU inference engine purpose-built for ternary models, with hand-tuned SIMD kernels for x86 (AVX2) and ARM (NEON); more architectures are coming soon
- **Tooling** – an OpenAI-compatible API server, CLI chat interface, LoRA adapter hot-swap, and an integrated voice pipeline (STT + TTS)
- **Models** – ternary models fine-tuned and pre-quantized for efficient CPU inference, hosted here on HuggingFace. Look for the **`-TRNQ`** suffix.
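Because the API server is OpenAI-compatible, any standard OpenAI-style client should work against it. A hedged sketch of what a request could look like – the host, port, and model id below are placeholders, not confirmed defaults:

```python
import json

# Hypothetical local endpoint and model id; the actual values depend
# on how the server is launched and which model you pull.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "example-model-TRNQ",  # placeholder: any -TRNQ model id
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload)

# POST `body` to `url` with any HTTP client, e.g.:
#   curl $url -H "Content-Type: application/json" -d "$body"
```

The same payload shape works with the official `openai` Python client by pointing its `base_url` at the local server.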
## Supported model architectures
BitNet, Llama, Qwen2, Mistral
## Links
- [GitHub (public link will be added soon!)](https://github.com)