Update README.md
---
colorFrom: pink
colorTo: gray
sdk: static
pinned: false
license: mit
---

# Trillim

We're building local AI that runs on the hardware you already have.

Trillim builds infrastructure for running models on consumer CPUs and edge devices — no GPU required. We train and fine-tune ternary ({-1, 0, 1}) models designed to run efficiently on commodity hardware, and build the tooling to deploy them.

## What we believe

GPUs are powerful but expensive, power-hungry, and scarce. Ternary quantization changes the equation: models with {-1, 0, 1} weights don't need floating-point multipliers at all. The right software can make CPUs fast enough for real-time inference. AI should run anywhere — laptops, Raspberry Pis, edge devices — not just in datacenters.
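
To make the "no multipliers" point concrete, here is a toy sketch (our illustration, not Trillim's actual kernels): when every weight is -1, 0, or +1, a dot product reduces to adds, subtracts, and skips.

```python
def ternary_dot(weights, activations):
    """Dot product with weights restricted to {-1, 0, 1}: no multiplies needed."""
    total = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            total += x   # +1 weight: accumulate the activation
        elif w == -1:
            total -= x   # -1 weight: subtract the activation
        # 0 weight: skip entirely, so sparsity is free
    return total

print(ternary_dot([1, -1, 0, 1], [0.5, 2.0, 3.0, 1.5]))  # -> 0.0
```

Real implementations pack weights into two-bit codes and process many lanes per SIMD instruction, but the arithmetic stays this simple.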

## What we're building

- **DarkNet** — our proprietary high-performance CPU inference engine, purpose-built for ternary models, with hand-tuned SIMD kernels for x86 (AVX2) and ARM (NEON); support for more architectures is coming soon
- **Tooling** — an OpenAI-compatible API server, CLI chat interface, LoRA adapter hot-swap, and an integrated voice pipeline (STT + TTS)
- **Models** — ternary models fine-tuned and pre-quantized for efficient CPU inference, hosted here on HuggingFace. Look for the **`-TRNQ`** suffix.
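
Because the server is OpenAI-compatible, a standard client should be able to talk to it just by pointing at its base URL. A minimal sketch, assuming the conventional `/v1/chat/completions` route; the host, port, and model name below are placeholders, not documented Trillim values.

```python
import json
import urllib.request

def build_chat_request(model, prompt):
    """Payload in the standard chat-completions shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(base_url, payload):
    """POST the payload to an OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request("some-model-TRNQ", "Hello!")
# send("http://localhost:8080", payload)  # placeholder host/port, not a documented default
```

Any tooling that already speaks the OpenAI API (SDKs, chat UIs, agents) should work the same way against a local endpoint.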

## Supported model architectures

BitNet, Llama, Qwen2, Mistral

## Links

- [GitHub (public link will be added soon!)](https://github.com)