Update README.md
README.md
---
license: mit
organization_profile: true
---

# 🏛️ Celeste Imperia | High-Efficiency AI Weights

**The official repository for local-first, hardware-optimized AI models.** ⚡

This organization is dedicated to making advanced AI accessible on consumer hardware. We specialize in porting heavy encoders and LLMs to run on **NPUs** and **ARM** architectures.
### 📦 What you’ll find here:

* **NPU-Optimized Encoders:** CLIP and T5 variants converted for **Intel OpenVINO** and the **Qualcomm AI Stack**.
* **Consistent Character LoRAs:** High-fidelity character models trained to keep a character's appearance consistent across frames.
* **Edge-Ready LLMs:** Quantized and ported models tuned for local CPU/NPU inference.
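The "quantized" in the last bullet can be illustrated with a minimal sketch of symmetric per-tensor int8 weight quantization — the simplest scheme behind most edge-ready model conversions. This is an illustrative toy, not this organization's actual conversion pipeline, and the function names are made up for the example:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest |weight| to ±127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-to-nearest keeps the reconstruction error within half a step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Storing `q` plus one `scale` per tensor cuts weight memory roughly 4× versus float32, which is what makes CPU/NPU-local inference practical; real pipelines typically refine this with per-channel scales and calibration data.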
### 🛠️ Hardware Focus

Our models are tested and optimized on local workstations (e.g. an RTX A4000) to ensure they run for creators, not just in data centers.
---

📫 **Inquiries:** [celesteimperia@gmail.com](mailto:celesteimperia@gmail.com)

"Forging the future of Edge AI, one model at a time."