---
license: mit
organization_profile: true
---

# 🏛️ Celeste Imperia | High-Efficiency AI Weights

**The official repository for local-first, hardware-optimized AI models.** ⚡

This organization is dedicated to making advanced AI accessible on consumer hardware. We specialize in porting heavy encoders and LLMs to run on **NPUs** and **ARM** architectures.

### 📦 What you'll find here

* **NPU-Optimized Encoders:** CLIP and T5 variants converted for **Intel OpenVINO** and the **Qualcomm AI Stack**.
* **Consistent-Character LoRAs:** High-fidelity character models trained for persistent character identity across frames.
* **Edge-Ready LLMs:** Quantized and ported models tuned specifically for local CPU/NPU inference.

### 🛠️ Hardware Focus

Our models are tested and optimized on local rigs (RTX A4000) to ensure they work for creators, not just data centers.

---

📫 **Inquiries:** [celesteimperia@gmail.com](mailto:celesteimperia@gmail.com)

*"Forging the future of Edge AI, one model at a time."*