๐Ÿ›๏ธ Celeste Imperia | High-Efficiency AI Weights

The official repository for local-first, hardware-optimized AI models. โšก

This organization is dedicated to making advanced AI accessible on consumer hardware. We specialize in porting heavy encoders and LLMs to run on NPUs and ARM architectures.

📦 What you'll find here:

  • NPU-Optimized Encoders: CLIP and T5 variants converted for Intel OpenVINO and Qualcomm AI Stack.
  • Consistent Character LoRAs: High-fidelity character models trained to keep a character's appearance consistent across frames.
  • Edge-Ready LLMs: Quantized and ported models specifically tuned for local CPU/NPU inference.

๐Ÿ› ๏ธ Hardware Focus

Our models are tested and optimized on local rigs (RTX A4000) to ensure they work for creators, not just data centers.


📫 Inquiries: celesteimperia@gmail.com

"Forging the future of Edge AI, one model at a time."
