---
title: Celeste Imperia | Hardware-Aware AI Forge
emoji: 🌌
colorFrom: blue
colorTo: purple
sdk: static
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/697f74cb005a67fc114da2b4/OCwnTMS-RIt9cPQR0Gnxf.png
license: apache-2.0
tags:
  - edge-ai
  - openvino
  - whisper
  - sdxl
  - quantization
  - windows-on-arm
---

# Celeste Imperia | Hardware-Aware AI Forge

**Bridging the gap between frontier AI architectures and consumer silicon.**

Celeste Imperia specializes in the precision optimization and deployment of Large Language Models (LLMs), Vision-Language Models (VLMs), and speech architectures for edge hardware. We provide high-fidelity, hardware-validated weights optimized for private, low-latency execution.

---

## 🛠️ The Development Forge

All models are validated on our standard consumer-grade benchmark rig to ensure "masses-ready" stability:

- **Processor:** Intel Core i5-11400
- **Compute:** NVIDIA RTX A4000 (16 GB VRAM)
- **Memory:** 40 GB RAM
- **Validation:** Every port undergoes strict logic-consistency and vision-token alignment checks.

---

## 📂 Active Repositories

### 🧠 Language Reasoning (GGUF Trinity)

Optimized for the `llama.cpp` ecosystem, providing **Master (FP16)**, **Pro (Q8_0)**, and **Mobile (Q4_K_M)** weights.

* **[Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/CelesteImperia/Llama-3.2-1B-Instruct-GGUF)** - Meta's industry standard for edge reasoning.
* **[Phi-3.5-mini-instruct-GGUF](https://huggingface.co/CelesteImperia/Phi-3.5-mini-instruct-GGUF)** - Microsoft's 128k-context logic powerhouse.

### 🎙️ Speech & Audio (OpenVINO Optimized)

High-speed transcription and translation optimized for Intel CPUs/NPUs.

* **[Whisper-Large-V3-Turbo-OpenVINO](https://huggingface.co/CelesteImperia/whisper-large-v3-turbo-openvino)** - **[LATEST]** Near real-time transcription with 8-bit quantization for Intel hardware.
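The GGUF Trinity tiers above trade fidelity for footprint. As a rough back-of-envelope (this sketch is illustrative, not part of any published tooling: the bits-per-weight figures are approximate `llama.cpp` averages, and real GGUF files also carry metadata and mixed tensor types), the weight-file size for a model like Llama-3.2-1B (~1.24B parameters) can be estimated as:

```python
# Back-of-envelope GGUF weight-file sizes for a ~1.24B-parameter model.
# Bits-per-weight values are approximate llama.cpp averages, for illustration only.
BITS_PER_WEIGHT = {
    "Master (FP16)": 16.0,
    "Pro (Q8_0)": 8.5,       # 8-bit values plus one FP16 scale per 32-weight block
    "Mobile (Q4_K_M)": 4.85, # mixed 4/6-bit "K-quant" blocks, on average
}

def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Estimated weight size in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

for tier, bpw in BITS_PER_WEIGHT.items():
    print(f"{tier:>16}: ~{est_size_gb(1.24e9, bpw):.2f} GB")
```

The estimate explains why the Mobile tier is the usual pick for phones and thin laptops: roughly a 3x smaller footprint than Master at a modest fidelity cost.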
### 🎨 Generative Vision (Diffusion)

* **[SDXL-OpenVINO-Trinity](https://huggingface.co/CelesteImperia/celeste-imperia-sdxl-openvino)** - **[LATEST]** Full 4-step generation with fused LCM and TinyVAE. Optimized for i5/i7 hardware.
* **[Qwen2-VL-2B-Instruct-OpenVINO-INT4](https://huggingface.co/CelesteImperia/Qwen2-VL-2B-Instruct-OpenVINO-INT4)** - State-of-the-art vision-language reasoning for OCR and scene analysis.

---

## 🚀 Optimization Matrix

| Platform Target | Optimization Backend | Architecture Focus |
| :--- | :--- | :--- |
| **Snapdragon X Elite** | GGUF / QNN / ONNX | ARM64 high-speed inference |
| **Intel Core Ultra / Arc** | OpenVINO / NPU | Low-power background execution |
| **Edge CPUs (Mobile/Linux)** | GGUF (INT4/INT8) | Resource-constrained "Agent" logic |

---

**New here?** Check out our **[Getting Started Guide](https://huggingface.co/CelesteImperia/SDXL-Base)** to find the right model for your CPU!

**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)
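Choosing between the quantization tiers mostly comes down to free memory. A minimal sketch of that decision, assuming the approximate `llama.cpp` bits-per-weight averages and a flat headroom allowance for the KV cache and runtime (the helper name and headroom default are hypothetical, not part of any published API):

```python
# Hypothetical tier picker: return the highest-fidelity GGUF tier whose
# estimated weight file fits the RAM budget, leaving headroom for the
# KV cache and runtime. Bits-per-weight figures are approximate averages.
TIERS = [  # ordered highest-fidelity first
    ("Master (FP16)", 16.0),
    ("Pro (Q8_0)", 8.5),
    ("Mobile (Q4_K_M)", 4.85),
]

def pick_tier(n_params: float, free_ram_gb: float, headroom_gb: float = 2.0) -> str:
    """Return the first tier whose estimated weight size fits the budget."""
    budget_gb = free_ram_gb - headroom_gb
    for name, bits_per_weight in TIERS:
        if n_params * bits_per_weight / 8 / 1e9 <= budget_gb:
            return name
    raise ValueError("No tier fits; consider a smaller model.")

print(pick_tier(1.24e9, free_ram_gb=8.0))  # ~1.24B model on an 8 GB machine
print(pick_tier(3.8e9, free_ram_gb=8.0))   # a Phi-3.5-mini-sized (~3.8B) model
```

On an 8 GB machine the sketch keeps a 1B model at full FP16 but drops a ~3.8B model to Q8_0, which matches the intent of the Master/Pro/Mobile split.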