---
title: Celeste Imperia | Hardware-Aware AI Forge
emoji: 🌌
colorFrom: blue
colorTo: purple
sdk: static
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/697f74cb005a67fc114da2b4/OCwnTMS-RIt9cPQR0Gnxf.png
license: apache-2.0
tags:
  - edge-ai
  - openvino
  - whisper
  - sdxl
  - quantization
  - windows-on-arm
---

# Celeste Imperia | Hardware-Aware AI Forge

*Bridging the gap between frontier AI architectures and consumer silicon.*

Celeste Imperia specializes in the precision optimization and deployment of Large Language Models (LLMs), Multimodal Vision-Language Models (VLMs), and speech architectures for edge hardware. We provide high-fidelity, hardware-validated weights optimized for private, low-latency, fully local execution.


πŸ› οΈ The Development Forge

All models are validated on our standard consumer-grade benchmark rig to ensure stability on mainstream hardware:

- **Processor:** Intel Core i5-11400
- **Compute:** NVIDIA RTX A4000 (16 GB VRAM)
- **Memory:** 40 GB RAM
- **Validation:** Every port undergoes strict logic-consistency and vision-token alignment checks.

## 📂 Active Repositories

### 🧠 Language Reasoning (GGUF Trinity)

Optimized for the llama.cpp ecosystem, providing **Master** (FP16), **Pro** (Q8_0), and **Mobile** (Q4_K_M) weights.
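To see what these three tiers mean in practice, here is a minimal sketch that estimates on-disk/VRAM weight size per tier. The bits-per-weight figures are approximate llama.cpp averages (Q8_0 ≈ 8.5 bpw, Q4_K_M ≈ 4.85 bpw; actual files vary by tensor layout), so treat the numbers as rough guidance, not exact sizes:

```python
# Approximate average bits-per-weight for each GGUF tier (llama.cpp ballpark
# figures; real files vary slightly because some tensors stay higher-precision).
TIER_BPW = {
    "Master (FP16)": 16.0,
    "Pro (Q8_0)": 8.5,
    "Mobile (Q4_K_M)": 4.85,
}

def estimate_gib(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough weight size in GiB for a model with the given parameter count."""
    total_bits = n_params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 2**30

# Example: a 7B model across the three tiers.
for tier, bpw in TIER_BPW.items():
    print(f"{tier}: ~{estimate_gib(7, bpw):.1f} GiB")
```

A quick way to check whether a given tier fits in your RAM/VRAM budget before downloading; remember to leave headroom for the KV cache and activations on top of the weights.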

πŸŽ™οΈ Speech & Audio (OpenVINO Optimized)

High-speed transcription and translation optimized for Intel CPUs/NPUs.

### 🎨 Generative Vision (Diffusion)


## 🚀 Optimization Matrix

| Platform Target | Optimization Backend | Architecture Focus |
| --- | --- | --- |
| Snapdragon X Elite | GGUF / QNN / ONNX | ARM64 high-speed inference |
| Intel Core Ultra / Arc | OpenVINO / NPU | Low-power background execution |
| Edge CPUs (Mobile/Linux) | GGUF (INT4/INT8) | Resource-constrained "Agent" logic |
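If you script your model selection, the matrix above can be encoded as a small lookup. The platform keys and helper below are illustrative names, not part of any published API:

```python
# Platform-to-backend lookup mirroring the Optimization Matrix above.
# Keys are hypothetical identifiers chosen for this sketch.
BACKENDS = {
    "snapdragon-x-elite": "GGUF / QNN / ONNX",
    "intel-core-ultra": "OpenVINO / NPU",
    "edge-cpu": "GGUF (INT4/INT8)",
}

def pick_backend(platform: str) -> str:
    """Return the recommended backend string for a known platform key."""
    try:
        return BACKENDS[platform]
    except KeyError:
        raise ValueError(f"unknown platform: {platform!r}") from None

print(pick_backend("intel-core-ultra"))  # OpenVINO / NPU
```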

**New here?** Check out our Getting Started Guide to find the right model for your CPU.

Connect with the architect: Abhishek Jaiswal on LinkedIn