AI & ML interests
Specializing in local-hardware optimization (OpenVINO/Qualcomm) and consistent character synthesis.
Celeste Imperia | Hardware-Aware AI Forge
Bridging the gap between frontier AI architectures and consumer silicon.
Celeste Imperia specializes in the precision optimization and deployment of large language models (LLMs), vision-language models (VLMs), and speech architectures for edge hardware. We provide high-fidelity, hardware-validated weights optimized for private, low-latency local execution.
🛠️ The Development Forge
All models are validated on our standard consumer-grade benchmark rig to ensure stability on mainstream hardware:
- Processor: Intel Core i5-11400
- Compute: NVIDIA RTX A4000 (16GB VRAM)
- Memory: 40GB RAM
- Validation: Every port undergoes strict logic consistency and vision-token alignment checks.
📂 Active Repositories
🧠 Language Reasoning (GGUF Trinity)
Optimized for the llama.cpp ecosystem, providing Master (FP16), Pro (Q8_0), and Mobile (Q4_K_M) weights.
- Llama-3.2-1B-Instruct-GGUF - Meta's industry standard for edge reasoning.
- Phi-3.5-mini-instruct-GGUF - Microsoft’s 128k context logic powerhouse.
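The three tiers trade quality for footprint. As a rough guide, the bits-per-weight figures below are approximations for llama.cpp quantization formats (Q8_0 stores 8-bit weights plus per-block scales, and Q4_K_M averages just under 5 bits per weight); the 1.24B parameter count is Llama-3.2-1B's published size. This sketch estimates the weight footprint of each tier:

```python
# Approximate weight-memory footprint for the three GGUF tiers.
# Bits-per-weight values are approximations for llama.cpp quant formats.
BITS_PER_WEIGHT = {
    "Master (FP16)": 16.0,
    "Pro (Q8_0)": 8.5,       # 8-bit weights + per-block scales
    "Mobile (Q4_K_M)": 4.85,  # mixed 4/6-bit "K" quantization, ~4.85 bpw
}

def gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Estimate on-disk / in-RAM weight size in GiB."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

for tier, bpw in BITS_PER_WEIGHT.items():
    # Llama-3.2-1B has ~1.24B parameters
    print(f"{tier}: ~{gguf_size_gib(1.24e9, bpw):.2f} GiB")
```

On the 40 GB rig above, even the FP16 master weights of a 1B-class model fit comfortably in RAM; the Mobile tier exists for phones and low-memory edge boxes.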
🎙️ Speech & Audio (OpenVINO Optimized)
High-speed transcription and translation optimized for Intel CPUs/NPUs.
- Whisper-Large-V3-Turbo-OpenVINO - [LATEST] Near real-time transcription with 8-bit quantization for Intel hardware.
🎨 Generative Vision (Diffusion)
- SDXL-OpenVINO-Trinity - [LATEST] Full 4-step generation with fused LCM and TinyVAE. Optimized for i5/i7 hardware.
- Qwen2-VL-2B-Instruct-OpenVINO-INT4 - State-of-the-art vision-language reasoning for OCR and scene analysis.
🚀 Optimization Matrix
| Platform Target | Optimization Backend | Architecture Focus |
|---|---|---|
| Snapdragon X Elite | GGUF / QNN / ONNX | ARM64 high-speed inference |
| Intel Core Ultra / Arc | OpenVINO / NPU | Low-power background execution |
| Edge CPUs (Mobile/Linux) | GGUF (INT4/INT8) | Agent-style logic on resource-constrained devices |
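The matrix above can be read as a simple dispatch table. A minimal sketch, assuming illustrative platform keys and treating GGUF as the portable fallback (it runs on plain CPUs when no accelerator backend applies):

```python
# Illustrative backend selection based on the optimization matrix.
# Platform keys and backend names are assumptions for this sketch,
# not an API of any listed repository.
PLATFORM_BACKENDS = {
    "snapdragon-x-elite": ["QNN", "ONNX", "GGUF"],  # ARM64 high-speed inference
    "intel-core-ultra": ["OpenVINO"],               # NPU, low-power background work
    "edge-cpu": ["GGUF"],                           # INT4/INT8 on constrained devices
}

def pick_backend(platform: str) -> str:
    """Return the preferred backend for a platform; fall back to GGUF."""
    return PLATFORM_BACKENDS.get(platform, ["GGUF"])[0]

print(pick_backend("intel-core-ultra"))  # OpenVINO
print(pick_backend("raspberry-pi"))      # GGUF (fallback)
```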
New here? Check out our Getting Started Guide to find the right model for your hardware!
Connect with the architect: Abhishek Jaiswal on LinkedIn