# CelesteImperia: SDXL OpenVINO (Intel Optimized)

This repository contains the Vedic Cinematic engine optimized for the Intel AI PC ecosystem. It is designed to run on everything from local Iris Xe graphics to the latest Intel AI Boost NPUs.
## Performance Highlights
- Verified benchmark: 6:46 for a 1024x1024 generation on standard consumer hardware.
- Precision: Dual-Tier (FP16 for fidelity, INT8 for NPU-native speed).
- Architecture: Fully compatible with OpenVINO 2024.x+.
## Components
- `unet/`: INT8-quantized (NNCF-calibrated) and FP16 master weights.
- `tinyvae/`: OpenVINO IR format for instant image previews.
- `text_encoder/`: optimized for low-latency prompt parsing.
## Quick Start
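If the dependencies are not already installed, the pipeline below requires Optimum with the OpenVINO extra (package name per the optimum-intel documentation; pinning a version is left to you):

```shell
pip install "optimum[openvino]"
```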
```python
from optimum.intel.openvino import OVStableDiffusionXLPipeline

pipe = OVStableDiffusionXLPipeline.from_pretrained("CelesteImperia/SDXL-OpenVINO")
image = pipe("a cinematic temple at dawn").images[0]  # example prompt
```
## Model Tree

Base model: `stabilityai/stable-diffusion-xl-base-1.0`