# CelesteImperia: SDXL QNN (Snapdragon NPU Native)
The elite tier of optimization. This repository contains NPU-Native DLC files forged specifically for the Qualcomm Hexagon NPU (Snapdragon X Elite).
## The Snapdragon Advantage
- NPU-Native: Forged using the Qualcomm AI Stack (QNN/SNPE).
- Slim King: 10.3 GB master weights compressed to 2.39 GB via Enhanced INT8 Quantization.
- Hardware-Mapped: Fixed shapes (1024x1024) ensure maximum hardware block utilization.
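Per-channel INT8 quantization keeps one scale per output channel, which preserves accuracy far better than a single tensor-wide scale. A minimal NumPy sketch of the idea (the actual QNN quantizer is more sophisticated; the function names here are illustrative, not part of any Qualcomm API):

```python
import numpy as np

def quantize_per_channel_int8(w: np.ndarray, axis: int = 0):
    """Symmetric per-channel INT8 quantization: one scale per output channel."""
    # Reduce over every axis except the channel axis to get per-channel max |w|.
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    max_abs = np.max(np.abs(w), axis=reduce_axes, keepdims=True)
    scale = max_abs / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

# Example: a small conv-style weight tensor (out_ch, in_ch, kh, kw).
w = np.random.randn(8, 4, 3, 3).astype(np.float32)
q, scale = quantize_per_channel_int8(w)
w_hat = dequantize(q, scale)
print(q.nbytes / w.nbytes)  # 0.25 -- INT8 stores weights in 1/4 the bytes of FP32
```

The roughly 4x storage ratio of INT8 over FP32 is consistent with the 10.3 GB to 2.39 GB compression quoted above (the remainder comes from scales and non-quantized layers).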
## Components
- `unet/sdxl_unet_npu.dlc`: The INT8 masterpiece.
- `tinyvae/`: DLC-native VAE for hardware-accelerated previews.
- `clip/`: NPU-optimized text encoders.
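Because the DLC is compiled with fixed shapes, inputs must be supplied as raw float32 tensors of exactly the expected geometry. A sketch of preparing one latent input for SNPE's `snpe-net-run` tool, which reads raw tensor files listed one per line in an input list (filenames here are hypothetical):

```python
import numpy as np

# The UNet DLC expects a fixed 1x4x128x128 latent -- no dynamic shapes on the NPU.
latent = np.random.randn(1, 4, 128, 128).astype(np.float32)
latent.tofile("latent_0.raw")  # raw little-endian float32, no header

# snpe-net-run consumes a text file listing one input file per line.
with open("input_list.txt", "w") as f:
    f.write("latent_0.raw\n")
```

The run itself would then look something like `snpe-net-run --container unet/sdxl_unet_npu.dlc --input_list input_list.txt` (consult the Qualcomm AI Stack documentation for the runtime-selection flags on your SDK version).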
## Specifications
- Target: Hexagon NPU
- Input Geometry: Fixed 1x4x128x128 (Latent)
- Quantization: Enhanced Per-Channel INT8
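The fixed latent geometry and the 1024x1024 output are two views of the same constraint: SDXL's VAE upsamples latents by 8x in each spatial dimension, so the 128x128 latent is exactly what a 1024x1024 image requires. A quick check:

```python
# SDXL's VAE decodes latents at an 8x spatial scale factor,
# so a fixed 1x4x128x128 latent maps to exactly 1024x1024 pixels.
latent_shape = (1, 4, 128, 128)  # (batch, channels, height, width)
vae_scale = 8
out_h = latent_shape[2] * vae_scale
out_w = latent_shape[3] * vae_scale
print(out_h, out_w)  # 1024 1024
```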
## Model tree

Base model: `stabilityai/stable-diffusion-xl-base-1.0`