---
language: en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- RyzenAI
- Quantization
- ONNX
- Computer Vision
inference: true
---
# Stable Diffusion 1.5 on AMD AI PC NPU

"Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at Hugging Face's Stable Diffusion blog." More details about this model can be found on the original Hugging Face model card: stable-diffusion-v1-5/stable-diffusion-v1-5.
This model repo contains the optimized ONNX models required to run the image generation pipeline for Stable Diffusion 1.5 on AMD NPUs.
## Model Details
The folder structure is organized to mirror the main components of the diffusion pipeline (scheduler, text encoder, tokenizer, UNet, and VAE decoder):

```
├── scheduler/
├── text_encoder/
├── tokenizer/
├── unet/
└── vae_decoder/
```
The scheduler folder contains the scheduler configuration (timesteps, betas, alphas, etc.) used during the diffusion sampling process.
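As an illustration of the timesteps, betas, and alphas mentioned above, SD 1.5 schedulers are commonly configured with a scaled-linear beta schedule over 1000 training timesteps (the authoritative values live in `scheduler/scheduler_config.json`). A minimal numpy sketch of how such a schedule is derived, not the shipped scheduler code:

```python
import numpy as np

# Scaled-linear beta schedule as commonly configured for SD 1.5:
# beta_start=0.00085, beta_end=0.012, 1000 training timesteps.
num_train_timesteps = 1000
beta_start, beta_end = 0.00085, 0.012

# "scaled linear": interpolate in sqrt space, then square.
betas = np.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2
alphas = 1.0 - betas
alphas_cumprod = np.cumprod(alphas)  # noise level decreases monotonically

print(betas.shape)  # (1000,)
```

The cumulative product `alphas_cumprod` is what the sampler uses to trade off signal and noise at each timestep.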
The text_encoder folder contains the text encoder model used to convert the input prompt into conditioning embeddings for the diffusion model.
The tokenizer contains the tokenizer configuration and vocabulary files required to preprocess the text prompt before it is fed to the text encoder.
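CLIP's tokenizer pads or truncates every prompt to a fixed length of 77 tokens before it reaches the text encoder. A toy sketch of that shaping step (the real tokenizer is BPE-based and uses the vocabulary and merges files in `tokenizer/`; the prompt token IDs below are made up, while the BOS/EOS IDs are the actual CLIP values):

```python
# Toy illustration of CLIP-style padding/truncation to max_length=77.
MAX_LEN = 77
BOS, EOS = 49406, 49407  # CLIP's <|startoftext|> / <|endoftext|> IDs

def shape_ids(token_ids):
    # Reserve two slots for BOS/EOS, truncate the rest, pad with EOS.
    ids = [BOS] + token_ids[: MAX_LEN - 2] + [EOS]
    return ids + [EOS] * (MAX_LEN - len(ids))

ids = shape_ids([320, 1125, 539])  # made-up token IDs for a short prompt
print(len(ids))  # 77
```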
The unet folder contains the UNet model used in the diffusion process. The UNet is exported and structured specifically to leverage the AMD NPU accelerator for the denoising steps.
The vae_decoder folder contains the VAE decoder model used to map latent representations back to the image space. The VAE decoder is also structured to make use of the NPU accelerator for efficient image reconstruction.
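For SD 1.5, latents are divided by the VAE scaling factor (0.18215) before decoding, and the decoder's output lives in [-1, 1] and must be mapped to 8-bit pixels. A hedged numpy sketch of that post-processing, with a random array standing in for the real decoder output:

```python
import numpy as np

SCALING_FACTOR = 0.18215  # SD 1.5 VAE scaling factor

def postprocess(decoded):
    # Map decoder output from [-1, 1] to uint8 [0, 255], NCHW -> NHWC.
    img = np.clip(decoded / 2 + 0.5, 0.0, 1.0)
    return (img * 255).round().astype(np.uint8).transpose(0, 2, 3, 1)

latents = np.random.randn(1, 4, 64, 64).astype(np.float32)
latents = latents / SCALING_FACTOR  # undo scaling before the decoder runs

# Stand-in for the VAE decoder output (a real decode of 64x64 latents
# yields a 512x512 RGB image in [-1, 1]).
decoded = np.random.uniform(-1.0, 1.0, size=(1, 3, 512, 512)).astype(np.float32)
image = postprocess(decoded)
print(image.shape, image.dtype)  # (1, 512, 512, 3) uint8
```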
Note: UNet and VAE decoder models are optimized and structured to run on AMD NPUs. The other components (text encoder, tokenizer and scheduler) are shared between GPU and NPU pipelines, but are provided here for completeness.
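Putting the components together: each denoising step runs the UNet twice (once with the unconditional embedding, once with the text-conditioned one) and blends the two noise predictions with a guidance scale. This classifier-free-guidance loop is where the NPU-accelerated UNet does its work. A structural sketch with a stand-in function in place of the real ONNX UNet session and a placeholder update in place of the scheduler step:

```python
import numpy as np

def fake_unet(latents, t, text_emb):
    # Stand-in for the ONNX UNet session: returns a deterministic
    # pseudo-random noise prediction with the same shape as the latents.
    rng = np.random.default_rng(int(t) + int(np.abs(text_emb).sum()))
    return rng.standard_normal(latents.shape).astype(np.float32)

guidance_scale = 7.5  # a common default for SD 1.5
latents = np.zeros((1, 4, 64, 64), dtype=np.float32)
uncond_emb = np.zeros((1, 77, 768), dtype=np.float32)  # text-encoder output shape
cond_emb = np.ones((1, 77, 768), dtype=np.float32)

for t in [981, 961, 941]:  # a few illustrative timesteps
    noise_uncond = fake_unet(latents, t, uncond_emb)
    noise_cond = fake_unet(latents, t, cond_emb)
    # Classifier-free guidance: push the prediction toward the text condition.
    noise = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
    latents = latents - 0.1 * noise  # placeholder update; the scheduler owns the real step

print(latents.shape)  # (1, 4, 64, 64)
```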
| Model Details | Description |
|---|---|
| Person or organization developing model | Giovanni Guasti (AMD), Benjamin Consolvo (AMD) |
| Original model authors | Robin Rombach, Patrick Esser |
| Model date | January 2026 |
| Model version | 1.7.0 |
| Model type | Diffusion-based text-to-image generation model |
| Information about training algorithms, parameters, fairness constraints or other applied approaches, and features | This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (CLIP ViT-L/14) as suggested in the Imagen paper. |
| License | CreativeML OpenRAIL-M |
| Where to send questions or comments about the model | Community Tab and AMD Developer Community Discord |
## Intended Use

### Getting Started
To get started with this model, visit github.com/amd/sd-sandbox.
## Ethical Considerations
AMD is committed to conducting our business in a fair, ethical and honest manner and in compliance with all applicable laws, rules and regulations. You can find out more at the AMD Ethics and Compliance page.
## Caveats and Recommendations
Please visit the original model card for more details: stable-diffusion-v1-5/stable-diffusion-v1-5.
## Citation Details
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```