Upload README.md with huggingface_hub
README.md
CHANGED
# Stable-Diffusion-v2.1: Optimized for Mobile Deployment

## State-of-the-art generative AI model used to generate detailed images conditioned on text descriptions

Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image.
This model is an implementation of Stable-Diffusion-v2.1 found [here](https://github.com/CompVis/stable-diffusion/tree/main).
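The text-encoder → U-Net → VAE pipeline described above can be sketched in miniature. The functions below are hypothetical stand-ins for the real networks (they are not the actual CLIP, U-Net, or VAE weights); shapes follow common Stable Diffusion values, assuming 77×768 CLIP text embeddings and a 4-channel latent at 1/8 spatial resolution:

```python
import numpy as np

# Illustrative sketch of latent diffusion inference. The three components
# below are dummy stand-ins, NOT the real CLIP / U-Net / VAE networks.

LATENT_SHAPE = (4, 64, 64)  # 4-channel latent at 1/8 the output resolution

def encode_text(prompt):
    # Stand-in for the CLIP ViT-L/14 text encoder: 77 tokens x 768 dims.
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal((77, 768))

def denoise_step(latent, text_emb, t):
    # Stand-in for the U-Net. A real U-Net predicts the noise conditioned
    # on both the text embedding and the timestep t; this dummy ignores them.
    predicted_noise = 0.1 * latent
    return latent - predicted_noise

def decode(latent):
    # Stand-in for the VAE decoder: upsamples the latent 8x to RGB pixels.
    c, h, w = latent.shape
    return np.zeros((3, h * 8, w * 8))

def generate(prompt, steps=20):
    # Full loop: encode the prompt, iteratively denoise a random latent,
    # then decode the final latent into an image.
    text_emb = encode_text(prompt)
    latent = np.random.default_rng(0).standard_normal(LATENT_SHAPE)
    for t in reversed(range(steps)):
        latent = denoise_step(latent, text_emb, t)
    return decode(latent)

image = generate("a photo of an astronaut riding a horse")
print(image.shape)  # (3, 512, 512)
```

The 8x spatial upsampling in the decoder is why diffusion runs in a compact latent space rather than on full-resolution pixels, which is what makes mobile deployment of this pipeline feasible.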
This repository provides scripts to run Stable-Diffusion-v2.1 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/stable_diffusion_v2_1_quantized).