YuCollection committed · Commit e734946 · verified · 1 Parent(s): d3202c3

Update README.md

Files changed (1): README.md +125 -3
README.md CHANGED
@@ -1,3 +1,125 @@
- ---
- license: openrail++
- ---
+ ---
+ license: openrail++
+ tags:
+ - stable-diffusion
+ - image-to-image
+ ---
+ # SD-XL 1.0-refiner Model Card
+
+ > **Note:** This repository is a **mirror** and **not** the original upstream source.
+ > The original model, weights, and documentation are developed and maintained by **Stability AI**.
+ >
+ > The model weights hosted here are **unmodified** and redistributed **as-is**.
+ > Only minor editorial changes to this README (e.g. formatting or clarification) have been made and do **not** affect the model, its behavior, or its licensing.
+ >
+ > The model is released under the **CreativeML Open RAIL++-M License**, which permits use and redistribution **subject to explicit use-based restrictions** (see *Attachment A*).
+ > A full copy of the license is included in this repository and applies to all distributions of the model and its derivatives.
+ >
+ > Users of this mirror are responsible for complying with all terms of the CreativeML Open RAIL++-M License.
+ >
+ > This repository is **not affiliated with or endorsed by Stability AI**.
+ > The maintainer is willing to cooperate in good faith with the original rights holder regarding reasonable requests.
+
+ ## Model
+
+ [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
+ In a first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents,
+ which are then further processed with a refinement model specialized for the final denoising steps.
+ Note that the base model can be used as a standalone module.
+
+ Alternatively, we can use a two-stage pipeline as follows:
+ First, the base model is used to generate latents of the desired output size.
+ In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
+ to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
+
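+ As an illustration of the first (ensemble-of-experts) variant, here is a minimal sketch of the base-to-refiner handoff in 🧨 Diffusers. It assumes a recent `diffusers` release that supports the `denoising_end`/`denoising_start` arguments; treat it as a starting point rather than a reference implementation.
+
+ ```py
+ import torch
+ from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
+
+ # The base model generates the (noisy) latents...
+ base = StableDiffusionXLPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-xl-base-1.0",
+     torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
+ ).to("cuda")
+
+ # ...and the refiner is specialized for the final denoising steps.
+ refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-xl-refiner-1.0",
+     text_encoder_2=base.text_encoder_2,  # share components to save VRAM
+     vae=base.vae,
+     torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
+ ).to("cuda")
+
+ prompt = "a photo of an astronaut riding a horse on mars"
+ # Stop the base at 80% of the noise schedule and return latents
+ # instead of a decoded image.
+ latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
+ # The refiner takes over for the remaining 20% of the schedule.
+ image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
+ ```
+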
+ Source code is available at https://github.com/Stability-AI/generative-models .
+
+ ### Model Description
+
+ - **Developed by:** Stability AI
+ - **Model type:** Diffusion-based text-to-image generative model
+ - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/LICENSE.md)
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
+ - **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
+
+ ### Model Sources
+
+ For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and to which new functionalities like distillation will be added over time.
+ [Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
+
+ - **Repository:** https://github.com/Stability-AI/generative-models
+ - **Demo:** https://clipdrop.co/stable-diffusion
+
+ ### 🧨 Diffusers
+
+ Make sure to upgrade diffusers to >= 0.18.0:
+ ```
+ pip install diffusers --upgrade
+ ```
+
+ In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark:
+ ```
+ pip install invisible_watermark transformers accelerate safetensors
+ ```
+
+ You can then use the refiner to improve images.
+
+ ```py
+ import torch
+ from diffusers import StableDiffusionXLImg2ImgPipeline
+ from diffusers.utils import load_image
+
+ # Load the refiner as an image-to-image (SDEdit) pipeline.
+ pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
+     "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+ )
+ pipe = pipe.to("cuda")
+
+ # Fetch an initial image to refine.
+ url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
+ init_image = load_image(url).convert("RGB")
+
+ prompt = "a photo of an astronaut riding a horse on mars"
+ image = pipe(prompt, image=init_image).images[0]
+ ```
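+
+ The call returns a list of `PIL.Image` objects, so the refined result can be saved directly, e.g. with `image.save("refined.png")`.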
+
+ When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
+ ```py
+ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
+ ```
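+
+ Note that the first call after wrapping the UNet triggers compilation and is therefore slow; the speedup applies to subsequent calls.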
+
+ If you are limited by GPU VRAM, you can enable *CPU offloading* by calling `pipe.enable_model_cpu_offload()`
+ instead of `.to("cuda")`:
+
+ ```diff
+ - pipe.to("cuda")
+ + pipe.enable_model_cpu_offload()
+ ```
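+
+ Model offloading keeps weights on the CPU and moves each sub-module to the GPU only while it runs, trading some inference speed for a much lower peak VRAM footprint.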
+
+ For more advanced use cases, please have a look at [the docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl).
+
+ ## Uses
+
+ ### Direct Use
+
+ The model is intended for research purposes only. Possible research areas and tasks include
+
+ - Generation of artworks and use in design and other artistic processes.
+ - Applications in educational or creative tools.
+ - Research on generative models.
+ - Safe deployment of models which have the potential to generate harmful content.
+ - Probing and understanding the limitations and biases of generative models.
+
+ Excluded uses are described below.
+
+ ### Out-of-Scope Use
+
+ The model was not trained to produce factual or true representations of people or events; using it to generate such content is therefore beyond this model's abilities.
+
+ ## Limitations and Bias
+
+ ### Limitations
+
+ - The model does not achieve perfect photorealism.
+ - The model cannot render legible text.
+ - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
+ - Faces and people in general may not be generated properly.
+ - The autoencoding part of the model is lossy.
+
+ ### Bias
+
+ While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.