Quality Improvements
Now that our image generation pipeline is blazing fast, let’s try to get maximum image quality.
First of all, image quality is extremely subjective, so it’s difficult to make general claims here.
The most obvious step to take to improve quality is to use better checkpoints. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here:
Official Release - 22 Aug 2022: Stable-Diffusion 1.4
20 October 2022: Stable-Diffusion 1.5
24 Nov 2022: Stable-Diffusion 2.0
7 Dec 2022: Stable-Diffusion 2.1
Newer versions don’t necessarily mean better image quality with the same parameters. People have mentioned that 2.0 is slightly worse than 1.5 for certain prompts, but given the right prompt engineering, 2.0 and 2.1 seem to be better.
Overall, we strongly recommend just trying the models out and reading up on advice online (e.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality; see for example this nice blog post).
Additionally, the community has started fine-tuning many of the above versions on certain styles, some of which are of extremely high quality and have gained a lot of traction.
We recommend having a look at all diffusers checkpoints sorted by downloads and trying out the different checkpoints.
For the following, we will stick to v1.5 for simplicity.
Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at this blog post.
Let’s load stabilityai’s fine-tuned autoencoder (VAE).
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")
Now we can assign it to the pipeline’s vae attribute to use it.
pipe.vae = vae
Let’s run the same prompt as before to compare quality.
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
Seems like the difference is only very minor, but the new generations are arguably a bit sharper.
Cool, finally, let’s look a bit into prompt engineering.
Our goal was to generate a photo of an old warrior chief. Let’s now try to bring a bit more color into the photos and make the look more impressive.
Originally our prompt was "portrait photo of an old warrior chief".
To improve the prompt, it often helps to add cues that might have accompanied high-quality photos online, as well as more details.
Essentially, when doing prompt engineering, one has to think:
How would the photo I want, or similar photos, likely have been described and stored on the internet?
What additional detail can I give that steers the models into the style that I want?
Cool, let’s add more details.
prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"
and let’s also add some cues that usually help to generate higher quality images.
prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta"
prompt
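Pieced together from the base prompt above and the two additions, the full prompt works out to the following (a quick standalone sketch; `prompt` is rebuilt from scratch here so the final string is easy to inspect):

```python
# Rebuild the full prompt from its three parts: the base prompt plus the
# two additions shown above.
prompt = "portrait photo of an old warrior chief"
prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes"
prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta"
print(prompt)
```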
Cool, let’s now try this prompt.
images = pipe(**get_inputs(batch_size=8)).images
image_grid(images, rows=2, cols=4)
Pretty impressive! We got some very high-quality image generations there. The 2nd image is my personal favorite, so I’ll re-use this seed and see whether I can tweak the prompts slightly by using "oldest warrior", "old", "", and "young" instead of "old".
prompts = [
    "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
    "portrait photo of an old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
    "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
    "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
]
generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image
images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images
image_grid(images)
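Since the four prompt variants differ only in the age phrase, they can also be generated from a single template (a small sketch):

```python
# Build the four prompt variants from one template; only the age phrase
# changes between them.
template = (
    "portrait photo of {} warrior chief, tribal panther make up, blue on red, "
    "side profile, looking away, serious eyes 50mm portrait photography, "
    "hard rim lighting photography--beta --ar 2:3 --beta --upbeta"
)
ages = ["the oldest", "an old", "a", "a young"]
prompts = [template.format(age) for age in ages]
```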
The first picture looks nice! The eye position changed slightly, and the result still looks great. This finishes up our 101 guide on how to use Stable Diffusion 🤗.
For more information on optimization or other guides, I recommend taking a look at the following:
Blog post about Stable Diffusion: In-detail blog post explaining Stable Diffusion.
FlashAttention: xFormers flash attention can optimize your model even further with more speed and memory improvements.
Dreambooth - Quickly customize the model by fine-tuning it.
General info on Stable Diffusion - Info on other tasks that are powered by Stable Diffusion.
🧨 Diffusers’ Ethical Guidelines
Preamble
Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real-world applications and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide...
We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback.
Scope
The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concern...
Schedulers
Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent of each other. This means that one is able to switch out parts of the pipeline to better customize a pipeline to one’s use case. The best example of this is the schedulers.
Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, schedulers define the whole denoising process, i.e.:
How many denoising steps?
Stochastic or deterministic?
What algorithm to use to find the denoised sample?
They can be quite complex and often define a trade-off between denoising speed and denoising quality.
It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library.
Load pipeline
Let’s start by loading the runwayml/stable-diffusion-v1-5 checkpoint:
import torch
from diffusers import DiffusionPipeline
from huggingface_hub import login

login()

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)

Next, we move it to GPU:

pipeline.to("cuda")

Access the scheduler
The scheduler is always one of the components of the pipeline and is usually called "scheduler".
So it can be accessed via the "scheduler" property:

pipeline.scheduler

Output:

PNDMScheduler {
"_class_name": "PNDMScheduler",
"_diffusers_version": "0.21.4",