Base class for the output of a scheduler’s step function.
prev_sample: the computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.
Re-using seeds for fast prompt engineering |
A common use case when generating images is to generate a batch of images, select one, and improve it with a better, more detailed prompt in a second run.
To do this, each generated image of the batch needs to be deterministic.
Images are generated by denoising Gaussian random noise, which can be seeded by passing a torch generator.
Now, for batched generation, we need to make sure that every single generated image in the batch is tied to exactly one seed. In 🧨 Diffusers, this can be achieved by passing not one generator, but a list of generators to the pipeline.
Let’s go through an example using runwayml/stable-diffusion-v1-5. |
We want to generate several versions of the prompt: |
prompt = "Labrador in the style of Vermeer" |
Let’s load the pipeline:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
Now, let’s define 4 different generators, one per image, so that each image can be reproduced later. We’ll use seeds 0 to 3 to create our generators.
>>> import torch |
>>> generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] |
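The reason this works: a torch.Generator seeded with manual_seed always yields the same noise tensor, so the pipeline’s initial latents are fully reproducible. A minimal sketch on CPU (the tensor shape below is arbitrary, chosen only for illustration):

```python
import torch

# Two generators seeded identically produce the same "initial latent"
# noise; a different seed produces different noise.
def sample_noise(seed, shape=(4, 64, 64)):
    generator = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(shape, generator=generator)

noise_a = sample_noise(0)
noise_b = sample_noise(0)   # same seed -> identical tensor
noise_c = sample_noise(1)   # different seed -> different tensor

assert torch.equal(noise_a, noise_b)
assert not torch.equal(noise_a, noise_c)
```

This is exactly why a list of per-image generators makes each image in the batch individually reproducible.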
Let’s generate 4 images: |
>>> images = pipe(prompt, generator=generator, num_images_per_prompt=4).images |
>>> images |
Ok, the last image has some double eyes, but the first image looks good!
Let’s try to make the prompt a bit better while keeping the first seed, so that the new images stay similar to the first image.
prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]]
generator = [torch.Generator(device="cuda").manual_seed(0) for _ in range(4)]
We create 4 generators with seed 0, which is the first seed we used before. |
Let’s run the pipeline again. |
>>> images = pipe(prompt, generator=generator).images |
>>> images |
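To compare the batch side by side, a small tiling helper is handy. The helper below is hypothetical (not part of the diffusers API) and uses PIL, which the pipeline’s output images already are:

```python
from PIL import Image

# Hypothetical helper: tile a list of equally sized PIL images into a grid.
def image_grid(images, cols):
    w, h = images[0].size
    rows = (len(images) + cols - 1) // cols
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(images):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# Usage with dummy images standing in for pipeline output:
dummies = [Image.new("RGB", (64, 64), color=(i * 60, 0, 0)) for i in range(4)]
grid = image_grid(dummies, cols=4)
assert grid.size == (256, 64)
```

In practice you would call it as `image_grid(images, cols=4)` on the pipeline output.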
Variance preserving stochastic differential equation (VP-SDE) scheduler |
Overview |
The original paper can be found at https://arxiv.org/abs/2011.13456.
Score SDE-VP is under construction. |
ScoreSdeVpScheduler |
class diffusers.schedulers.ScoreSdeVpScheduler( num_train_timesteps = 2000, beta_min = 0.1, beta_max = 20, sampling_eps = 0.001 )
The variance preserving stochastic differential equation (SDE) scheduler. |
ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__
function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the save_pretrained() and
from_pretrained() functions.
For more information, see the original paper: https://arxiv.org/abs/2011.13456 |
UNDER CONSTRUCTION |
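While the scheduler implementation is under construction, the VP-SDE itself is fully determined by the linear beta schedule, beta(t) = beta_min + t * (beta_max − beta_min). A pure-Python sketch of the marginal statistics of the forward process (the function and variable names are mine, not the diffusers API):

```python
import math

BETA_MIN, BETA_MAX = 0.1, 20.0  # defaults from the signature above

def marginal(t):
    """Mean coefficient and std of the perturbation kernel p_t(x | x_0)
    for the VP-SDE with a linear beta schedule."""
    log_mean_coeff = -0.25 * t**2 * (BETA_MAX - BETA_MIN) - 0.5 * t * BETA_MIN
    mean_coeff = math.exp(log_mean_coeff)
    std = math.sqrt(1.0 - math.exp(2.0 * log_mean_coeff))
    return mean_coeff, std

# "Variance preserving": mean_coeff**2 + std**2 == 1 at every t.
m1, s1 = marginal(1.0)    # t = 1: almost pure noise
m0, s0 = marginal(0.001)  # t = sampling_eps: almost the data
assert abs(m1**2 + s1**2 - 1.0) < 1e-12
assert s1 > 0.999 and s0 < 0.05
```

The assertion at the end shows where the name "variance preserving" comes from: the total variance of the perturbed sample stays 1 for all t.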
Accelerate inference of text-to-image diffusion models
Diffusion models are known to be slower than their counterparts, GANs, because of the iterative and sequential reverse diffusion process. Recent works try to address this limitation with:
progressive timestep distillation (such as LCM LoRA)
model compression (such as SSD-1B)
reusing adjacent features of the denoiser (such as DeepCache)
In this tutorial, we focus instead on leveraging the power of PyTorch 2 to accelerate the inference latency of a text-to-image diffusion pipeline. We will use Stable Diffusion XL (SDXL) as a case study, but the techniques we discuss should extend to other text-to-image diffusion pipelines.
Setup
Make sure you’re on the latest version of diffusers:
pip install -U diffusers
Then upgrade the other required libraries too:
pip install -U transformers accelerate peft
To benefit from the fastest kernels, use PyTorch nightly. You can find the installation instructions here.
To report the numbers shown below, we used an 80GB 400W A100 with its clock rate set to the maximum. This tutorial doesn’t present the benchmarking code and focuses instead on how to perform the optimizations. For the full benchmarking code, refer to: https://github.com/huggingface/diffusion-fast.
Baseline
Let’s start with a baseline. Disable the use of reduced precision and scaled_dot_product_attention:
from diffusers import StableDiffusionXLPipeline
# Load the pipeline in full-precision and place its model components on CUDA. |
pipe = StableDiffusionXLPipeline.from_pretrained( |
"stabilityai/stable-diffusion-xl-base-1.0" |
).to("cuda") |
# Use the default (unfused) attention processors instead of scaled_dot_product_attention.
pipe.unet.set_default_attn_processor() |
pipe.vae.set_default_attn_processor() |
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" |
image = pipe(prompt, num_inference_steps=30).images[0]
This takes 7.36 seconds.
Running inference in bfloat16
Enable the first optimization: use reduced precision to run the inference.
from diffusers import StableDiffusionXLPipeline
import torch
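The baseline above deliberately routes attention through the default processors instead of PyTorch 2’s fused torch.nn.functional.scaled_dot_product_attention. As a sketch of what that fused op computes, here it is checked against the unfused reference math on tiny tensors (the shapes are hypothetical, unrelated to SDXL’s real dimensions):

```python
import torch
import torch.nn.functional as F

# scaled_dot_product_attention fuses softmax(QK^T / sqrt(d)) @ V into one
# kernel; set_default_attn_processor() falls back to the unfused math below.
q = torch.randn(1, 8, 16, 64)  # (batch, heads, seq, head_dim)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

fused = F.scaled_dot_product_attention(q, k, v)

# Reference, unfused computation
scores = q @ k.transpose(-2, -1) / (64 ** 0.5)
manual = scores.softmax(dim=-1) @ v

assert torch.allclose(fused, manual, atol=1e-4)
```

The fused kernel gives the same result while avoiding the materialization of the full attention-score matrix, which is where most of the speedup comes from.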