expense of slower inference.
eta (float, optional, defaults to 0.0) —
Corresponds to the parameter eta (η) from the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
schedulers.DDIMScheduler and is ignored for other schedulers.
generator (torch.Generator, optional) —
One or a list of torch generator(s)
to make generation deterministic.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between
PIL: PIL.Image.Image or np.array.
return_dict (bool, optional) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns
ImagePipelineOutput or tuple
~pipelines.utils.ImagePipelineOutput if return_dict is
True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
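To make the return contract concrete, here is a minimal sketch with a stand-in output class (the names only mimic diffusers' ImagePipelineOutput; this is not the real implementation):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FakeImagePipelineOutput:
    # stand-in for diffusers' ImagePipelineOutput: holds the generated images
    images: List[str]

def fake_pipeline(return_dict: bool = True):
    # pretend these strings are PIL images
    images = ["img_0", "img_1"]
    if return_dict:
        return FakeImagePipelineOutput(images=images)
    # plain tuple: the first element is the list of generated images
    return (images,)

out = fake_pipeline()                                # structured output
images_from_tuple = fake_pipeline(return_dict=False)[0]  # tuple output
assert out.images == images_from_tuple
```

Either way, the generated images are always accessible: as `.images` on the output object, or as the first element of the tuple.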
Understanding pipelines, models and schedulers

🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems.
>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") |
>>> image = ddpm(num_inference_steps=25).images[0] |
>>> image

That was super easy, but how did the pipeline do that? Let's break down the pipeline and take a look at what's happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, write your own denoising process. Load the model and scheduler:
>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") |
>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda")

Set the number of timesteps to run the denoising process for:

>>> scheduler.set_timesteps(50)

Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you'll iterate over this tensor to denoise an image:

>>> scheduler.timesteps
tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, |
700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, |
420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, |
140, 120, 100, 80, 60, 40, 20, 0])

Create some random noise with the same shape as the desired output:

>>> import torch
>>> sample_size = model.config.sample_size |
>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda")

Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler's step() method takes the noisy residual, timestep, and input, and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it repeats until it reaches the end of the timesteps array. Initialize the loop input with the noise:

>>> input = noise
>>> for t in scheduler.timesteps: |
... with torch.no_grad(): |
... noisy_residual = model(input, t).sample |
... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample |
...     input = previous_noisy_sample

This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image:

>>> from PIL import Image
>>> import numpy as np |
>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() |
>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() |
>>> image = Image.fromarray(image) |
>>> image

In the next section, you'll put your skills to the test and break down the more complex Stable Diffusion pipeline. The steps are more or less the same: you'll initialize the necessary components and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. Start by loading the components you'll need:
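The generic denoising pattern just described can be sketched with stand-in components (plain Python, no real model or scheduler; the class and method names only mimic the diffusers API):

```python
class ToyModel:
    """Stand-in for a UNet: 'predicts' a residual from the sample and timestep."""
    def __call__(self, sample, t):
        return sample * 0.1  # pretend residual: 10% of the current sample

class ToyScheduler:
    """Stand-in scheduler: subtracts the residual to produce the previous sample."""
    def __init__(self, num_steps):
        # timesteps run from high to low, just like the real schedulers
        self.timesteps = list(range(num_steps - 1, -1, -1))

    def step(self, residual, t, sample):
        return sample - residual  # a 'less noisy' sample

model = ToyModel()
scheduler = ToyScheduler(num_steps=5)

sample = 1.0  # stand-in for the initial random noise
for t in scheduler.timesteps:
    residual = model(sample, t)
    sample = scheduler.step(residual, t, sample)

# removing 10% per step for 5 steps leaves 1.0 * 0.9**5
print(round(sample, 6))  # -> 0.59049
```

The shape of the loop, model call, then scheduler step feeding the next iteration, is exactly what the real pipelines do; only the components are swapped for trained models and proper noise schedules.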
>>> import torch |
>>> from transformers import CLIPTextModel, CLIPTokenizer |
>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler |
>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) |
>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") |
>>> text_encoder = CLIPTextModel.from_pretrained( |
... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True |
... ) |
>>> unet = UNet2DConditionModel.from_pretrained( |
... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True |
... )

Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug in a different scheduler:

>>> from diffusers import UniPCMultistepScheduler
>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")

To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights:

>>> torch_device = "cuda"
>>> vae.to(torch_device) |
>>> text_encoder.to(torch_device) |
>>> unet.to(torch_device)

Create text embeddings

The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Start by defining a prompt and some generation settings:

>>> prompt = ["a photograph of an astronaut riding a horse"]
>>> height = 512 # default height of Stable Diffusion |
>>> width = 512 # default width of Stable Diffusion |
>>> num_inference_steps = 25 # Number of denoising steps |
>>> guidance_scale = 7.5 # Scale for classifier-free guidance |
>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise |
>>> batch_size = len(prompt)

Tokenize the text and generate the embeddings from the prompt:

>>> text_input = tokenizer(
... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" |
... ) |
>>> with torch.no_grad(): |
...     text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]

You'll also need to generate the unconditional text embeddings, which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings:

>>> max_length = text_input.input_ids.shape[-1]
>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") |
>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]

Let's concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes:

>>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings])

Create random noise

Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it'll be gradually denoised. The height and width are divided by 8 because the vae model has 3 down-sampling layers:

>>> latents = torch.randn(
... (batch_size, unet.config.in_channels, height // 8, width // 8), |
... generator=generator, |
... device=torch_device, |
... )

Denoise the image

Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler:

>>> latents = latents * scheduler.init_noise_sigma

The last step is to create the denoising loop that'll progressively transform the pure noise in latents into an image described by your prompt. The loop needs to set the scheduler's timesteps, iterate over them, and at each timestep call the UNet to predict the noise residual and pass it to the scheduler to compute the previous noisy sample:

>>> from tqdm.auto import tqdm
>>> scheduler.set_timesteps(num_inference_steps) |
>>> for t in tqdm(scheduler.timesteps): |
...     # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
...     latent_model_input = torch.cat([latents] * 2)
...     latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)
...     # predict the noise residual
...     with torch.no_grad():
...         noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
...     # perform classifier-free guidance
...     noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
...     noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
...     # compute the previous noisy sample x_t -> x_t-1
...     latents = scheduler.step(noise_pred, t, latents).prev_sample
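To make the classifier-free guidance weighting concrete, here it is on plain numbers instead of tensors (a toy sketch; in the real loop the same formula is applied element-wise to the two chunks of the noise prediction):

```python
guidance_scale = 7.5

# pretend scalar noise predictions from the unconditional
# and text-conditioned halves of the batch
noise_pred_uncond = 0.2
noise_pred_text = 0.6

# guided prediction: start from the unconditional prediction and push it
# toward the text-conditioned one, amplified by guidance_scale
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

print(round(noise_pred, 6))  # 0.2 + 7.5 * 0.4 -> 3.2
```

A guidance_scale of 1.0 would reproduce the text-conditioned prediction unchanged, while larger values exaggerate the direction the prompt pulls in, which is why higher scales follow the prompt more closely.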