tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np"
(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples:
>>> import torch
>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline
>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")
>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to(
...     "cuda"
... )
>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt)
Citation
@misc{pernias2023wuerstchen,
title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models},
author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville},
year={2023},
eprint={2306.00637},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
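The callback_on_step_end contract described in the parameters above can be sketched with a toy denoising loop. Everything here (the ToyPipeline class, the fake latents update, the step count) is an illustrative stand-in, not the real Wuerstchen internals; only the calling convention (self, step, timestep, callback_kwargs) and the tensor-filtering via callback_on_step_end_tensor_inputs mirror the documented interface.

```python
from typing import Callable, Dict, List, Optional


class ToyPipeline:
    """Minimal stand-in for a DiffusionPipeline, used only to illustrate
    the callback_on_step_end calling convention."""

    _callback_tensor_inputs: List[str] = ["latents"]

    def __call__(
        self,
        num_inference_steps: int = 3,
        callback_on_step_end: Optional[Callable] = None,
        callback_on_step_end_tensor_inputs: Optional[List[str]] = None,
    ):
        if callback_on_step_end_tensor_inputs is None:
            callback_on_step_end_tensor_inputs = self._callback_tensor_inputs
        latents = [0.0]  # placeholder for the latent tensor
        for step, timestep in enumerate(range(num_inference_steps, 0, -1)):
            latents = [x + 1.0 for x in latents]  # stand-in for one denoising update
            if callback_on_step_end is not None:
                # Only tensors named in callback_on_step_end_tensor_inputs
                # are exposed to the callback
                tensors = {"latents": latents}
                callback_kwargs = {k: tensors[k] for k in callback_on_step_end_tensor_inputs}
                out = callback_on_step_end(self, step, timestep, callback_kwargs)
                # The callback may return modified tensors for the next step
                latents = out.pop("latents", latents)
        return latents


def log_steps(pipe, step: int, timestep: int, callback_kwargs: Dict):
    # Inspect (or modify) the exposed tensors at the end of each step
    print(f"step={step} timestep={timestep} latents={callback_kwargs['latents']}")
    return callback_kwargs


result = ToyPipeline()(num_inference_steps=3, callback_on_step_end=log_steps)
```

A real callback follows the same shape: receive the pipeline, step index, timestep, and the requested tensors, then return the (possibly edited) tensor dict so the pipeline can continue denoising with it.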
🧨 Diffusers
🤗 Diffusers provides pretrained vision and audio diffusion models, and serves as a modular toolbox for inference and training.
More precisely, 🤗 Diffusers offers:
State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see Using Diffusers) or have a look at Pipelines to get an overview of all supported pipelines and their corresponding papers.
Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference. For more information see Schedulers.
Multiple types of models, such as UNet, that can be used as building blocks in an end-to-end diffusion system. See Models for more details.
Training examples to show how to train the most popular diffusion model tasks. For more information see Training.
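The interchangeability of schedulers mentioned above comes from all schedulers sharing a common stepping interface, so a pipeline can be run with any of them. The sketch below illustrates that design with two toy scheduler classes; the class names, the step() signature, and the arithmetic are made up for illustration and are not the actual diffusers scheduler API.

```python
# Toy illustration of interchangeable schedulers sharing one step() interface.
# These classes are illustrative stand-ins, not real diffusers schedulers.


class FastScheduler:
    """Takes large denoising steps: faster, coarser."""

    def step(self, sample: float) -> float:
        return sample * 0.5  # aggressively remove "noise"


class FineScheduler:
    """Takes small denoising steps: slower, finer."""

    def step(self, sample: float) -> float:
        return sample * 0.9  # remove "noise" gradually


def run_pipeline(scheduler, num_steps: int = 4, noisy_sample: float = 1.0) -> float:
    # The loop only relies on the shared step() interface,
    # so either scheduler can be swapped in without other changes.
    sample = noisy_sample
    for _ in range(num_steps):
        sample = scheduler.step(sample)
    return sample


fast_result = run_pipeline(FastScheduler())
fine_result = run_pipeline(FineScheduler())
```

In diffusers itself the same idea applies: replacing a pipeline's scheduler changes the speed/quality trade-off of inference without touching the rest of the pipeline code.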
🧨 Diffusers Pipelines
The following table summarizes all officially supported pipelines, their corresponding paper, and,
if available, a Colab notebook to try them out directly.
| Pipeline | Paper | Tasks | Colab |
|---|---|---|---|
| alt_diffusion | AltDiffusion | Image-to-Image Text-Guided Generation | |
| audio_diffusion | Audio Diffusion | Unconditional Audio Generation | |
| controlnet | ControlNet with Stable Diffusion | Image-to-Image Text-Guided Generation | |
| cycle_diffusion | Cycle Diffusion | Image-to-Image Text-Guided Generation | |
| dance_diffusion | Dance Diffusion | Unconditional Audio Generation | |
| ddpm | Denoising Diffusion Probabilistic Models | Unconditional Image Generation | |
| ddim | Denoising Diffusion Implicit Models | Unconditional Image Generation | |
| latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Text-to-Image Generation | |
| latent_diffusion | High-Resolution Image Synthesis with Latent Diffusion Models | Super Resolution Image-to-Image | |
| latent_diffusion_uncond | High-Resolution Image Synthesis with Latent Diffusion Models | Unconditional Image Generation | |
| paint_by_example | Paint by Example: Exemplar-based Image Editing with Diffusion Models | Image-Guided Image Inpainting | |
| pndm | Pseudo Numerical Methods for Diffusion Models on Manifolds | Unconditional Image Generation | |
| score_sde_ve | Score-Based Generative Modeling through Stochastic Differential Equations | Unconditional Image Generation | |
| score_sde_vp | Score-Based Generative Modeling through Stochastic Differential Equations | | |