DDPM

Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain, and Pieter Abbeel proposes a diffusion-based model of the same name. In the 🤗 Diffusers library, DDPM refers both to the discrete denoising scheduler from the paper and to the pipeline. The abstract from the paper is: We present high quality...
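As a brief sketch of the idea behind the paper: the forward process gradually adds Gaussian noise to an image according to a fixed variance schedule, and a noised sample x_t can be drawn in closed form directly from x_0. The snippet below illustrates this with NumPy; the linear beta schedule mirrors the one from the paper, but the toy "image" and timestep are purely illustrative:

```python
import numpy as np

# Linear beta schedule as in the DDPM paper (values illustrative).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product, abar_t

def q_sample(x0, t, noise):
    """Closed-form sample x_t ~ q(x_t | x_0):
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 8, 8))  # a toy "image"
x_t = q_sample(x0, t=999, noise=rng.standard_normal(x0.shape))
# Near t = T, alpha_bars[t] is close to 0, so x_t is almost pure noise.
```

The reverse (denoising) process then trains a network to invert these steps one at a time, which is what the scheduler and pipeline below implement.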
DDPMPipeline

Parameters

unet (UNet2DModel) —
A UNet2DModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image. Can be one of DDPMScheduler or DDIMScheduler.

Pipeline for image generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ ( batch_size: int = 1, generator: Union = None, num_inference_steps: int = 1000, output_type: Optional = 'pil', return_dict: bool = True ) → ImagePipelineOutput or tuple

Parameters

batch_size (int, optional,...
The number of images to generate.
generator (torch.Generator, optional) —
A torch.Generator to make generation deterministic.
num_inference_steps (int, optional, defaults to 1000) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns
ImagePipelineOutput or tuple

If return_dict is True, ImagePipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.

Example:

>>> from diffusers import DDPMPipeline
>>> # load model and scheduler
>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
>>> # run pipeline in inference (sample random noise and denoise)
>>> image = pipe().images[0]
>>> # save image
>>> image.save("ddpm_generated_image.png")

ImagePipelineOutput

class diffusers.ImagePipelineOutput ( images: Union )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).

Output class for image pipelines.
🧨 Diffusers Training Examples
Diffusers training examples are a collection of scripts to demonstrate how to effectively use the diffusers library
for a variety of use cases.
Note: If you are looking for official examples on how to use diffusers for inference,
please have a look at src/diffusers/pipelines
Our examples aspire to be self-contained, easy to tweak, beginner-friendly, and one-purpose-only.
More specifically, this means:
Self-contained: An example script shall only depend on β€œpip-install-able” Python packages that can be found in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. req...
Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won’t work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the e...
Beginner-friendly: We do not aim to provide state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them...
One-purpose-only: Examples should show one task and one task only. Even if two tasks are very similar from a modeling point of view (image super-resolution and image modification, for example, tend to use the same model and training method), we want examples to showcase only one task to keep them as readable and easy to understand as possible.
We provide official examples that cover the most popular tasks of diffusion models.
Official examples are actively maintained by the diffusers maintainers and we try to rigorously follow our example philosophy as defined above.
If you feel like another important example should exist, we are more than happy to welcome a Feature Request or directly a Pull Request from you!
Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support:
Unconditional Training
Text-to-Image Training
Textual Inversion
Dreambooth
LoRA Support
If possible, please install xFormers for memory efficient attention. This could help make your training faster and less memory intensive.
| Task | 🤗 Accelerate | 🤗 Datasets | Colab |
|---|---|---|---|
| Unconditional Image Generation | ✅ | ✅ | |
| Text-to-Image fine-tuning | ✅ | ✅ | |
| Textual Inversion | ✅ | - | |
| Dreambooth | ✅ | - | |
Community
In addition, we provide community examples, which are examples added and maintained by our community.
Community examples can consist of both training examples and inference pipelines.
For such examples, we are more lenient regarding the philosophy defined above, and we cannot guarantee maintenance for every issue.
Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the community examples folder. The community folder therefore includes training examples and inference pipelines.
Note: Community examples can be a great first contribution to show to the community how you like to use diffusers πŸͺ„.
Important note
To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Then cd into the example folder of your choice and run
pip install -r requirements.txt
Distilled Stable Diffusion inference

Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled versio...
import torch
from diffusers import StableDiffusionPipeline

# Load the distilled checkpoint in half precision and move it to the GPU
distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")