If specified, weights are saved in the format pytorch_model.<variant>.bin. Upload model, scheduler, or pipeline files to the πŸ€— Hugging Face Hub. Example:

from diffusers import UNet2DConditionModel
unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")
# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")
# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
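The variant naming convention mentioned above can be illustrated with a small standalone helper (this is illustrative only and not diffusers' internal code; the function name is invented for clarity):

```python
# Illustrative helper (not a diffusers API): shows how a weights filename
# picks up an optional variant suffix, e.g. pytorch_model.fp16.bin.
def weights_filename(variant=None, base="pytorch_model", ext="bin"):
    if variant:
        return f"{base}.{variant}.{ext}"
    return f"{base}.{ext}"

print(weights_filename())        # pytorch_model.bin
print(weights_filename("fp16"))  # pytorch_model.fp16.bin
```

Passing `variant="fp16"` to `save_pretrained` would then produce files following this pattern alongside the model's config.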
🧨 Diffusers Training Examples
Diffusers training examples are a collection of scripts to demonstrate how to effectively use the diffusers library
for a variety of use cases.
Note: If you are looking for official examples of how to use diffusers for inference,
please have a look at src/diffusers/pipelines.
Our examples aspire to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only.
More specifically, this means:
Self-contained: An example script shall only depend on β€œpip-install-able” Python packages that can be found in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. req...
Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won't work out-of-the-box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the e...
Beginner-friendly: We do not aim to provide state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them...
One-purpose-only: Examples should show one task and one task only. Even when two tasks are very similar from a modeling point of view (e.g. image super-resolution and image modification tend to use the same model and training method), we want each example to showcase only one task to keep it as readable and easy to understand as possible.
We provide official examples that cover the most popular tasks of diffusion models.
Official examples are actively maintained by the diffusers maintainers and we try to rigorously follow our example philosophy as defined above.
If you feel like another important example should exist, we are more than happy to welcome a Feature Request or directly a Pull Request from you!
Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support:
Unconditional Training
Text-to-Image Training
Textual Inversion
Dreambooth
LoRA Support
ControlNet
InstructPix2Pix
Custom Diffusion
If possible, please install xFormers for memory-efficient attention. This can make your training faster and less memory-intensive.
| Task | πŸ€— Accelerate | πŸ€— Datasets | Colab |
|---|---|---|---|
| Unconditional Image Generation | βœ… | βœ… | |
| Text-to-Image fine-tuning | βœ… | βœ… | |
| Textual Inversion | βœ… | - | |
| Dreambooth | βœ… | - | |
| Training with LoRA | βœ… | - | - |
| ControlNet | βœ… | βœ… | - |
| InstructPix2Pix | βœ… | βœ… | - |
| Custom Diffusion | βœ… | βœ… | - |
Community
In addition, we provide community examples, which are examples added and maintained by our community.
Community examples can consist of both training examples or inference pipelines.
For such examples, we are more lenient regarding the philosophy defined above and also cannot guarantee to provide maintenance for every issue.
Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the community examples folder. The community folder therefore includes training examples and inference pipelines.
Note: Community examples can be a great first contribution to show to the community how you like to use diffusers πŸͺ„.
Important note
To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Then cd into the example folder of your choice and run
pip install -r requirements.txt
Euler Ancestral scheduler
Overview
Ancestral sampling with Euler method steps. Based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by Katherine Crowson.
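The core of an ancestral Euler step can be sketched in a few lines of plain Python. This is a simplified illustration of the sampling math used in k-diffusion, not the diffusers scheduler API; the function and variable names are invented for clarity, and the sample is treated as a scalar rather than a tensor:

```python
import math

def ancestral_step_sigmas(sigma_from, sigma_to):
    """Split the noise-level transition sigma_from -> sigma_to into a
    deterministic part (sigma_down) and a fresh-noise part (sigma_up),
    so that sigma_down**2 + sigma_up**2 == sigma_to**2."""
    sigma_up = min(
        sigma_to,
        math.sqrt(sigma_to**2 * (sigma_from**2 - sigma_to**2) / sigma_from**2),
    )
    sigma_down = math.sqrt(sigma_to**2 - sigma_up**2)
    return sigma_down, sigma_up

def euler_ancestral_step(x, denoised, sigma_from, sigma_to, noise):
    """One Euler ancestral update for a scalar sample x, given the model's
    denoised prediction and a standard-normal noise draw."""
    sigma_down, sigma_up = ancestral_step_sigmas(sigma_from, sigma_to)
    d = (x - denoised) / sigma_from        # derivative estimate (ODE slope)
    x = x + d * (sigma_down - sigma_from)  # deterministic Euler step
    return x + noise * sigma_up            # ancestral noise injection
```

The key property is the variance split: the sample is stepped deterministically only down to `sigma_down`, and the remaining variance `sigma_up**2` is re-injected as fresh Gaussian noise, which is what distinguishes ancestral sampling from a plain Euler ODE solver.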