import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipeline(
    prompt_embeds=prompt_embeds,  # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds,  # generated from Compel
).images[0]

ControlNet

As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides.
Train a diffusion model |
Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own!
This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. |
💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! |
Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). |
!pip install diffusers[training] |
We encourage you to share your model with the community, and in order to do that, you’ll need to log in to your Hugging Face account (create one here if you don’t already have one!). You can log in from a notebook and enter your token when prompted:
>>> from huggingface_hub import notebook_login |
>>> notebook_login() |
Or log in from the terminal:
huggingface-cli login |
Since the model checkpoints are quite large, install Git-LFS to version these large files: |
!sudo apt -qq install git-lfs |
!git config --global credential.helper store |
Training configuration |
For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): |
>>> from dataclasses import dataclass |
>>> @dataclass |
... class TrainingConfig: |
... image_size = 128 # the generated image resolution |
... train_batch_size = 16 |
... eval_batch_size = 16 # how many images to sample during evaluation |
... num_epochs = 50 |
... gradient_accumulation_steps = 1 |
... learning_rate = 1e-4 |
... lr_warmup_steps = 500 |
... save_image_epochs = 10 |
... save_model_epochs = 30 |
... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision |
... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub |
... push_to_hub = True # whether to upload the saved model to the HF Hub |
... hub_private_repo = False |
... overwrite_output_dir = True # overwrite the old model when re-running the notebook |
... seed = 0 |
>>> config = TrainingConfig() |
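To get a feel for how these hyperparameters interact, here is a small back-of-the-envelope sketch. The dataset size of 1,000 images is an assumption for illustration only; substitute the actual length of your dataset:

```python
import math

# Assumed values for illustration; they mirror the TrainingConfig above.
num_train_images = 1000  # hypothetical dataset size
train_batch_size = 16
gradient_accumulation_steps = 1
num_epochs = 50
lr_warmup_steps = 500

# Each epoch iterates over the dataset once, one batch at a time.
steps_per_epoch = math.ceil(num_train_images / train_batch_size)
# With gradient accumulation, the optimizer only steps every N batches.
optimizer_steps_per_epoch = math.ceil(steps_per_epoch / gradient_accumulation_steps)
total_steps = optimizer_steps_per_epoch * num_epochs

print(steps_per_epoch)  # 63
print(total_steps)      # 3150
print(lr_warmup_steps / total_steps)  # warmup covers roughly 16% of training
```

If you shrink the dataset or raise the batch size, check that `lr_warmup_steps` still leaves most of training at the full learning rate.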
Load the dataset |
You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: |
>>> from datasets import load_dataset |
>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" |
>>> dataset = load_dataset(config.dataset_name, split="train") |
💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. |
🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image, which we can visualize:
>>> import matplotlib.pyplot as plt |
>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) |
>>> for i, image in enumerate(dataset[:4]["image"]): |
... axs[i].imshow(image) |
... axs[i].set_axis_off() |
>>> fig.show() |
The images are all different sizes though, so you’ll need to preprocess them first: |
Resize changes the image size to the one defined in config.image_size. |
RandomHorizontalFlip augments the dataset by randomly mirroring the images. |
Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. |