Output class for the scheduler's step function output.
prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.
pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — The predicted denoised sample (x_{0}) based on the model output from the current timestep. pred_original_sample can be used to preview progress or for guidance.
DreamBooth |
DreamBooth is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject. It allows the model to generate contextualized images of the subject in different scenes, poses, and views. |
DreamBooth examples from the project's blog.
This guide will show you how to use DreamBooth to finetune the CompVis/stable-diffusion-v1-4 model for various GPU sizes, and with Flax. All the training scripts for DreamBooth used in this guide can be found here if you're interested in digging deeper into how they work.
Before running the scripts, make sure you install the library’s training dependencies. We also recommend installing 🧨 Diffusers from the main GitHub branch: |
pip install git+https://github.com/huggingface/diffusers |
pip install -U -r diffusers/examples/dreambooth/requirements.txt |
xFormers is not part of the training requirements, but we recommend you install it if you can because it could make your training faster and less memory intensive. |
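For example, xFormers can be installed with pip and then enabled when launching training. The --enable_xformers_memory_efficient_attention flag below is a sketch of how the Diffusers example training scripts expose it; exact flag support may vary by script version:

```shell
# Install xFormers (optional; can speed up attention and reduce memory use).
pip install xformers

# Then enable memory-efficient attention when launching training, e.g.:
# accelerate launch train_dreambooth.py --enable_xformers_memory_efficient_attention ...
```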
After all the dependencies have been set up, initialize a 🤗 Accelerate environment with: |
accelerate config |
To set up a default 🤗 Accelerate environment without choosing any configurations:
accelerate config default |
Or if your environment doesn’t support an interactive shell like a notebook, you can use: |
from accelerate.utils import write_basic_config |
write_basic_config() |
Finally, download a few images of a dog to DreamBooth with: |
from huggingface_hub import snapshot_download |
local_dir = "./dog" |
snapshot_download( |
"diffusers/dog-example", |
local_dir=local_dir, |
repo_type="dataset", |
ignore_patterns=".gitattributes", |
) |
Finetuning |
DreamBooth finetuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our in-depth analysis with recommended settings for different subjects to help you choose the appropriate hyperparameters. |
PyTorch
Set the INSTANCE_DIR environment variable to the path of the directory containing the dog images. |
Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the pretrained_model_name_or_path argument.
export MODEL_NAME="CompVis/stable-diffusion-v1-4" |
export INSTANCE_DIR="./dog" |
export OUTPUT_DIR="path_to_saved_model" |
Then you can launch the training script (the full script is available here) with the following command:
accelerate launch train_dreambooth.py \ |
--pretrained_model_name_or_path=$MODEL_NAME \ |
--instance_data_dir=$INSTANCE_DIR \ |
--output_dir=$OUTPUT_DIR \ |
--instance_prompt="a photo of sks dog" \ |
--resolution=512 \ |
--train_batch_size=1 \ |
--gradient_accumulation_steps=1 \ |
--learning_rate=5e-6 \ |
--lr_scheduler="constant" \ |
--lr_warmup_steps=0 \ |
--max_train_steps=400 |
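Once training finishes, the pipeline saved to OUTPUT_DIR can be loaded for inference. A minimal sketch, assuming a CUDA GPU is available and the weights were saved to path_to_saved_model as in the command above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-finetuned pipeline from the training output directory.
pipe = StableDiffusionPipeline.from_pretrained(
    "path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

# The rare identifier from the instance prompt ("sks") now refers to your subject.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("dog-bucket.png")
```

Because the instance prompt bound the subject to the "sks" token, prompts that reuse that token place the subject in new scenes and poses.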
JAX |
If you have access to TPUs or want to train even faster, you can try out the Flax training script. The Flax training script doesn’t support gradient checkpointing or gradient accumulation, so you’ll need a GPU with at least 30GB of memory. |
Before running the script, make sure you have the requirements installed: |
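The install command itself is cut off here; based on the layout of the diffusers examples folder, the Flax requirements are typically installed as follows (the requirements_flax.txt path is an assumption about the repository layout, not confirmed by this excerpt):

```shell
# Hypothetical path; adjust to where you cloned the diffusers repository.
pip install -U -r diffusers/examples/dreambooth/requirements_flax.txt
```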