export INSTANCE_DIR="./dog"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
JAX
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="./dog"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

python train_dreambooth_flax.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=2e-6 \
  --num_class_images=200 \
  --max_train_steps=800
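In both commands, --with_prior_preservation trains on the instance images and on generated class images at the same time, and --prior_loss_weight scales the class term. A minimal sketch of how the two loss terms are combined (the loss values below are hypothetical placeholders, not real model outputs):

```python
def dreambooth_loss(instance_loss: float, prior_loss: float, prior_loss_weight: float = 1.0) -> float:
    """Total loss = instance reconstruction loss + weighted prior-preservation loss."""
    return instance_loss + prior_loss_weight * prior_loss

# With --prior_loss_weight=1.0 both terms contribute equally.
total = dreambooth_loss(instance_loss=0.25, prior_loss=0.10, prior_loss_weight=1.0)
print(total)
```

Raising prior_loss_weight pushes the model to stay closer to the generic class ("a photo of dog"), which helps counter overfitting to the instance images.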
Finetuning with LoRA
You can also use Low-Rank Adaptation of Large Language Models (LoRA), a fine-tuning technique that accelerates training of large models, with DreamBooth. For more details, take a look at the LoRA training guide.
Saving checkpoints while training
It's easy to overfit while training with DreamBooth, so sometimes it's useful to save regular checkpoints during the training process. One of the intermediate checkpoints might actually work better than the final model! Pass the following argument to the training script to enable saving checkpoints:
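LoRA's speed-up comes from freezing the original weights and training only two small low-rank matrices per adapted layer. A rough illustration of the parameter savings (the layer shape and rank below are made-up numbers, not values from the training script):

```python
# Sketch of LoRA's parameter savings: a frozen d_out x d_in weight W is
# adapted as W + B @ A, where B is d_out x r and A is r x d_in with small r.
# The dimensions here are illustrative, not taken from Stable Diffusion.

d_out, d_in, rank = 320, 768, 4  # e.g. one attention projection, rank 4

full_params = d_out * d_in                # training the full matrix
lora_params = d_out * rank + rank * d_in  # training only B and A

print(full_params, lora_params)  # 245760 vs 4352
print(f"trainable fraction: {lora_params / full_params:.2%}")
```

Because only B and A are updated, optimizer state and gradients shrink by the same factor, which is why LoRA fine-tuning fits on much smaller GPUs.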
--checkpointing_steps=500
This saves the full training state in subfolders of your output_dir. Subfolder names begin with the prefix checkpoint-, followed by the number of steps performed so far; for example, checkpoint-1500 would be a checkpoint saved after 1500 training steps.
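Since the subfolders follow the checkpoint-<steps> naming scheme, the most recent checkpoint can be recovered by parsing the step count from each name. A small sketch (the helper function and the demo directory layout are made up for illustration):

```python
import os
import re
import tempfile

def latest_checkpoint(output_dir: str):
    """Return the checkpoint-<steps> subfolder name with the highest step count, or None."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    best, best_steps = None, -1
    for name in os.listdir(output_dir):
        match = pattern.match(name)
        if match and int(match.group(1)) > best_steps:
            best_steps = int(match.group(1))
            best = name
    return best

# Demo with a throwaway directory mimicking a training run
with tempfile.TemporaryDirectory() as out:
    for steps in (500, 1000, 1500):
        os.makedirs(os.path.join(out, f"checkpoint-{steps}"))
    print(latest_checkpoint(out))  # checkpoint-1500
```

This is essentially what --resume_from_checkpoint="latest" does for you, described in the next section.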
Resume training from a saved checkpoint
If you want to resume training from any of the saved checkpoints, you can pass the argument --resume_from_checkpoint to the script and specify the name of the checkpoint you want to use. You can also use the special string "latest" to resume from the last saved checkpoint (the one with the largest number of steps). For example:
--resume_from_checkpoint="checkpoint-1500"
This is a good opportunity to tweak some of your hyperparameters if you wish.
Inference from a saved checkpoint
Saved checkpoints are stored in a format suitable for resuming training. They not only include the model weights, but also the state of the optimizer, data loaders, and learning rate.
If you have "accelerate>=0.16.0" installed, use the following code to run inference from an intermediate checkpoint.
from diffusers import DiffusionPipeline, UNet2DConditionModel
from transformers import CLIPTextModel
import torch

# Load the pipeline with the same arguments (model, revision) that were used for training
model_id = "CompVis/stable-diffusion-v1-4"
unet = UNet2DConditionModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/unet")

# if you have trained with `--train_text_encoder` make sure to also load the text encoder
text_encoder = CLIPTextModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/text_encoder")

pipeline = DiffusionPipeline.from_pretrained(model_id, unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16)
pipeline.to("cuda")

# Perform inference, or save, or push to the hub
pipeline.save_pretrained("dreambooth-pipeline")
If you have "accelerate<0.16.0" installed, you need to convert it to an inference pipeline first: