Log in to your Hugging Face account so the training script can download the pretrained weights:
huggingface-cli login
If you have already cloned the repo, then you won’t need to go through these steps. Instead, you can pass the path to your local checkout to the training script and it will be loaded from there.
Hardware Requirements for Fine-tuning
With gradient_checkpointing and mixed_precision it should be possible to fine-tune the model on a single 24GB GPU. For a higher batch_size and faster training it’s better to use GPUs with more than 30GB of GPU memory. You can also use JAX / Flax for fine-tuning on TPUs or GPUs; see below for details.
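As a rough sanity check on these numbers, here is a back-of-the-envelope estimate (the ~860M UNet parameter count is an approximation, and activation memory, which gradient_checkpointing and fp16 reduce, is ignored):

```python
# Back-of-the-envelope memory arithmetic for full fine-tuning (a rough
# sketch with assumed numbers: ~860M UNet parameters, fp32 everywhere,
# activations ignored). Each parameter needs its weight, its gradient,
# and two Adam moment buffers.
params = 860e6          # approximate Stable Diffusion v1 UNet parameter count
bytes_per_value = 4     # fp32

weights = params * bytes_per_value
gradients = params * bytes_per_value
adam_states = 2 * params * bytes_per_value

gib = (weights + gradients + adam_states) / 2**30
print(f"~{gib:.1f} GiB before activations")  # ≈ 12.8 GiB
```

Activations for 512×512 training take up most of the remaining budget, which is why the memory-saving flags matter on a 24GB card.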
Fine-tuning Example
The following script launches a fine-tuning run using Justin Pinkney’s captioned Pokémon dataset, available on the Hugging Face Hub.
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"
accelerate launch train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--mixed_precision="fp16" \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-pokemon-model"
To run on your own training files you need to prepare the dataset in the format required by the datasets library. You can upload your dataset to the Hub, or you can prepare a local folder with your files; the datasets documentation explains how to do this.
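Assuming you use the datasets "imagefolder" layout, a local folder can be as simple as your images plus a metadata.jsonl mapping each file to its caption. A minimal sketch (file names and captions below are made up, and the column names your script expects may differ):

```python
import json
from pathlib import Path
from tempfile import mkdtemp

# Hypothetical layout for a local --train_data_dir folder: images plus a
# metadata.jsonl with one JSON record per image ("imagefolder" format).
root = Path(mkdtemp())
records = [
    {"file_name": "0001.png", "text": "a drawing of a green pokemon"},
    {"file_name": "0002.png", "text": "a cartoon bird with a fiery tail"},
]
with open(root / "metadata.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

print(root / "metadata.jsonl")
```

The image files named in file_name would sit in the same folder next to metadata.jsonl.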
You should modify the script if you wish to use custom loading logic. We have left pointers in the code in the appropriate places :)
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export TRAIN_DIR="path_to_your_dataset"
export OUTPUT_DIR="path_to_save_model"
accelerate launch train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$TRAIN_DIR \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--mixed_precision="fp16" \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir=${OUTPUT_DIR}
Once training is finished, the model will be saved to the OUTPUT_DIR specified in the command. To load the fine-tuned model for inference, pass that path to StableDiffusionPipeline:
import torch
from diffusers import StableDiffusionPipeline
model_path = "path_to_saved_model" |
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) |
pipe.to("cuda") |
image = pipe(prompt="yoda").images[0] |
image.save("yoda-pokemon.png") |
Flax / JAX fine-tuning |
Thanks to @duongna211, it’s possible to fine-tune Stable Diffusion using Flax! This is very efficient on TPU hardware but works great on GPUs too. You can use the Flax training script like this:
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export dataset_name="lambdalabs/pokemon-blip-captions"
python train_text_to_image_flax.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--output_dir="sd-pokemon-model"
Latent Consistency Model Multistep Scheduler
Overview
The multistep and one-step scheduler (Algorithm 3) was introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.
This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps.
LCMScheduler
class diffusers.LCMScheduler( num_train_timesteps: int = 1000, beta_start: float = 0.00085, beta_end: float = 0.012, beta_schedule: str = 'scaled_linear', trained_betas: Union = None, original... )
num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model.
beta_start (float, defaults to 0.00085) — The starting beta value of inference.
beta_end (float, defaults to 0.012) — The final beta value.
beta_schedule (str, defaults to "scaled_linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
original_inference_steps (int, optional, defaults to 50) — The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule.
clip_sample (bool, defaults to True) — Clip the predicted sample for numerical stability.
clip_sample_range (float, defaults to 1.0) — The maximum magnitude for sample clipping. Valid only when clip_sample=True.
set_alpha_to_one (bool, defaults to True) — Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True, the previous alpha product is fixed to 1; otherwise it uses the alpha value at step 0.
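The scaled_linear beta schedule and the original_inference_steps subsampling described above can be sketched in NumPy (a simplified illustration, not the library’s exact implementation):

```python
import numpy as np

# Sketch of the "scaled_linear" schedule: linearly space sqrt(beta)
# between sqrt(beta_start) and sqrt(beta_end), then square. Values
# match the signature defaults shown above.
beta_start, beta_end, num_train_timesteps = 0.00085, 0.012, 1000
betas = np.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2

# Linearly spaced "original" schedule of 50 steps over the 1000
# training timesteps.
original_inference_steps = 50
step = num_train_timesteps // original_inference_steps
origin_timesteps = np.arange(1, original_inference_steps + 1) * step - 1

# Take num_inference_steps evenly spaced timesteps (descending) to form
# the final few-step sampling schedule.
num_inference_steps = 4
skip = original_inference_steps // num_inference_steps
timesteps = origin_timesteps[::-1][::skip][:num_inference_steps]
print(timesteps)  # → [999 759 519 279]
```

This shows how a 4-step LCM schedule is carved out of the 50-step schedule rather than spaced directly over all 1000 training timesteps.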