```py
preprocessed_images = preprocess_images(examples)

original_images, edited_images = preprocessed_images.chunk(2)
original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)

examples["original_pixel_values"] = original_images
examples["edited_pixel_values"] = edited_images

captions = list(examples[edit_prompt_column])
examples["input_ids"] = tokenize_captions(captions)
return examples
```

Finally, in the training loop, it starts by encoding the edited images into latent space:

```py
latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor
```

Then, the script applies dropout to the original image and edit instruction embeddings to support classifier-free guidance (CFG). This is what enables the model to modulate the influence of the edit instruction and the original image on the edited image.

```py
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode()

if args.conditioning_dropout_prob is not None:
    random_p = torch.rand(bsz, device=latents.device, generator=generator)
    # Sample masks for the edit prompts.
    prompt_mask = random_p < 2 * args.conditioning_dropout_prob
    prompt_mask = prompt_mask.reshape(bsz, 1, 1)
    # Replace dropped prompts with the null (empty string) conditioning.
    null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0]
    encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states)

    # Sample masks for the original images.
    image_mask_dtype = original_image_embeds.dtype
    image_mask = 1 - (
        (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
        * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
    )
    image_mask = image_mask.reshape(bsz, 1, 1, 1)
    # Zero out the image conditioning for dropped examples.
    original_image_embeds = image_mask * original_image_embeds
```

That's pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process.

## Launch the script

Once you're happy with the changes to your script, or if you're okay with the default configuration, you're ready to launch the training script! 🚀

This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you'd like (see the Create a dataset for training guide).

Set the MODEL_NAME environment variable to the name of the model (it can be a model id on the Hub or a path to a local model), and DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository.

For better results, try longer training runs with a larger dataset. We've only tested this training script on a smaller-scale dataset.

To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model.

If you're training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command.

```bash
accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_ID \
  --enable_xformers_memory_efficient_attention \
  --resolution=256 \
  --random_flip \
  --train_batch_size=4 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --checkpointing_steps=5000 \
  --checkpoints_total_limit=1 \
  --learning_rate=5e-05 \
  --max_grad_norm=1 \
  --lr_warmup_steps=0 \
  --conditioning_dropout_prob=0.05 \
  --mixed_precision=fp16 \
  --seed=42 \
  --push_to_hub
```

After training is finished, you can use your new InstructPix2Pix model for inference:

```py
import PIL
import requests
import torch

from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda")
generator = torch.Generator("cuda").manual_seed(0)

image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png")
prompt = "add some ducks to the lake"
num_inference_steps = 20
image_guidance_scale = 1.5
guidance_scale = 10

edited_image = pipeline(
    prompt,
    image=image,
    num_inference_steps=num_inference_steps,
    image_guidance_scale=image_guidance_scale,
    guidance_scale=guidance_scale,
    generator=generator,
).images[0]
edited_image.save("edited_image.png")
```

You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and the edit instructions affect the edited image.

## Stable Diffusion XL

Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train an SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide.

## Next steps

Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post, which covers some experiments we've done with InstructPix2Pix, dataset preparation, and results for different instructions.
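One detail of the training loop worth internalizing is the conditioning dropout schedule. Here is a plain-Python sketch of the same mask logic on scalar values (the script operates on tensors; `cfg_dropout_regime` is a hypothetical helper name, not part of the script), showing how a single draw of `random_p` with `conditioning_dropout_prob = 0.05` lands in one of four CFG regimes:

```python
def cfg_dropout_regime(random_p, p):
    """Mirror the training loop's mask logic for a single example.

    The prompt is dropped when random_p < 2 * p, and the image
    conditioning is dropped when p <= random_p < 3 * p, so each
    conditioning is dropped with probability 2 * p overall and both
    are dropped together with probability p.
    """
    drop_prompt = random_p < 2 * p
    drop_image = (random_p >= p) and (random_p < 3 * p)
    return drop_prompt, drop_image

p = 0.05  # args.conditioning_dropout_prob in the script
print(cfg_dropout_regime(0.02, p))  # (True, False)  -> prompt dropped, image kept
print(cfg_dropout_regime(0.07, p))  # (True, True)   -> both dropped
print(cfg_dropout_regime(0.12, p))  # (False, True)  -> prompt kept, image dropped
print(cfg_dropout_regime(0.50, p))  # (False, False) -> both kept
```

These three dropout modes are what let the pipeline steer with both guidance_scale and image_guidance_scale at inference time.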
## ONNX Runtime

🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You'll need to install 🤗 Optimum with the following command for ONNX Runtime support:

```bash
pip install -q optimum["onnxruntime"]
```

This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime.

### Stable Diffusion

To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True:

```py
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.

To export the pipeline in the ONNX format offline and use it later for inference, use the optimum-cli export command:

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
```

Then to perform inference (you don't have to specify export=True again):

```py
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting.

### Stable Diffusion XL

To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline:

```py
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command:

```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```

SDXL in the ONNX format is supported for text-to-image and image-to-image.
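Since batched prompt generation can exhaust memory (as noted above), a simple workaround is to loop over prompts one at a time. A minimal sketch, where `pipeline` is any callable that returns an object with an `images` list like the pipelines in the snippets above (`generate_one_by_one` is a hypothetical helper name, not Optimum API):

```python
def generate_one_by_one(pipeline, prompts):
    """Run the pipeline on one prompt at a time instead of passing the
    whole list in a single batched call, trading throughput for memory."""
    images = []
    for prompt in prompts:
        # Each call returns an object with an `images` list;
        # keep the first image per prompt.
        images.append(pipeline(prompt).images[0])
    return images
```

Each iteration holds only one generation in memory, at the cost of losing any batching speedup.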
## DeepCache

DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features, taking advantage of the U-Net architecture.

Start by installing DeepCache:

```bash
pip install DeepCache
```

Then load and enable the DeepCacheSDHelper:

```diff
  import torch
  from diffusers import StableDiffusionPipeline
  pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")

+ from DeepCache import DeepCacheSDHelper
+ helper = DeepCacheSDHelper(pipe=pipe)
+ helper.set_params(
+     cache_interval=3,
+     cache_branch_id=0,
+ )
+ helper.enable()
```
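To build intuition for cache_interval, here is a toy illustration (an assumption-laden sketch, not DeepCache's actual internals) of which denoising steps would run a full U-Net pass versus reuse cached high-level features when cache_interval=3:

```python
def cache_schedule(num_steps, cache_interval):
    """Toy illustration: with a cache interval of N, a full U-Net pass
    runs every N steps, and the steps in between reuse cached features."""
    full, cached = [], []
    for step in range(num_steps):
        (full if step % cache_interval == 0 else cached).append(step)
    return full, cached

full, cached = cache_schedule(num_steps=12, cache_interval=3)
print(full)    # [0, 3, 6, 9]
print(cached)  # [1, 2, 4, 5, 7, 8, 10, 11]
```

A larger cache_interval means fewer full passes and a bigger speedup, at some cost in fidelity.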