Whether or not to resume downloading the model weights and configuration files. If set to `False`, any incompletely downloaded files are deleted.
- proxies (`Dict[str, str]`, *optional*) — A dictionary of proxy servers to use by protocol or endpoint, for example, `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- local_files_only (`bool`, *optional*, defaults to `False`) — Whether to only load local model weights and configuration files or not. If set to `True`, the model won't be downloaded from the Hub.
- token (`str` or `bool`, *optional*) — The token to use as HTTP bearer authorization for remote files. If `True`, the token generated from `diffusers-cli login` (stored in `~/.huggingface`) is used.
- revision (`str`, *optional*, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (`str`, *optional*, defaults to `""`) — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (`str`, *optional*) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example, from Civitai) and then load the vector locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

maybe_convert_prompt

( prompt: `Union[str, List[str]]`, tokenizer: `PreTrainedTokenizer` ) → `str` or list of `str`

Parameters

- prompt (`str` or list of `str`) — The prompt or prompts to guide the image generation.
- tokenizer (`PreTrainedTokenizer`) — The tokenizer responsible for encoding the prompt into input tokens.

Returns

`str` or list of `str`: The converted prompt.

Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding, replacing that token with multiple special tokens, each corresponding to one of the vectors. If the prompt has no textual inversion token, or if the textual inversion token is a single vector, the input prompt is returned unchanged.
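As a rough sketch of the expansion this performs (a hypothetical standalone helper; the real method lives on the pipeline and consults the tokenizer's vocabulary), a multi-vector embedding loaded under `<cat-toy>` also registers `<cat-toy>_1`, `<cat-toy>_2`, and so on, and the prompt is rewritten to mention them all:

```python
def maybe_convert_prompt(prompt: str, vocab: set) -> str:
    """Expand multi-vector textual inversion tokens in a prompt.

    If "<token>_1" exists in the vocabulary, the embedding is multi-vector:
    "<token>" is replaced with "<token> <token>_1 ..." until no further
    "_i" suffix is found. Single-vector tokens are left untouched.
    """
    out = []
    for word in prompt.split():
        if word in vocab and f"{word}_1" in vocab:
            expanded, i = [word], 1
            while f"{word}_{i}" in vocab:
                expanded.append(f"{word}_{i}")
                i += 1
            out.append(" ".join(expanded))
        else:
            out.append(word)
    return " ".join(out)


vocab = {"<cat-toy>", "<cat-toy>_1", "<cat-toy>_2"}
print(maybe_convert_prompt("A <cat-toy> backpack", vocab))
# -> "A <cat-toy> <cat-toy>_1 <cat-toy>_2 backpack"
```

This is why the example prompts above can use a single placeholder like `<cat-toy>` regardless of how many vectors the embedding actually contains.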
InstructPix2Pix

InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be "turn the clouds rainy" and the model will edit the input image accordingly. The model is conditioned on both the text prompt (the editing instruction) and the input image. …
```bash
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```bash
cd examples/instruct_pix2pix
pip install -r requirements.txt
```

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.

Initialize an 🤗 Accelerate environment: …
```py
from accelerate.utils import write_basic_config

write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training script that are important for understanding how to modify it, …
```bash
--resolution=512 \
```

Many of the basic and important parameters are described in the Text-to-image training guide, so this guide focuses only on the parameters relevant to InstructPix2Pix:

- `--original_image_column`: the original image before the edits are made
- `--edited_image_column`: the image after the edits are made
- …
```py
# InstructPix2Pix also conditions on the input image, so the UNet's first
# convolution takes 8 input channels (4 noisy latent + 4 image latent)
in_channels = 8
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels)

with torch.no_grad():
    new_conv_in = nn.Conv2d(
        in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
    )
    new_conv_in.weight.zero_()
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
    unet.conv_in = new_conv_in
```

These UNet parameters are updated by the optimizer:

```py
optimizer = optimizer_cls(
    unet.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images.

```py
def preprocess_train(examples):
    preprocessed_images = preprocess_images(examples)

    original_images, edited_images = preprocessed_images.chunk(2)
    original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
    edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)

    examples["original_pixel_values"] = original_images
    examples["edited_pixel_values"] = edited_images

    captions = list(examples[edit_prompt_column])
    examples["input_ids"] = tokenize_captions(captions)
    return examples
```

Finally, in the training loop, it starts by encoding the edited images into latent space:

```py
latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor
```

Then, the script applies dropout to the original image and edit instruction embeddings to support classifier-free guidance (CFG). This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.

```py
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode()

if args.conditioning_dropout_prob is not None:
    random_p = torch.rand(bsz, device=latents.device, generator=generator)
    # Sample masks for the edit prompts.
    prompt_mask = random_p < 2 * args.conditioning_dropout_prob
    prompt_mask = prompt_mask.reshape(bsz, 1, 1)
    # Final text conditioning.
    null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0]
    encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states)

    # Sample masks for the original images.
    image_mask_dtype = original_image_embeds.dtype
    image_mask = 1 - (
        (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
        * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
    )
    image_mask = image_mask.reshape(bsz, 1, 1, 1)
    # Final image conditioning.
    original_image_embeds = image_mask * original_image_embeds
```

That's pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check …
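To make the dropout scheme above concrete, here is a small numeric sketch (with a hypothetical `conditioning_dropout_prob` of 0.05): the text conditioning is dropped when `random_p < 2p` and the image conditioning when `p <= random_p < 3p`, so each conditioning is dropped about `2p` of the time and both together about `p` of the time:

```python
import torch

# Hypothetical value for args.conditioning_dropout_prob
p = 0.05
torch.manual_seed(0)
random_p = torch.rand(100_000)

# Same interval logic as the training script:
# [0, 2p)  -> text conditioning dropped
# [p, 3p)  -> image conditioning dropped
drop_text = random_p < 2 * p
drop_image = (random_p >= p) & (random_p < 3 * p)

print(f"text dropped:  {drop_text.float().mean():.3f}")                 # ~0.10
print(f"image dropped: {drop_image.float().mean():.3f}")                # ~0.10
print(f"both dropped:  {(drop_text & drop_image).float().mean():.3f}")  # ~0.05
```

The overlapping intervals mean the model sees all four combinations during training (both conditionings, text only, image only, neither), which is what lets it serve two guidance scales at inference time.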
```bash
--pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_ID \
  --enable_xformers_memory_efficient_attention \
```
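As an aside, the zero-initialized `conv_in` expansion shown earlier can be sanity-checked on a toy layer: because the new input channels start with zero weights, the expanded convolution initially reproduces the pretrained layer's output, so training starts from the pretrained model's behavior (the layer sizes here are illustrative, not the UNet's real ones):

```python
import torch
from torch import nn

# Toy stand-in for unet.conv_in: a pretrained conv over 4 latent channels.
old_conv = nn.Conv2d(4, 16, kernel_size=3, padding=1)

# Expand to 8 input channels: zero all weights, then copy the old ones
# into the first 4 channels, mirroring the training script's approach.
new_conv = nn.Conv2d(8, 16, kernel_size=3, padding=1)
with torch.no_grad():
    new_conv.weight.zero_()
    new_conv.weight[:, :4].copy_(old_conv.weight)
    new_conv.bias.copy_(old_conv.bias)

noisy_latents = torch.randn(1, 4, 8, 8)
image_latents = torch.randn(1, 4, 8, 8)

# The zeroed channels ignore the image latents, so at initialization the
# expanded layer matches the pretrained layer's output.
out_old = old_conv(noisy_latents)
out_new = new_conv(torch.cat([noisy_latents, image_latents], dim=1))
print(torch.allclose(out_old, out_new, atol=1e-5))  # -> True
```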