    attention_mask=text_mask,
).predicted_image_embedding
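The prediction above is the model output that the training loss is computed against. For orientation, here is a minimal sketch of the basic denoising pattern that diffusers pipelines share, with a toy untrained UNet2DModel and DDPMScheduler standing in for the real components:

import torch
from diffusers import DDPMScheduler, UNet2DModel

# toy stand-ins; in practice these come from a pretrained pipeline
scheduler = DDPMScheduler(num_train_timesteps=1000)
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)

scheduler.set_timesteps(50)
sample = torch.randn(1, 3, 32, 32)  # start from pure noise

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predict the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # remove a little noise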
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process.

Launch the script

Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script:
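The command below assumes the DATASET_NAME environment variable points at a dataset on the Hub; the dataset id here is just an example:

export DATASET_NAME="lambdalabs/pokemon-blip-captions"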
accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \
--dataset_name=$DATASET_NAME \
--resolution=768 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--checkpoints_total_limit=3 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--validation_prompts="A robot pokemon, 4k photo" \
--report_to="wandb" \
--push_to_hub \
--output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline
import torch

output_dir = "kandi2-prior-pokemon-model"  # the --output_dir used during training
prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16)
prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()}
pipeline = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

prompt = "A robot pokemon, 4k photo"
negative_prompt = "low quality, bad quality"  # example value; list any attributes to avoid
image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0]

Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint!

Next steps

Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful.
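One thing you can do right away: since the script was launched with --push_to_hub, the trained prior can also be reloaded directly from the Hub. A quick sketch, where the repository id is hypothetical (your username plus the --output_dir name):

import torch
from diffusers import DiffusionPipeline

# hypothetical Hub repository id; substitute your own
prior_pipeline = DiffusionPipeline.from_pretrained(
    "your-username/kandi2-prior-pokemon-model", torch_dtype=torch.float16
)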
Textual inversion

The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community-created concepts in the Stable Diffusion Conceptualizer. Import the necessary libraries:
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import make_image_grid

Stable Diffusion 1 and 2

Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer:

pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5"
repo_id_embeds = "sd-concepts-library/cat-toy"

Now you can load a pipeline, and pass the pre-learned concept to it:

pipeline = StableDiffusionPipeline.from_pretrained(
pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipeline.load_textual_inversion(repo_id_embeds)

Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate:

prompt = "a grafitti in a favela wall with a <cat-toy> on it"
num_samples_per_row = 2
num_rows = 2

Then run the pipeline (feel free to adjust parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images, and visualize them with the make_image_grid helper imported at the beginning:

all_images = []
for _ in range(num_rows):
    images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images
    all_images.extend(images)
grid = make_image_grid(all_images, num_rows, num_samples_per_row)
grid

Stable Diffusion XL

Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders, so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at their structure:
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors")
state_dict = load_file(file)
state_dict

{'clip_g': tensor([[ 0.0077, -0.0112,  0.0065,  ...,  0.0195,  0.0159,  0.0275],
...,
[-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]],
'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011],
...,
        [ 0.0475, -0.0508, -0.0145,  ...,  0.0070, -0.0089, -0.0163]]}

There are two tensors, "clip_g" and "clip_l". "clip_g" corresponds to the bigger text encoder in SDXL and refers to pipe.text_encoder_2, while "clip_l" refers to pipe.text_encoder. Now you can load each tensor separately by passing it along with the correct text encoder and tokenizer to load_textual_inversion():

from diffusers import AutoPipelineForText2Image
import torch
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")
pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)
# the embedding should be used as a negative embedding, so we pass it as a negative prompt
generator = torch.Generator().manual_seed(33)
image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0]
image
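For context, load_textual_inversion() works by registering the placeholder token with the tokenizer and writing the learned vector into the text encoder's input embedding matrix. A simplified sketch of that mechanism, not the library's actual implementation, assuming a single embedding vector per encoder:

import torch

def add_inversion_token(tokenizer, text_encoder, token, embedding):
    tokenizer.add_tokens(token)  # register the new placeholder token
    token_id = tokenizer.convert_tokens_to_ids(token)
    text_encoder.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix
    with torch.no_grad():
        # write the learned vector into the new row
        text_encoder.get_input_embeddings().weight[token_id] = embedding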
Depth-to-image

The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images, as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
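Before the API reference below, here is a minimal usage sketch; the checkpoint is the standard depth-conditioned Stable Diffusion 2 checkpoint, and the input image URL is just an example:

import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# no depth_map is passed, so the pipeline estimates one internally with MiDaS
init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
image = pipeline(
    prompt="two tigers", image=init_image, negative_prompt="bad, deformed, ugly", strength=0.7
).images[0]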
Parameters:

vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
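For example, continuing from the usage sketch above, LoRA weights load the same way as in any other pipeline (the repository id here is hypothetical):

# hypothetical LoRA repository; substitute a real one
pipeline.load_lora_weights("your-username/your-depth2img-lora")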
__call__

Parameters:

prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image or tensor representing an image batch to be used as the starting point. Can accept image latents as image only if depth_map is not None.
depth_map (torch.FloatTensor, optional) — Depth prediction to be used as additional conditioning for the image generation process. If not defined, it automatically predicts the depth with self.depth_estimator.
strength (float, optional, defaults to 0.8) — Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image. See the worked example after this list.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
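As the worked example referenced from the strength entry: the image-to-image pipelines only run the tail end of the noise schedule, so roughly strength * num_inference_steps denoising steps actually execute. This is a simplification of the internal timestep selection:

num_inference_steps = 50
strength = 0.8
# only about int(strength * num_inference_steps) steps run on the noised image
steps_run = int(strength * num_inference_steps)  # 40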