```py
text_encoder_cls_one = import_model_class_from_model_name_or_path(
    args.pretrained_model_name_or_path, args.revision
)
text_encoder_cls_two = import_model_class_from_model_name_or_path(
    args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
)
```

The prompt and image embeddings are computed first and kept in memory, which isn't typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic).
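If you do hit memory pressure, one minimal sketch of the disk-caching approach uses the Arrow-backed `save_to_disk`/`load_from_disk` helpers from 🤗 Datasets. The cache path here is illustrative, and `compute_embeddings_fn` is the embedding function constructed in the script just below:

```py
import os

from datasets import load_from_disk

embeddings_path = "./precomputed_embeddings"  # illustrative location

if os.path.exists(embeddings_path):
    # Reload embeddings computed by a previous run; Arrow files are
    # memory-mapped, so this avoids holding everything in RAM at once.
    train_dataset = load_from_disk(embeddings_path)
else:
    # Compute the embeddings once and persist them for later runs.
    train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
    train_dataset.save_to_disk(embeddings_path)
```

In the script itself, the embeddings are computed and kept in memory: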
```py
text_encoders = [text_encoder_one, text_encoder_two]
tokenizers = [tokenizer_one, tokenizer_two]

compute_embeddings_fn = functools.partial(
    encode_prompt,
    text_encoders=text_encoders,
    tokenizers=tokenizers,
    proportion_empty_prompts=args.proportion_empty_prompts,
    caption_column=args.caption_column,
)

train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
train_dataset = train_dataset.map(
    compute_vae_encodings_fn,
    batched=True,
    batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps,
    new_fingerprint=new_fingerprint_for_vae,
)
```
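For context, `compute_vae_encodings_fn` wraps a helper that encodes the preprocessed pixel values into scaled latents with the VAE. A simplified sketch of the idea (the function name, column names, and details here are assumptions, not the script's exact code):

```py
import torch

def compute_vae_encodings_sketch(batch, vae):
    # Stack the preprocessed images and match the VAE's device and dtype.
    pixel_values = torch.stack(
        [torch.as_tensor(image) for image in batch["pixel_values"]]
    ).to(vae.device, dtype=vae.dtype)

    with torch.no_grad():
        # Encode to latents; the training loop expects latents scaled by
        # the VAE's scaling factor.
        latents = vae.encode(pixel_values).latent_dist.sample()

    batch["model_input"] = (latents * vae.config.scaling_factor).cpu()
    return batch
```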
After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory:

```py
del text_encoders, tokenizers, vae
gc.collect()
torch.cuda.empty_cache()
```

Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you'll see the timestep weights are calculated and used to sample the timesteps at which noise is added:

```py
weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to(
    model_input.device
)
timesteps = torch.multinomial(weights, bsz, replacement=True).long()
noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps)
```
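The `generate_timestep_weights` helper is defined inside the training script. As a rough illustration of the idea only (not the script's exact implementation), biasing the later, noisier part of the schedule might look like this:

```py
import torch

def biased_timestep_weights(num_timesteps: int, bias_portion: float = 0.25, multiplier: float = 2.0):
    """Illustrative only: upweight the last `bias_portion` of the timesteps."""
    weights = torch.ones(num_timesteps)
    num_biased = int(num_timesteps * bias_portion)
    # Sample the noisier end of the schedule `multiplier` times more often.
    weights[num_timesteps - num_biased:] *= multiplier
    # Normalize into a valid distribution for torch.multinomial.
    return weights / weights.sum()
```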
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process.

## Launch the script

Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

Let's train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities.

To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You'll also need to add --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.

```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"

accelerate launch train_text_to_image_sdxl.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_model_name_or_path=$VAE_NAME \
  --dataset_name=$DATASET_NAME \
  --enable_xformers_memory_efficient_attention \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --proportion_empty_prompts=0.2 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=10000 \
  --use_8bit_adam \
  --learning_rate=1e-06 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --mixed_precision="fp16" \
  --report_to="wandb" \
  --validation_prompt="a cute Sundar Pichai creature" \
  --validation_epochs 5 \
  --checkpointing_steps=5000 \
  --output_dir="sdxl-pokemon-model" \
  --push_to_hub
```

After you've finished training, you can use your newly trained SDXL model for inference!

```py
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda")

prompt = "A pokemon with green eyes and red legs."
image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")
```

## Next steps

Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful:

- Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings.
- Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined!
# Kandinsky 3

Kandinsky 3 is created by Vladimir Arkhipkin, Anastasia Maltseva, Igor Pavlov, Andrei Filatov, Arseniy Shakhmatov, Andrey Kuznetsov, Denis Dimitrov, and Zein Shaheen.

The description from its GitHub page:

> Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.

Its architecture includes 3 main components:

- FLAN-UL2, an encoder-decoder model based on the T5 architecture.
- A new U-Net architecture featuring BigGAN-deep blocks, which doubles depth while maintaining the same number of parameters.
- Sber-MoVQGAN, a decoder proven to have superior results in image restoration.

The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.

Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
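Here is a minimal text-to-image sketch. It assumes the kandinsky-community/kandinsky-3 checkpoint with fp16 weights and a GPU; adjust the offloading strategy to your hardware:

```py
import torch
from diffusers import AutoPipelineForText2Image

# Load Kandinsky 3 in half precision (assumes fp16 weights are published
# for this checkpoint).
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # trade speed for lower peak GPU memory

prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats."
image = pipe(prompt, num_inference_steps=25, guidance_scale=3.0).images[0]
image.save("kandinsky3.png")
```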
## Kandinsky3Pipeline

class diffusers.Kandinsky3Pipeline

( tokenizer: T5Tokenizer, text_encoder: T5EncoderModel, unet: Kandinsky3UNet, scheduler: DDPMScheduler, movq: VQModel )

__call__

( prompt: Union = None, num_inference_steps: int = 25, guidance_scale: float = 3.0, negative_prompt: Union = None, num_images_per_prompt: Optional = 1, height: Optional = 1024, width: Optional = 1024, generator: Union = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, attention_mask: Optional = None, negative_attention_mask: Optional = None, output_type: Optional = 'pil', return_dict: bool = True, latents = None, callback_on_step_end: Optional = None, callback_on_step_end_tensor_inputs: List = ['latents'], **kwargs ) → ImagePipelineOutput or tuple

Parameters

- prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
- num_inference_steps (int, optional, defaults to 25) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- timesteps (List[int], optional) — Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps timesteps are used. Must be in descending order.
- guidance_scale (float, optional, defaults to 3.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- height (int, optional, defaults to 1024) — The height in pixels of the generated image.
- width (int, optional, defaults to 1024) — The width in pixels of the generated image.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask. Must be provided if passing prompt_embeds directly.
- negative_attention_mask (torch.FloatTensor, optional) — Pre-generated negative attention mask. Must be provided if passing negative_prompt_embeds directly.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
- clean_caption (bool, optional, defaults to True) — Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to be installed. If the dependencies are not installed, the embeddings will be created from the raw prompt.
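For reference, here is a sketch of calling the pipeline class directly with several of the parameters documented above (checkpoint and hardware assumptions as in the earlier example):

```py
import torch
from diffusers import Kandinsky3Pipeline

pipe = Kandinsky3Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# A seeded generator makes the result reproducible.
generator = torch.Generator(device="cpu").manual_seed(0)

image = pipe(
    prompt="An oil painting of a lighthouse on a cliff at dawn",
    negative_prompt="low quality, blurry",
    num_inference_steps=25,
    guidance_scale=3.0,
    height=1024,
    width=1024,
    num_images_per_prompt=1,
    generator=generator,
).images[0]
image.save("lighthouse.png")
```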