Pipeline for sampling actions from a diffusion model trained to predict sequences of states.

Original implementation inspired by this repository: https://github.com/jannerm/diffuser.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, or running on a particular device).

Parameters

value_function (UNet1DModel) — A specialized U-Net for fine-tuning trajectories based on reward.
unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded trajectories. The default for this application is DDPMScheduler.
env — An environment following the OpenAI Gym API to act in. For now, only Hopper has pretrained models.
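As a rough illustration of value guidance, here is a toy sketch in plain Python: a value function scores a trajectory, and its gradient nudges each denoised estimate toward higher reward. Everything below is illustrative (the toy value function, the stand-in denoiser), not the pipeline's actual API, which uses a UNet1DModel denoiser and a DDPM scheduler.

```python
import random

# Toy sketch of value-guided denoising (illustrative only).
# A learned value function scores a trajectory; its gradient nudges
# each denoising step toward higher-reward trajectories.

def value(traj):
    # Hypothetical value function: reward is highest when states sit at 1.0.
    return -sum((s - 1.0) ** 2 for s in traj)

def value_grad(traj):
    # Analytic gradient of the toy value function above.
    return [-2.0 * (s - 1.0) for s in traj]

def guided_step(denoised, scale=0.1):
    # Nudge the denoiser's proposal with a gradient-ascent step on the value.
    return [d + scale * g for d, g in zip(denoised, value_grad(denoised))]

random.seed(0)
noisy = [random.gauss(0.0, 1.0) for _ in range(4)]  # noisy trajectory states
denoised = [0.5 * s for s in noisy]                 # stand-in for the U-Net estimate
guided = guided_step(denoised)
print(value(guided) > value(denoised))              # guidance improves the score: True
```

The real pipeline repeats a step like this across the full denoising schedule, then extracts the first action of the highest-value trajectory.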
Kandinsky 2.2

This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.

Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes a...
cd diffusers
pip install .

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

cd examples/kandinsky2_2/text_to_image
pip install -r requirements.txt

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.

Initialize an 🤗 Accelerate environment:

...
from accelerate.utils import write_basic_config

write_basic_config()

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training scripts that are important for understanding how to modify them...
--mixed_precision="fp16"

Most of the parameters are identical to the parameters in the Text-to-image training guide, so let's get straight to a walkthrough of the Kandinsky training scripts!

Min-SNR weighting

The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence....
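In plain Python, the rebalancing looks roughly like this. This is a sketch assuming a DDPM-style linear beta schedule; the function names are illustrative, not the training script's.

```python
# Sketch of Min-SNR-gamma loss weighting (illustrative names, assuming a
# DDPM-style linear beta schedule).
def alphas_cumprod(num_steps=1000, beta_start=1e-4, beta_end=0.02):
    # Cumulative product of (1 - beta_t), the standard DDPM quantity.
    acp, prod = [], 1.0
    for t in range(num_steps):
        beta = beta_start + (beta_end - beta_start) * t / (num_steps - 1)
        prod *= 1.0 - beta
        acp.append(prod)
    return acp

def min_snr_weight(t, acp, gamma=5.0):
    snr = acp[t] / (1.0 - acp[t])  # signal-to-noise ratio at timestep t
    return min(snr, gamma) / snr   # clamp: down-weights easy, low-noise steps

acp = alphas_cumprod()
# Early (low-noise) timesteps have a huge SNR, so their loss weight shrinks
# toward 0; late (high-noise) timesteps keep a weight of 1.
print(min_snr_weight(10, acp), min_snr_weight(900, acp))
```

Clamping the SNR at gamma (set by --snr_gamma) keeps a few nearly noise-free timesteps from dominating the loss.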
--snr_gamma=5.0

Training script

The training script is also similar to the Text-to-image training guide, but it's been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts.

The main() function contain...
image_processor = CLIPImageProcessor.from_pretrained(
    args.pretrained_prior_model_name_or_path, subfolder="image_processor"
)
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer")

with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype
    ).eval()
    text_encoder = CLIPTextModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype
    ).eval()

Kandinsky uses a PriorTransformer to generate the image embeddings, so you'll want to set up the optimizer to learn the prior model's parameters.

prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior")
prior.train()
optimizer = optimizer_cls(
    prior.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)

Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor:

def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values
    examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
    return examples

Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction:

model_pred = prior(
    noisy_latents,
    timestep=timesteps,
    proj_embedding=prompt_embeds,
    encoder_hidden_states=text_encoder_hidden_states,
    attention_mask=text_mask,
).predicted_image_embedding

If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process.

Launch the script

Once you've made all your changes or you're okay with the default configuration...
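The noising step in that training loop can be sketched in plain Python. Everything below is a stand-in (the embeddings are random lists, the "model" is a placeholder); the real script works on torch tensors and uses the scheduler's add_noise().

```python
import math
import random

# Illustrative sketch of the prior training step: noise the clean image
# embedding, then measure how far a prediction is from recovering it.
def add_noise(x0, eps, acp_t):
    # Standard DDPM forward process: x_t = sqrt(acp)*x0 + sqrt(1-acp)*eps
    a, b = math.sqrt(acp_t), math.sqrt(1.0 - acp_t)
    return [a * x + b * e for x, e in zip(x0, eps)]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

random.seed(0)
image_embeds = [random.gauss(0, 1) for _ in range(8)]  # stand-in CLIP image embedding
noise = [random.gauss(0, 1) for _ in range(8)]
noisy_latents = add_noise(image_embeds, noise, acp_t=0.5)

# The prior is trained so its prediction recovers the clean embedding;
# here the "prediction" is just the noisy input, so the loss is nonzero.
model_pred = noisy_latents
loss = mse(model_pred, image_embeds)
print(loss > 0.0)
```

In the actual script, model_pred comes from the prior(...) call above and the optimizer minimizes this MSE.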
accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ |
--dataset_name=$DATASET_NAME \ |
--resolution=768 \ |
--train_batch_size=1 \ |
--gradient_accumulation_steps=4 \ |
--max_train_steps=15000 \ |
--learning_rate=1e-05 \ |
--max_grad_norm=1 \ |
--checkpoints_total_limit=3 \ |
--lr_scheduler="constant" \ |
--lr_warmup_steps=0 \ |
--validation_prompts="A robot pokemon, 4k photo" \ |
--report_to="wandb" \ |
--push_to_hub \ |
--output_dir="kandi2-prior-pokemon-model"

Once training is finished, you can use your newly trained model for inference!

from diffusers import AutoPipelineForText2Image, DiffusionPipeline
import torch |
prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) |
prior_components = {"prior_" + k: v for k,v in prior_pipeline.components.items()} |
pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) |
pipeline.enable_model_cpu_offload()

prompt = "A robot pokemon, 4k photo"
image = pipeline(prompt=prompt).images[0]

Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint!

Next steps

Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be h...
UniDiffuser

The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, and Jun Zhu.

The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed Un...
import torch
from diffusers import UniDiffuserPipeline
device = "cuda" |
model_id_or_path = "thu-ml/unidiffuser-v1" |
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) |
pipe.to(device) |
# Unconditional image and text generation. The generation task is automatically inferred. |
sample = pipe(num_inference_steps=20, guidance_scale=8.0) |
image = sample.images[0] |
text = sample.text[0] |
image.save("unidiffuser_joint_sample_image.png") |
print(text)

This is also called "joint" generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline.

It is also possible to manually specify the unconditional generation task ("mode") with UniDiffuserPipeline.set_joint_mode():

# Equivalent to the above.
pipe.set_joint_mode() |
sample = pipe(num_inference_steps=20, guidance_scale=8.0)

When the mode is set manually, subsequent calls to the pipeline will use that mode without attempting to infer it.
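The input-based task inference can be pictured with a small sketch. The dispatch below is hypothetical and purely illustrative; the actual logic lives inside UniDiffuserPipeline and is more involved.

```python
# Hypothetical sketch of how a UniDiffuser-style pipeline could infer the
# generation task from which inputs the caller supplies.
def infer_mode(prompt=None, image=None):
    if prompt is None and image is None:
        return "joint"     # nothing supplied: sample image and text jointly
    if image is None:
        return "text2img"  # only a prompt: condition the image on the text
    if prompt is None:
        return "img2text"  # only an image: condition the text on the image
    return "text2img"      # both supplied: illustrative tie-break only

print(infer_mode())                       # joint
print(infer_mode(prompt="an astronaut"))  # text2img
```

Setting a mode explicitly, as with set_joint_mode() above, bypasses this kind of inference entirely.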