timestep (int) — current discrete timestep in the diffusion chain. |
sample (torch.FloatTensor) — the current instance of the sample being created by the diffusion process.
eta (float) — the weight of the added noise in the diffusion step.
use_clipped_model_output (bool) — if True, compute a “corrected” model_output from the clipped
predicted original sample. This is necessary because the predicted original sample is clipped to [-1, 1] when
self.config.clip_sample is True. If no clipping has happened, the “corrected” model_output
coincides with the one provided as input and use_clipped_model_output has no effect.
generator — a random number generator used to sample the added noise.
variance_noise (torch.FloatTensor) — instead of generating the variance noise with generator, you
can provide the noise tensor for the variance directly. This is useful for methods such as
CycleDiffusion (https://arxiv.org/abs/2210.05559).
return_dict (bool) — whether to return a DDIMSchedulerOutput instead of a plain tuple.
Returns |
~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple |
~schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When |
returning a tuple, the first element is the sample tensor. |
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion |
process from the learned model outputs (most often the predicted noise). |
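To make the parameters above concrete, here is a minimal pure-PyTorch sketch of the deterministic DDIM update that step performs — not the library's implementation. The function name ddim_step and the alpha_bar_t / alpha_bar_prev arguments are illustrative; they stand for the scheduler's cumulative alphas_cumprod at the current and previous timesteps, and the eta and noise arguments mirror the eta and variance_noise parameters described above:

```python
import torch

def ddim_step(model_output, sample, alpha_bar_t, alpha_bar_prev, eta=0.0, noise=None):
    """One reverse DDIM update: predict x_0 from the noise estimate, then step x_t -> x_{t-1}."""
    # 1. Predict the original sample x_0 from the predicted noise (epsilon prediction).
    pred_original = (sample - (1 - alpha_bar_t) ** 0.5 * model_output) / alpha_bar_t ** 0.5

    # 2. Variance scale sigma_t; eta controls stochasticity (eta=0 is deterministic DDIM).
    sigma = eta * (((1 - alpha_bar_prev) / (1 - alpha_bar_t)) * (1 - alpha_bar_t / alpha_bar_prev)) ** 0.5

    # 3. Direction term pointing toward x_t.
    direction = (1 - alpha_bar_prev - sigma**2) ** 0.5 * model_output

    # 4. Combine; add scaled noise only when eta > 0.
    prev_sample = alpha_bar_prev**0.5 * pred_original + direction
    if eta > 0:
        noise = torch.randn_like(sample) if noise is None else noise
        prev_sample = prev_sample + sigma * noise
    return prev_sample
```

With eta=0 the update is fully deterministic; passing noise explicitly plays the same role as supplying variance_noise instead of a generator.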
Unconditional image generation

Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble the training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt...
cd diffusers
pip install .

Then navigate to the example folder containing the training script and install the required dependencies:

cd examples/unconditional_image_generation
pip install -r requirements.txt

🤗 Accelerate is a library that helps you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: ...
write_basic_config()

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

Script parameters

The following sections highlight parts of the training script that are important for understandin...
--mixed_precision="bf16"

Some basic and important parameters to specify include:

--dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on
--output_dir: where to save the trained model
--push_to_hub: whether to push the trained model to the Hub
--checkpointing_steps: frequency of s...
model = UNet2DModel(
    sample_size=args.resolution,
    in_channels=3,
    out_channels=3,
    layers_per_block=2,
    block_out_channels=(128, 128, 256, 256, 512, 512),
    down_block_types=(
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",
        "AttnUpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)

Next, the script initializes a scheduler and optimizer:

# Initialize the scheduler
accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys())
if accepts_prediction_type:
    noise_scheduler = DDPMScheduler(
        num_train_timesteps=args.ddpm_num_steps,
        beta_schedule=args.ddpm_beta_schedule,
        prediction_type=args.prediction_type,
    )
else:
    noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule)
# Initialize the optimizer
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)

Then it loads a dataset, and you can specify how to preprocess it:

dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train")
augmentations = transforms.Compose(
    [
        transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
        transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
        transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)

Finally, the training loop handles everything else, such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipel...
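The core of that training loop can be sketched in plain PyTorch. This is a minimal illustration, not the script's actual code: a trivial Conv2d stands in for the UNet, the noise schedule is inlined rather than coming from noise_scheduler.add_noise, and names like alphas_cumprod and clean_images are illustrative:

```python
import torch
import torch.nn.functional as F

# Stand-in denoiser; a real run uses the UNet2DModel configured earlier.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Linear beta schedule and its cumulative alpha products.
num_train_timesteps = 1000
betas = torch.linspace(1e-4, 0.02, num_train_timesteps)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

clean_images = torch.randn(4, 3, 16, 16)  # one "batch" of preprocessed images
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, num_train_timesteps, (clean_images.shape[0],))

# Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
abar = alphas_cumprod[timesteps].view(-1, 1, 1, 1)
noisy_images = abar.sqrt() * clean_images + (1 - abar).sqrt() * noise

# Predict the noise residual and regress it against the true noise.
noise_pred = model(noisy_images)
loss = F.mse_loss(noise_pred, noise)

loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In the real script the forward-noising line is handled by noise_scheduler.add_noise, and checkpointing and Hub pushes wrap around this inner step.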
<hfoptions id="launchtraining"> |
<hfoption id="single GPU"> |
accelerate launch train_unconditional.py \