contains the predicted Gaussian variance.

DPMSolverSinglestepScheduler is a fast, dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers, such as loading and saving.

convert_model_output(model_output: torch.FloatTensor, *args, sample: torch.FloatTensor = None, **kwargs) → torch.FloatTensor

Parameters:
- model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.

Returns: torch.FloatTensor — The converted model output.

Converts the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is designed to discretize an integral of the noise prediction model, while DPM-Solver++ is designed to discretize an integral of the data prediction model. The algorithm and model type are decoupled: you can use either DPMSolver or DPMSolver++ with both noise prediction and data prediction models.

dpm_solver_first_order_update(model_output: torch.FloatTensor, *args, sample: torch.FloatTensor = None, **kwargs) → torch.FloatTensor

Parameters:
- model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
- timestep (int) — The current discrete timestep in the diffusion chain.
- prev_timestep (int) — The previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.

Returns: torch.FloatTensor — The sample tensor at the previous timestep.

One step of the first-order DPMSolver (equivalent to DDIM).

get_order_list(num_inference_steps: int)

Parameters:
- num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.

Computes the solver order at each timestep.

scale_model_input(sample: torch.FloatTensor, *args, **kwargs) → torch.FloatTensor

Parameters:
- sample (torch.FloatTensor) — The input sample.

Returns: torch.FloatTensor — A scaled input sample.

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.

set_timesteps(num_inference_steps: int, device: Union[str, torch.device] = None)

Parameters:
- num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.
- device (str or torch.device, optional) — The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

singlestep_dpm_solver_second_order_update(model_output_list: List[torch.FloatTensor], *args, sample: torch.FloatTensor = None, **kwargs) → torch.FloatTensor

Parameters:
- model_output_list (List[torch.FloatTensor]) —
The direct outputs from the learned diffusion model at the current and later timesteps.
- timestep (int) — The current and later discrete timesteps in the diffusion chain.
- prev_timestep (int) — The previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.

Returns: torch.FloatTensor — The sample tensor at the previous timestep.

One step of the second-order singlestep DPMSolver, computing the solution at time prev_timestep from the time timestep_list[-2].

singlestep_dpm_solver_third_order_update(model_output_list: List[torch.FloatTensor], *args, sample: torch.FloatTensor = None, **kwargs) → torch.FloatTensor

Parameters:
- model_output_list (List[torch.FloatTensor]) — The direct outputs from the learned diffusion model at the current and later timesteps.
- timestep (int) — The current and later discrete timesteps in the diffusion chain.
- prev_timestep (int) — The previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.

Returns: torch.FloatTensor — The sample tensor at the previous timestep.

One step of the third-order singlestep DPMSolver, computing the solution at time prev_timestep from the time timestep_list[-3].

singlestep_dpm_solver_update(model_output_list: List[torch.FloatTensor], *args, sample: torch.FloatTensor = None, order: int = None, **kwargs) → torch.FloatTensor

Parameters:
- model_output_list (List[torch.FloatTensor]) — The direct outputs from the learned diffusion model at the current and later timesteps.
- timestep (int) — The current and later discrete timesteps in the diffusion chain.
- prev_timestep (int) — The previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
- order (int) — The solver order at this step.

Returns: torch.FloatTensor — The sample tensor at the previous timestep.

One step of the singlestep DPMSolver.

step(model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor, return_dict: bool = True) → SchedulerOutput or tuple

Parameters:
- model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
- timestep (int) — The current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process.
- return_dict (bool) — Whether or not to return a SchedulerOutput instead of a tuple.

Returns: SchedulerOutput or tuple — If return_dict is True, SchedulerOutput is returned; otherwise a tuple is returned whose first element is the sample tensor.

Predicts the sample at the previous timestep by reversing the SDE. This function propagates the sample with the singlestep DPMSolver.

SchedulerOutput

class diffusers.schedulers.scheduling_utils.SchedulerOutput(prev_sample: torch.FloatTensor)

Parameters:
- prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.

Base class for the output of a scheduler's step function.
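To make the pieces above concrete, the following numpy sketch (not part of the library; the schedule values are made up for illustration) converts a noise prediction to a data prediction, as convert_model_output does in data-prediction mode, and then applies a first-order DPM-Solver++ update, confirming that it matches a deterministic DDIM step:

```python
import numpy as np

# Hypothetical alpha/sigma values for two adjacent timesteps of a
# variance-preserving schedule (alpha^2 + sigma^2 = 1).
alpha_t, alpha_s = 0.60, 0.80          # current step t -> previous step s
sigma_t, sigma_s = np.sqrt(1 - alpha_t**2), np.sqrt(1 - alpha_s**2)

rng = np.random.default_rng(0)
x_t = rng.standard_normal(4)           # current noisy sample
eps = rng.standard_normal(4)           # model's noise prediction

# Noise prediction -> data (x0) prediction, as in convert_model_output
x0 = (x_t - sigma_t * eps) / alpha_t

# First-order DPM-Solver++ update, written in log-SNR (lambda) space
lam_t, lam_s = np.log(alpha_t / sigma_t), np.log(alpha_s / sigma_s)
h = lam_s - lam_t
x_s = (sigma_s / sigma_t) * x_t - alpha_s * (np.exp(-h) - 1.0) * x0

# The first-order update is equivalent to a deterministic DDIM step
x_s_ddim = alpha_s * x0 + sigma_s * eps
print(np.allclose(x_s, x_s_ddim))      # True
```

The equivalence is exactly the "first-order DPMSolver (equivalent to DDIM)" statement in dpm_solver_first_order_update; the higher-order singlestep updates add correction terms built from the model outputs at earlier timesteps.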
Using Diffusers with other modalities

Diffusers is in the process of expanding to modalities other than images.

| Example type | Colab | Pipeline |
|---|---|---|
| Molecule conformation generation | | ❌ |

More coming soon!
InstructPix2Pix

InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be "turn the clouds rainy" and the model will edit the input image accordingly. The model is conditioned on the text prompt (or editing instruction) and the input image.

This guide explores the train_instruct_pix2pix.py training script to help you become familiar with it and show how you can adapt it for your own use case.

Before running the script, make sure you install the library from source:

git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

cd examples/instruct_pix2pix
pip install -r requirements.txt

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.

Initialize an 🤗 Accelerate environment:

accelerate config

To set up a default 🤗 Accelerate environment without choosing any configurations:

accelerate config default

Or, if your environment doesn't support an interactive shell, like a notebook, you can use:

from accelerate.utils import write_basic_config
write_basic_config()

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the script and let us know if you have any questions or concerns.

Script parameters

The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values that work pretty well are provided for most parameters, but you can also set your own values in the training command if you'd like. For example, to increase the resolution of the input image:

accelerate launch train_instruct_pix2pix.py \
  --resolution=512 \

Many of the basic and important parameters are described in the Text-to-image training guide, so this guide focuses only on the parameters relevant to InstructPix2Pix:

- --original_image_column: the original image before the edits are made
- --edited_image_column: the image after the edits are made
- --edit_prompt_column: the instructions to edit the image
- --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training, which enables classifier-free guidance (CFG) for one or both conditioning inputs

Training script

The dataset preprocessing code and training loop are found in the main() function. This is where you'll make your changes to the training script to adapt it for your own use case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide; this guide instead looks at the parts of the script relevant to InstructPix2Pix.

The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:

in_channels = 8
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels)

with torch.no_grad():
    new_conv_in = nn.Conv2d(
        in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
    )
    new_conv_in.weight.zero_()
    new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
    unet.conv_in = new_conv_in

These UNet parameters are updated by the optimizer:

optimizer = optimizer_cls(
    unet.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)

Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images.

def preprocess_train(examples):
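The requirement that the original and edited images receive identical transformations can be met by sampling the augmentation parameters once and reusing them for both images. A minimal numpy sketch (a hypothetical helper for illustration, not the script's actual preprocessing code):

```python
import numpy as np

def paired_random_crop_and_flip(original, edited, size, rng):
    """Apply one randomly sampled crop + horizontal flip to BOTH images.

    Sampling the augmentation parameters a single time and reusing them
    keeps the original/edited pair spatially aligned, which InstructPix2Pix
    training requires.
    """
    h, w = original.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    flip = rng.random() < 0.5
    out = []
    for img in (original, edited):
        crop = img[top:top + size, left:left + size]
        out.append(crop[:, ::-1] if flip else crop)
    return out

rng = np.random.default_rng(0)
original = rng.random((64, 64, 3))
edited = original + 1.0                      # stand-in for an "edited" image
o, e = paired_random_crop_and_flip(original, edited, 32, rng)
print(np.allclose(e - o, 1.0))               # alignment preserved -> True
```

If the two images were cropped or flipped independently, the pixel-wise correspondence between the model's conditioning image and its target would be destroyed.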