make a commit with git commit to record your changes locally:

```
$ git add modified_file.py
$ git commit -m "A descriptive message about your changes."
```

It is a good idea to sync your copy of the code with the original repository regularly. This way you can quickly account for changes:

```
$ git pull upstream main
```

Push the changes to your account using:

```
$ git push -u origin a-descriptive-name-for-my-changes
```

Once you are satisfied, go to the
webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes
to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors
too! So everyone can see the changes in the Pull request, work in your local
branch and push the changes to your fork. They will automatically appear in
the pull request.

Tests

An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the tests folder. We like pytest and pytest-xdist because it runs tests in parallel, which is faster. From the root of the repository, here's how to run tests with pytest for the library:

```
$ python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

In fact, that's how make test is implemented! You can specify a smaller set of tests in order to test only the feature you're working on.

By default, slow tests are skipped. Set the RUN_SLOW environment variable to yes to run them. This will download many gigabytes of models; make sure you have enough disk space and a good Internet connection, or a lot of patience!

```
$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/
```

unittest is fully supported; here's how to run tests with it:

```
$ python -m unittest discover -s tests -t . -v
$ python -m unittest discover -s examples -t examples -v
```

Syncing forked main with upstream (HuggingFace) main

To avoid pinging the upstream repository, which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, please follow these steps when syncing the main branch of a forked repository:

When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch:

```
...
$ git pull --squash --no-commit upstream main
$ git commit -m '<your message without GitHub references>'
$ git push --set-upstream origin your-branch-for-syncing
```

Style guide

For documentation strings, 🧨 Diffusers follows the Google style.
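To make the convention concrete, here is what a Google-style docstring looks like on a small function (the function itself is invented purely for illustration and is not part of the library):

```python
def rescale_image(image_size: int, scale: float = 1.0) -> int:
    """Rescales an image dimension by a scale factor.

    Args:
        image_size (int): The size of the image dimension in pixels.
        scale (float, optional): The multiplicative scale factor. Defaults to 1.0.

    Returns:
        int: The rescaled size, rounded down to the nearest pixel.

    Raises:
        ValueError: If `image_size` is negative.
    """
    if image_size < 0:
        raise ValueError("image_size must be non-negative")
    return int(image_size * scale)
```

The key sections are Args, Returns, and Raises, each introduced by a header line and indented entries, rather than the `:param:`/`:returns:` fields of reStructuredText style.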
VQDiffusionScheduler

VQDiffusionScheduler converts the transformer model's output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, ...

Parameters:
- … — The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked latent pixel.
- num_train_timesteps (int, defaults to 100) — The number of diffusion steps to train the model.
- alpha_cum_start (float, defaults to 0.99999) — The starting cumulative alpha value.
- alpha_cum_end (float, defaults to 0.00009) — The ending cumulative alpha value.
- gamma_cum_start (float, defaults to 0.00009) — The starting cumulative gamma value.
- gamma_cum_end (float, defaults to 0.99999) — The ending cumulative gamma value.

A scheduler for vector quantized diffusion.

This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.

log_Q_t_transitioning_to_known_class

( t: torch.int32, x_t: torch.LongTensor, log_onehot_x_t: torch.FloatTensor, cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)

Parameters:
- t (torch.long) — The timestep that determines which transition matrix is used.
- x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t.
- log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — The log one-hot vectors of x_t.
- cumulative (bool) — If cumulative is False, the single-step transition matrix t-1 -> t is used. If cumulative is True, the cumulative transition matrix 0 -> t is used.

Returns: torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) — Each column of the returned matrix is a row of log probabilities of the complete probability transition matrix. When non-cumulative, it returns self.num_classes - 1 rows because the initial latent pixel cannot be masked.

Where:
- q_n is the probability distribution for the forward process of the n-th latent pixel.
- C_0 is a class of a latent pixel embedding.
- C_k is the class of the masked latent pixel.

Non-cumulative result (omitting logarithms):

```
q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
           .                            .
           .                            .
           .                            .
q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)
```

Cumulative result (omitting logarithms):

```
q_0_cumulative(x_t | x_0 = C_0)     ... q_n_cumulative(x_t | x_0 = C_0)
              .                                       .
              .                                       .
              .                                       .
q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1})
```
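To make the transition matrices above concrete, here is a toy, pure-Python sketch (this is not the library's tensorized implementation; the class count and the alpha/gamma values are made up for illustration). Each non-masked class keeps its value with probability alpha, becomes the masked class with probability gamma, and spreads the remaining mass uniformly over the non-masked classes; the masked class is absorbing:

```python
def single_step_transition_matrix(num_classes, alpha, gamma):
    """Builds a toy Q where Q[j][i] = q(x_t = j | x_{t-1} = i).

    The last class index (num_classes - 1) plays the role of the masked
    class. For a non-masked source class i: probability alpha stays at i,
    gamma jumps to the mask, and the remaining beta is spread uniformly
    over all non-masked classes. The mask is absorbing.
    """
    k = num_classes - 1                 # index of the masked class
    beta = (1.0 - alpha - gamma) / k    # uniform share per non-masked class
    Q = [[0.0] * num_classes for _ in range(num_classes)]
    for i in range(k):                  # columns for non-masked source classes
        for j in range(k):
            Q[j][i] = beta + (alpha if i == j else 0.0)
        Q[k][i] = gamma                 # transition into the masked class
    Q[k][k] = 1.0                       # masked class is absorbing
    return Q

Q = single_step_transition_matrix(4, alpha=0.9, gamma=0.05)
# Every column is a probability distribution over x_t.
for i in range(4):
    assert abs(sum(Q[j][i] for j in range(4)) - 1.0) < 1e-12
```

Chaining such matrices by matrix multiplication yields the cumulative 0 -> t matrix shown above.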
Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each latent pixel in x_t.

q_posterior

( log_p_x_0, x_t, t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels)

Parameters:
- log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — The log probabilities for the predicted classes of the initial latent pixels. Does not include a prediction for the masked class as the initial unnoised image cannot be masked.
- x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t.
- t (torch.long) — The timestep that determines which transition matrix is used.

Returns: torch.FloatTensor of shape (batch size, num classes, num latent pixels) — The log probabilities for the predicted classes of the image at timestep t-1.

Calculates the log probabilities for the predicted classes of the image at timestep t-1:

```
p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) )
```

set_timesteps

( num_inference_steps: int, device: Union[str, torch.device] = None )

Parameters:
- num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model.
- device (str or torch.device, optional) — The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

step

( model_output: torch.FloatTensor, timestep: torch.int64, sample: torch.LongTensor, generator: Optional[torch.Generator] = None, return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple

Parameters:
- timestep (torch.long) — The timestep that determines which transition matrices are used.
- sample (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time timestep.
- generator (torch.Generator, optional) — A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from.
- return_dict (bool, optional, defaults to True) — Whether or not to return a VQDiffusionSchedulerOutput or tuple.

Returns: VQDiffusionSchedulerOutput or tuple — If return_dict is True, VQDiffusionSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.

Predicts the sample from the previous timestep by the reverse transition distribution. See q_posterior() for more details about how the distribution is computed.

VQDiffusionSchedulerOutput

class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput

( prev_sample: torch.LongTensor )

Parameters:
- prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — The computed sample x_{t-1} of the previous timestep. prev_sample should be used as the next model input in the denoising loop.

Output class for the scheduler's step function output.
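The q_posterior sum can be sanity-checked on a toy categorical chain in pure Python (a hedged sketch with made-up 3-class transition matrices, not the library's tensorized implementation). Because the cumulative matrix satisfies q(x_2 | x_0) = sum over x_1 of q(x_2 | x_1) * q(x_1 | x_0), the posterior over x_1 must sum to one:

```python
def matmul(A, B):
    """Multiplies two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy 3-class chain: Q1[j][i] = q(x_1 = j | x_0 = i), Q2 likewise for step 2.
Q1 = [[0.8, 0.1, 0.0],
      [0.1, 0.8, 0.0],
      [0.1, 0.1, 1.0]]    # class 2 is absorbing (plays the masked class)
Q2 = [[0.7, 0.2, 0.0],
      [0.2, 0.7, 0.0],
      [0.1, 0.1, 1.0]]
Q2_cum = matmul(Q2, Q1)   # q(x_2 = j | x_0 = i), the cumulative 0 -> 2 matrix

p_x0 = [0.5, 0.5, 0.0]    # predicted distribution over the initial class
x2 = 0                    # observed class at timestep 2

# p(x_1 | x_2) = sum over x_0 of q(x_2 | x_1) * q(x_1 | x_0) * p(x_0) / q(x_2 | x_0)
posterior = []
for x1 in range(3):
    total = 0.0
    for x0 in range(3):
        if p_x0[x0] == 0.0:
            continue
        total += Q2[x2][x1] * Q1[x1][x0] * p_x0[x0] / Q2_cum[x2][x0]
    posterior.append(total)

assert abs(sum(posterior) - 1.0) < 1e-12
```

The scheduler performs the same computation in log space over batched tensors, which avoids underflow when the number of classes is large.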
Overview

🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are ins...

```
cd diffusers
pip install .
```

Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirements file for SDXL, LoRA or Flax. If you're using one of these scripts, make sure you install its corresponding requirements file.

```
cd ex...
pip install -r requirements.txt
# to train SDXL with DreamBooth
pip install -r requirements_sdxl.txt
```

To speed up training and reduce memory usage, we recommend:
- using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don't need to make any changes to the training code)
- installing xFormers to enable memory-efficient attention
Distributed inference with multiple GPUs

On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 ...

```python
import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
distributed_state = PartialState()
pipeline.to(distributed_state.device)
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt:
    result = pipeline(prompt).images[0]
    result.save(f"result_{distributed_state.process_index}.png")
```