Parameters
t (torch.long) —
The timestep that determines which transition matrix is used.
Returns
torch.FloatTensor of shape (batch size, num classes, num latent pixels)
The log probabilities for the predicted classes of the image at timestep t-1, i.e. Equation (11).
Computes the log probabilities for the predicted classes of the image at timestep t-1, i.e. Equation (11).
Instead of computing Equation (11) directly, we use Equation (5) to restate Equation (11) in terms of only
forward probabilities.
Equation (11) stated in terms of forward probabilities via Equation (5):

p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) )

where the sum is over x_0 = {C_0, ..., C_{k-1}} (the classes for x_0).
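The restated equation above can be sketched in log space on a toy example. This is a minimal illustration, not the library's implementation: the class count, transition probabilities, and predicted p(x_0) below are made up, and the real scheduler works on batched tensors of log probabilities rather than Python lists.

```python
import math

def log_sum_exp(vals):
    # numerically stable log(sum(exp(v) for v in vals))
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

num_classes = 2
# Hypothetical forward probabilities for a single latent pixel:
# q_step[i][j] = q(x_t = j | x_{t-1} = i)  (one-step transition)
# q_cum[i][j]  = q(x_{t-1} = j | x_0 = i)  (cumulative transition)
q_step = [[0.9, 0.1], [0.2, 0.8]]
q_cum = [[0.8, 0.2], [0.3, 0.7]]
log_p_x0 = [math.log(0.6), math.log(0.4)]  # model's predicted log p(x_0)

x_t = 0  # observed class at time t

def log_q_xt_given_x0(x0):
    # q(x_t | x_0) = sum over x_{t-1} of q(x_t | x_{t-1}) q(x_{t-1} | x_0)
    return log_sum_exp([math.log(q_cum[x0][k]) + math.log(q_step[k][x_t])
                        for k in range(num_classes)])

# Equation (11) via Equation (5), computed in log space:
# log p(x_{t-1} | x_t) = logsumexp over x_0 of
#   log q(x_t | x_{t-1}) + log q(x_{t-1} | x_0) + log p(x_0) - log q(x_t | x_0)
log_post = []
for x_prev in range(num_classes):
    terms = [math.log(q_step[x_prev][x_t])
             + math.log(q_cum[x0][x_prev])
             + log_p_x0[x0]
             - log_q_xt_given_x0(x0)
             for x0 in range(num_classes)]
    log_post.append(log_sum_exp(terms))
```

Because the forward probabilities are proper distributions, exponentiating log_post yields a distribution over the classes of x_{t-1} that sums to 1.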
set_timesteps
( num_inference_steps: int, device: typing.Union[str, torch.device] = None )
Parameters
num_inference_steps (int) —
The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device) —
The device to which the timesteps and the diffusion process parameters (alpha, beta, gamma) should be moved.
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
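Conceptually, the schedule this builds can be sketched in a few lines. This is a hedged stand-in, assuming the scheduler steps through every inference timestep in descending order; the real method also stores the schedule as a tensor and moves the transition-probability parameters to the given device.

```python
# Stand-in for set_timesteps: build the descending timestep schedule
# [T-1, ..., 1, 0] that the denoising loop iterates over. `make_timesteps`
# is a hypothetical helper name, not the diffusers API.
def make_timesteps(num_inference_steps):
    return list(range(num_inference_steps - 1, -1, -1))

timesteps = make_timesteps(5)  # [4, 3, 2, 1, 0]
```

At inference time, the loop then calls step() once per entry in this schedule, from the largest timestep down to 0.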
step
( model_output: torch.FloatTensor, timestep: torch.int64, sample: torch.LongTensor, generator: typing.Optional[torch.Generator] = None, return_dict: bool = True )
β†’
~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple
Parameters
t (torch.long) β€”
The timestep that determines which transition matrices are used.
x_t β€” (torch.LongTensor of shape (batch size, num latent pixels)):
The classes of each latent pixel at time t
generator β€” (torch.Generator or None):
RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from.
return_dict (bool) β€”
option for returning tuple rather than VQDiffusionSchedulerOutput class
Returns
~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple
~schedulers.scheduling_utils.VQDiffusionSchedulerOutput if return_dict is True, otherwise a tuple.
When returning a tuple, the first element is the sample tensor.
Predicts the sample at the previous timestep via the reverse transition distribution, i.e. Equation (11). See the
docstring of self.q_posterior for details on how Equation (11) is computed.
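The return contract described above (an output object when return_dict=True, otherwise a tuple whose first element is the sample) can be illustrated with a minimal stand-in. The "denoising" below is a placeholder (per-pixel argmax over hypothetical class scores), not the real Equation (11) sampling, and plain Python lists stand in for tensors.

```python
from collections import namedtuple

# Stand-in for VQDiffusionSchedulerOutput: carries the previous sample.
SchedulerOutput = namedtuple("SchedulerOutput", ["prev_sample"])

def step(model_output, timestep, sample, generator=None, return_dict=True):
    """Toy step(): pick the most likely class per latent pixel.

    The real scheduler samples from p(x_{t-1} | x_t) with `generator`;
    here we just take the argmax as a placeholder.
    """
    prev_sample = [max(range(len(scores)), key=scores.__getitem__)
                   for scores in model_output]
    if not return_dict:
        # tuple form: first element is the sample tensor
        return (prev_sample,)
    return SchedulerOutput(prev_sample=prev_sample)

# model_output: class scores for 3 latent pixels over 2 classes
scores = [[0.2, 0.8], [0.9, 0.1], [0.4, 0.6]]
out = step(scores, timestep=10, sample=[0, 0, 0])              # out.prev_sample == [1, 0, 1]
tup = step(scores, timestep=10, sample=[0, 0, 0], return_dict=False)
```

Both call forms produce the same sample; only the wrapper differs, which is why pipeline code can index `[0]` when it passes return_dict=False.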
AutoPipeline

AutoPipeline is designed to:

- make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use
- use multiple pipelines in your workflow

Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one.

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt, num_inference_steps=25).images[0]

Check out the AutoPipeline tutorial to learn how to use this API!

AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models:

- Stable Diffusion
- ControlNet
- Stable Diffusion XL (SDXL)
- DeepFloyd IF
- Kandinsky 2.1
- Kandinsky 2.2

AutoPipelineForText2Image

class diffusers.AutoPipelineForText2Image

( *args **kwargs )

AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The