A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in
self.processor. clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings. Returns
TextToVideoSDPipelineOutput or tuple
If return_dict is True, TextToVideoSDPipelineOutput is
returned, otherwise a tuple is returned where the first element is a list with the generated frames.
The call function to the pipeline for generation. Examples: >>> import torch
>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
>>> from diffusers.utils import export_to_video
>>> from PIL import Image
>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.to("cuda")
>>> prompt = "spiderman running in the desert"
>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames[0]
>>> # save the low-res video
>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4")
>>> # let's offload the text-to-video model
>>> pipe.to("cpu")
>>> # and load the video-to-video model
>>> pipe = DiffusionPipeline.from_pretrained(
... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15"
... )
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()
>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode
>>> pipe.vae.enable_slicing()
>>> # now let's upscale it
>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]
>>> # and denoise it
>>> video_frames = pipe(prompt, video=video, strength=0.6).frames[0]
>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4")
>>> video_path encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) —
prompt to be encoded
device (torch.device) —
torch device num_images_per_prompt (int) —
number of images that should be generated per prompt do_classifier_free_guidance (bool) —
whether to use classifier-free guidance or not negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1). prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument. lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) —
List of video outputs - It can be a nested list of length batch_size, with each sub-list containing denoised PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape
(batch_size, num_frames, channels, height, width). Output class for text-to-video pipelines.
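For illustration, the frames container described above can be inspected like a plain array. The sketch below uses a random NumPy array as a stand-in for real pipeline output (the shapes and the conversion step are assumptions for illustration, not values produced by an actual pipeline run):

```python
import numpy as np

# Hypothetical stand-in for pipeline output: 1 video of 24 RGB frames at 320x576,
# in the (batch_size, num_frames, channels, height, width) layout described above.
frames = np.random.rand(1, 24, 3, 320, 576).astype(np.float32)

batch_size, num_frames, channels, height, width = frames.shape
print(batch_size, num_frames, channels, height, width)  # 1 24 3 320 576

# Convert the first video to a list of HWC uint8 arrays, the layout PIL expects
# for Image.fromarray, as used in the upscaling example earlier.
video = [(frame.transpose(1, 2, 0) * 255).astype(np.uint8) for frame in frames[0]]
print(len(video), video[0].shape)  # 24 (320, 576, 3)
```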
KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) —
The minimum noise magnitude. sigma_max (float, defaults to 100) —
The maximum noise magnitude. s_noise (float, defaults to 1.007) —
The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000,
1.011]. s_churn (float, defaults to 80) —
The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) —
The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) —
The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used
to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) —
The input sample. sigma (float) — generator (torch.Generator, optional) —
A random number generator. Explicit Langevin-like "churn" step of adding noise to the sample according to a gamma_i ≥ 0 to reach a
higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) —
The input sample. timestep (int, optional) —
The current timestep in the diffusion chain. Returns
torch.FloatTensor
A scaled input sample.
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) —
The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) —
The device to which the timesteps should be moved. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) —
The direct output from the learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) —
Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns
~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple
If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned,
otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) —
The direct output from the learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) —
Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns
prev_sample (TODO)
updated sample in the diffusion chain. derivative (TODO): TODO
Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
The predicted denoised sample (x_{0}) based on the model output from the current timestep.
pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler's step function output.
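The churn, Euler step, and second-order correction described above can be sketched in plain NumPy. This is a minimal illustration of the Karras VE sampling loop under stated assumptions, not the diffusers implementation: toy_denoiser is a hypothetical stand-in for a learned model, and the sigma schedule is chosen arbitrarily.

```python
import numpy as np

def toy_denoiser(x, sigma):
    # Hypothetical stand-in for a learned denoiser: shrinks the sample toward zero.
    return x / (1.0 + sigma)

def karras_ve_sample(x, sigmas, s_churn=80.0, s_min=0.05, s_max=50.0, s_noise=1.007, seed=0):
    rng = np.random.default_rng(seed)
    n = len(sigmas) - 1
    for i in range(n):
        sigma, sigma_prev = sigmas[i], sigmas[i + 1]
        # Churn (add_noise_to_input): raise the noise level by gamma_i >= 0,
        # so sigma_hat = sigma_i + gamma_i * sigma_i, only inside [s_min, s_max].
        gamma = min(s_churn / n, 2 ** 0.5 - 1) if s_min <= sigma <= s_max else 0.0
        sigma_hat = sigma + gamma * sigma
        eps = s_noise * rng.standard_normal(x.shape)
        x_hat = x + np.sqrt(sigma_hat**2 - sigma**2) * eps
        # Euler step (step): propagate using the derivative toward the predicted clean sample.
        d = (x_hat - toy_denoiser(x_hat, sigma_hat)) / sigma_hat
        x_prev = x_hat + (sigma_prev - sigma_hat) * d
        # Second-order correction (step_correct), skipped at the final sigma = 0.
        if sigma_prev > 0:
            d_prev = (x_prev - toy_denoiser(x_prev, sigma_prev)) / sigma_prev
            x_prev = x_hat + 0.5 * (sigma_prev - sigma_hat) * (d + d_prev)
        x = x_prev
    return x

# Arbitrary decreasing sigma schedule ending at 0.
sigmas = np.array([50.0, 10.0, 1.0, 0.1, 0.0])
x0 = karras_ve_sample(np.random.default_rng(1).standard_normal((4,)) * 50.0, sigmas)
print(x0.shape)  # (4,)
```

The correction step averages the derivatives at sigma_hat and sigma_prev, which is the same trapezoidal (Heun-like) idea the step/step_correct pair implements.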