motion_adapter (MotionAdapter) —
A MotionAdapter to be used in combination with unet to denoise the encoded video latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for video-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
load_ip_adapter() for loading IP Adapters

__call__
( video: List = None, prompt: Union = None, height: Optional = None, width: Optional = None, num_inference_steps: int = 50, timesteps: Optional = None, guidance_scale: float = 7.5, strength: float = 0.8, negative_prompt: Union = None, num_videos_per_prompt: Optional = 1, eta: float = 0.0, generator: Union = None, latents: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, ip_adapter_image: Union = None, output_type: Optional = 'pil', return_dict: bool = True, cross_attention_kwargs: Optional = None, clip_skip: Optional = None, callback_on_step_end: Optional = None, callback_on_step_end_tensor_inputs: List = ['latents'] ) → AnimateDiffPipelineOutput or tuple

Parameters

video (List[PipelineImageInput]) —
The input video to condition the generation on. Must be a list of images/frames of the video.
prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated video.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated video.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality video at the
expense of slower inference.
strength (float, optional, defaults to 0.8) —
Higher strength leads to more differences between the original video and the generated video.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text
prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to
pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies
to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor is generated by sampling using the supplied random generator. Latents should be of shape
(batch_size, num_channel, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) —
Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") —
The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an AnimateDiffPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in
self.processor.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. A short callback sketch follows the parameter list.
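As an illustration, a minimal step-end callback might look like the sketch below. The cutoff logic and the use of the internal _guidance_scale / num_timesteps attributes are assumptions, not part of this reference:

# Hypothetical step-end callback: switch off classifier-free guidance
# after 60% of the denoising steps to save compute.
def cutoff_guidance(pipe, step_index, timestep, callback_kwargs):
    # num_timesteps and _guidance_scale are pipeline internals; verify they
    # are exposed by your diffusers version before relying on them.
    if step_index == int(0.6 * pipe.num_timesteps):
        pipe._guidance_scale = 1.0
    return callback_kwargs

# Passed to the pipeline call as:
#   pipe(..., callback_on_step_end=cutoff_guidance,
#        callback_on_step_end_tensor_inputs=["latents"])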
Returns

AnimateDiffPipelineOutput or tuple

If return_dict is True, AnimateDiffPipelineOutput is
returned, otherwise a tuple is returned where the first element is a list with the generated frames.
The call function to the pipeline for generation.

Examples:
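A minimal usage sketch follows. The motion adapter (guoyww/animatediff-motion-adapter-v1-5-2), the Stable Diffusion v1.5-based checkpoint (SG161222/Realistic_Vision_V5.1_noVAE), and the input clip path are assumptions; substitute your own models and video:

import imageio
import torch
from PIL import Image
from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Motion adapter and base checkpoint are assumptions; swap in your own.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, timestep_spacing="linspace", beta_schedule="linear"
)

# Read frames from a local clip into a list of PIL images ("input.gif" is a placeholder path).
frames = [Image.fromarray(f) for f in imageio.mimread("input.gif")]

output = pipe(
    video=frames,
    prompt="panda playing a guitar, on a boat, in the ocean, high quality",
    negative_prompt="bad quality, worse quality",
    strength=0.6,  # lower values stay closer to the input video
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")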
disable_freeu
( )
Disables the FreeU mechanism if enabled.

disable_vae_slicing
( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
computing decoding in one step.

disable_vae_tiling
( )
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
computing decoding in one step.

enable_freeu
( s1: float, s2: float, b1: float, b2: float )

Parameters

s1 (float) —
Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to
mitigate the "oversmoothing effect" in the enhanced denoising process.
s2 (float) —
Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to
mitigate the "oversmoothing effect" in the enhanced denoising process.
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features.

Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors
represent the stages where they are being applied. Please refer to the official repository for combinations of
the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable
Diffusion XL.
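For example, enabling FreeU on the pipeline might look like the sketch below; the numbers are in the range the FreeU repository suggests for Stable Diffusion v1.5 and should be treated as a starting point, not a recommendation from this reference:

# Values roughly following the FreeU repository's SD v1.5 suggestions; tune for your checkpoint.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
# ... run the pipeline as usual, then switch it off again if needed:
pipe.disable_freeu()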
enable_vae_slicing
( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling
( )
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
processing larger images.
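Both options are simple toggles on the pipeline object, for example:

# Trade a little speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# Revert to single-pass decoding once memory pressure is gone.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()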
encode_prompt
( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
prompt to be encoded
device (torch.device) —
torch device
num_images_per_prompt (int) —
number of images that should be generated per prompt
do_classifier_free_guidance (bool) —
whether to use classifier free guidance or not
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
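A sketch of calling encode_prompt directly and feeding the resulting embeddings back into the pipeline; the two-tensor return value is an assumption based on the Stable Diffusion pipelines this method mirrors, so check it against your diffusers version:

# Encode once, reuse the embeddings across several generations.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a panda surfing a wave, detailed, cinematic",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)

output = pipe(
    video=frames,  # frames: list of PIL images, loaded elsewhere
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    strength=0.6,
)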
AnimateDiffPipelineOutput

class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput
( frames: Union )

Parameters

frames (List[List[PIL.Image.Image]] or torch.Tensor or np.ndarray) —
List of PIL Images of length batch_size or torch.Tensor or np.ndarray of shape
(batch_size, num_frames, height, width, num_channels).

Output class for AnimateDiff pipelines.
Depth-to-Image Generation
StableDiffusionDepth2ImgPipeline
The depth-guided Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. It uses MiDaS to infer depth based on an image.
StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images, as well as a depth_map to preserve the image structure.
The original codebase can be found here:
Stable Diffusion v2: Stability-AI/stablediffusion
Available Checkpoints are:
stable-diffusion-2-depth: stabilityai/stable-diffusion-2-depth
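A minimal depth-to-image sketch using the checkpoint above; the input image URL is a placeholder, so point it at any RGB image of your own:

import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder initial image; replace with your own.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

image = pipe(
    prompt="two tigers",
    image=init_image,
    negative_prompt="bad, deformed, ugly, bad anatomy",
    strength=0.7,  # how strongly to move away from the initial image
).images[0]
image.save("depth2img.png")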
class diffusers.StableDiffusionDepth2ImgPipeline
(
vae: AutoencoderKL
text_encoder: CLIPTextModel
tokenizer: CLIPTokenizer
unet: UNet2DConditionModel
scheduler: KarrasDiffusionSchedulers
depth_estimator: DPTForDepthEstimation