output = pipe(
    prompt="A space rocket, 4K.",  # prompt taken from the example's output caption
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm-motion-lora.gif")
AnimateDiffPipeline

class diffusers.AnimateDiffPipeline( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = ... )

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents.
motion_adapter (MotionAdapter) —
A MotionAdapter to be used in combination with unet to denoise the encoded video latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded video latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for text-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
load_ip_adapter() for loading IP Adapters
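As a quick, hedged illustration of these loading methods, the sketch below loads a motion LoRA and an IP-Adapter into the pipeline; the checkpoint names (guoyww/animatediff-motion-lora-zoom-out, h94/IP-Adapter, emilianJR/epiCRealism) are example repositories and can be swapped for any compatible ones:

import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import load_image

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)

# Load a motion LoRA on top of the motion adapter (example repository name).
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out")

# Load an IP-Adapter so a reference image can condition generation at call time.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")

style_image = load_image("style.png")  # hypothetical local reference image
output = pipe(prompt="A corgi walking in the park", ip_adapter_image=style_image)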
__call__

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated video.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated video.
num_frames (int, optional, defaults to 16) —
The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second
amounts to 2 seconds of video.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality video at the
expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text
prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to
pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is
ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor is generated by sampling using the supplied random generator. Latents should be of shape
(batch_size, num_channels, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) —
Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.FloatTensor], optional) —
Pre-generated image embeddings for IP-Adapter. The list should have the same length as the number of
IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should
contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided,
embeddings are computed from the ip_adapter_image input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an AnimateDiffPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in
self.processor.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int,
callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in
the ._callback_tensor_inputs attribute of your pipeline class.

Returns

AnimateDiffPipelineOutput or tuple

If return_dict is True, AnimateDiffPipelineOutput is returned; otherwise, a tuple is returned where
the first element is a list with the generated frames.
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
>>> from diffusers.utils import export_to_gif

>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter)
>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)
>>> output = pipe(prompt="A corgi walking in the park")
>>> frames = output.frames[0]
>>> export_to_gif(frames, "animation.gif")
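The callback arguments documented above can be used, for instance, to inspect the video latents as denoising progresses. A minimal sketch, assuming latents is listed in the pipeline's _callback_tensor_inputs and that the VAE downsamples by a factor of 8 (typical for Stable Diffusion v1.5 checkpoints):

def log_latents(pipe, step, timestep, callback_kwargs):
    # Video latents have shape (batch_size, num_channels, num_frames, height // 8, width // 8).
    latents = callback_kwargs["latents"]
    print(f"step {step}: latent mean {latents.mean().item():.4f}")
    # The callback must return a dict; any tensors it contains replace the pipeline's.
    return callback_kwargs

output = pipe(
    prompt="A corgi walking in the park",
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
)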
encode_prompt

( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str]) —
prompt to be encoded
device (torch.device) —
torch device
num_images_per_prompt (int) —
number of images that should be generated per prompt
do_classifier_free_guidance (bool) —
whether to use classifier free guidance or not
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that
the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
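As a rough sketch, encode_prompt can precompute embeddings once for reuse across calls; the return value is assumed here to be a (prompt_embeds, negative_prompt_embeds) tuple:

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "A corgi walking in the park",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)

# Reuse the precomputed embeddings instead of raw prompt strings.
output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
)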
AnimateDiffVideoToVideoPipeline

class diffusers.AnimateDiffVideoToVideoPipeline( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DC... )

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents.
motion_adapter (MotionAdapter) —
A MotionAdapter to be used in combination with unet to denoise the encoded video latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded video latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for video-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights
load_ip_adapter() for loading IP Adapters
__call__

Parameters

video (List[PipelineImageInput]) —
The input video to condition the generation on. Must be a list of images/frames of the video.
prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated video.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated video.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality video at the
expense of slower inference.
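A hedged end-to-end sketch of video-to-video generation, reusing the checkpoints from the text-to-video example above; load_video and the strength argument are assumptions here, not confirmed by this section:

import torch
from diffusers import AnimateDiffVideoToVideoPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif, load_video

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "frankjoshua/toonyou_beta6", motion_adapter=adapter
)
pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False)

# Decode an existing clip into a list of PIL frames (load_video is assumed to be
# available in recent diffusers releases; any decoder yielding frames works).
frames = load_video("input.mp4")

output = pipe(
    video=frames,
    prompt="A corgi walking in the park, watercolor style",
    strength=0.6,  # assumed knob controlling deviation from the input video
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "animatediff-vid2vid.gif")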