prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "np") —
The output format of the generated video. Choose between torch.FloatTensor and np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a TextToVideoSDPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer is used for computing the prompt embeddings.

Returns

TextToVideoSDPipelineOutput or tuple

If return_dict is True, TextToVideoSDPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated frames.
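The two return shapes described above can be illustrated with a small stand-in; FakePipelineOutput and fake_call here are hypothetical, mimicking only how diffusers output classes expose a frames attribute, not the real pipeline:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical stand-in for TextToVideoSDPipelineOutput: a dataclass with a
# `frames` field, as diffusers output classes expose their results.
@dataclass
class FakePipelineOutput:
    frames: List[list]

def fake_call(return_dict=True):
    frames = [["frame0", "frame1"]]  # one video of two dummy frames
    if return_dict:
        return FakePipelineOutput(frames=frames)
    return (frames,)  # plain tuple: first element is the list of frame lists

out = fake_call(return_dict=True)
print(out.frames[0])       # ['frame0', 'frame1']
out_tuple = fake_call(return_dict=False)
print(out_tuple[0][0])     # same frames, accessed positionally
```

Either way, indexing with [0] selects the first generated video, which is why the real example below reads pipe(prompt).frames[0].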
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import TextToVideoSDPipeline
>>> from diffusers.utils import export_to_video

>>> pipe = TextToVideoSDPipeline.from_pretrained(
...     "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "Spiderman is surfing"
>>> video_frames = pipe(prompt).frames[0]
>>> video_path = export_to_video(video_frames)
>>> video_path

encode_prompt

< source >

( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
prompt to be encoded
device (torch.device) —
torch device
num_images_per_prompt (int) —
number of images that should be generated per prompt
do_classifier_free_guidance (bool) —
whether to use classifier-free guidance or not
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

VideoToVideoSDPipeline

class diffusers.VideoToVideoSDPipeline

< source >

( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel sche...

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet3DConditionModel) —
A UNet3DConditionModel to denoise the encoded video latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded video latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for text-guided video-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights

__call__

< sou...

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
video (List[np.ndarray] or torch.FloatTensor) —
video frames or tensor representing a video batch to be used as the starting point for the process. Can also accept video latents as image; if passing latents directly, they will not be encoded again.
strength (float, optional, defaults to 0.8) —
Indicates the extent to transform the reference video. Must be between 0 and 1. video is used as a starting point, and more noise is added to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores video.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to higher quality videos at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in video generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "np") —
The output format of the generated video. Choose between torch.FloatTensor and np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a TextToVideoSDPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at
every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
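The interaction between strength and num_inference_steps described above can be sketched in plain Python. The timestep slicing below is an assumption borrowed from the image-to-image convention (run only the last int(num_inference_steps * strength) steps after noising video to the matching level), not a guaranteed implementation detail of this pipeline:

```python
def effective_steps(num_inference_steps, strength):
    # How many denoising steps actually run for a given strength, assuming
    # the img2img-style convention: skip the earliest (least noisy) steps.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start  # steps actually executed

print(effective_steps(50, 0.8))  # 40: only 40 of the 50 steps run
print(effective_steps(50, 1.0))  # 50: full schedule, `video` is essentially ignored
```

This is why a lower strength both preserves more of the reference video and finishes faster: less noise is added, so fewer denoising steps are needed to remove it.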