latent code of the generated videos and a list of booleans indicating whether the corresponding generated video contains "not-safe-for-work" (NSFW) content.
The call function to the pipeline for generation.

backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents

Parameters

callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
extra_step_kwargs —
Extra keyword arguments to pass to the scheduler step.
cross_attention_kwargs —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
num_warmup_steps —
The number of warmup steps.

Returns

latents
Latents of the backward process output at time timesteps[-1].
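In outline, the backward loop applies one denoising step per timestep and periodically fires the callback. A minimal illustrative sketch (the `denoise_step` callable is hypothetical; this is not the pipeline's actual implementation):

```python
def backward_loop_sketch(latents, timesteps, denoise_step, callback=None, callback_steps=1):
    # Minimal sketch of a backward (denoising) loop: apply one denoising
    # step per timestep, and invoke the optional callback with
    # (step, timestep, latents) every `callback_steps` steps, mirroring
    # the callback signature documented above.
    for i, t in enumerate(timesteps):
        latents = denoise_step(latents, t)
        if callback is not None and i % callback_steps == 0:
            callback(i, t, latents)
    return latents
```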
Perform the backward process given a list of time steps.

encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
device (torch.device) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1

Parameters

generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.

Returns

x_t1
Forward process applied to x_t0 from time t0 to t1.
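Adding noise with the corresponding variance means scaling the sample by the ratio of cumulative alphas and adding Gaussian noise that makes the marginal variance come out to 1 − ᾱ_t1. A minimal numpy sketch under that standard DDPM parameterization (the helper name is hypothetical, not the pipeline's actual code):

```python
import numpy as np

def forward_process_sketch(x_t0, alpha_bar, t0, t1, rng):
    # DDPM forward step from time t0 to a later time t1:
    #   x_t1 = sqrt(abar_t1 / abar_t0) * x_t0 + sqrt(1 - abar_t1 / abar_t0) * eps
    # where abar_t is the cumulative product of (1 - beta_t). Composed with
    # the marginal of x_t0, this yields the correct marginal variance
    # (1 - abar_t1) at time t1, i.e. "noise with the corresponding variance".
    ratio = alpha_bar[t1] / alpha_bar[t0]
    eps = rng.standard_normal(x_t0.shape)
    return np.sqrt(ratio) * x_t0 + np.sqrt(1.0 - ratio) * eps
```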
Perform the DDPM forward process from time t0 to t1. This is the same as adding noise with the corresponding variance.

TextToVideoZeroSDXLPipeline

class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None )

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text encoder. Stable Diffusion XL uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
text_encoder_2 (CLIPTextModelWithProjection) —
Second frozen text encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.
tokenizer_2 (CLIPTokenizer) —
Second tokenizer of class CLIPTokenizer.
unet (UNet2DConditionModel) —
Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for zero-shot text-to-video generation using Stable Diffusion XL.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 )

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
prompt_2 (str or List[str], optional) —
The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text encoders.
video_length (int, optional, defaults to 8) —
The number of generated video frames.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
denoising_end (float, optional) —
When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be completed before it is intentionally prematurely terminated. As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a "Mixture of Denoisers" multi-pipeline setup, as elaborated in Refining the Image Output.
guidance_scale (float, optional, defaults to 7.5) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) —
The prompt or prompts not to guide the image generation, to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text encoders.
num_videos_per_prompt (int, optional, defaults to 1) —
The number of videos to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; it is ignored for others.
generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.
frame_ids (List[int], optional) —
Indexes of the frames that are being generated. This is used when generating longer videos chunk-by-chunk.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
motion_field_strength_x (float, optional, defaults to 12) —
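As noted under guidance_scale above, classifier-free guidance combines an unconditional and a text-conditioned noise prediction, with w acting as the interpolation/extrapolation weight. A minimal sketch of that combination step (hypothetical helper, not the pipeline's actual code):

```python
import numpy as np

def apply_classifier_free_guidance(noise_pred_uncond, noise_pred_text, guidance_scale):
    # w of equation 2 of the Imagen paper: move the unconditional
    # prediction toward (or past) the text-conditioned one, scaled by w.
    # w <= 1 effectively disables guidance; larger w follows the prompt
    # more closely at some cost in image quality.
    return noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
```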