expense of slower inference.

- strength (float, optional, defaults to 0.8) — Higher strength leads to more differences between the original video and the generated video.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- negative_prompt (str or List[str], optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
- generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. Latents should be of shape (batch_size, num_channel, num_frames, height, width).
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
- ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters.
- ip_adapter_image_embeds (List[torch.FloatTensor], optional) — Pre-generated image embeddings for IP-Adapter. It should be a list with length equal to the number of IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated video. Choose between torch.FloatTensor, PIL.Image, or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return an AnimateDiffPipelineOutput instead of a plain tuple.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
- clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer will be used for computing the prompt embeddings.
- callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
- callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You can only include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns: pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput or tuple — If return_dict is True, an AnimateDiffPipelineOutput is returned; otherwise, a tuple is returned where the first element is a list with the generated frames.
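As a rough sketch of what guidance_scale does at each denoising step (this mirrors the standard classifier-free guidance formula, not this pipeline's exact internal code), the unconditional and text-conditioned noise predictions are combined like so:

```python
import numpy as np

def apply_cfg(noise_uncond, noise_text, guidance_scale):
    # Classifier-free guidance: push the prediction away from the
    # unconditional direction, toward the text-conditioned one.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# Toy one-element "latents" just to show the arithmetic.
uncond = np.array([0.2])
text = np.array([0.6])

# At guidance_scale == 1.0 the result reduces to the text-conditioned
# prediction, which is why guidance is considered "enabled" only above 1.
print(apply_cfg(uncond, text, 1.0))
print(apply_cfg(uncond, text, 7.5))
```

Higher scales extrapolate further along the text-conditioned direction, which is why image quality degrades if guidance_scale is pushed too high.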
The call function to the pipeline for generation.

encode_prompt

( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None )

Parameters:
- prompt — prompt to be encoded
- device (torch.device) — torch device
- num_images_per_prompt (int) — number of images that should be generated per prompt
- do_classifier_free_guidance (bool) — whether to use classifier-free guidance or not
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

AnimateDiffPipelineOutput

class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput

( frames: Union )

Parameters:
- frames (torch.Tensor, np.ndarray, or List[L...) — List of video outputs. It can be a nested list of length batch_size, with each sub-list containing denoised PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape (batch_size, num_frames, channels, height, width).

Output class for AnimateDiff pipelines.
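The tensor form of frames can be sketched with plain NumPy (the array below is dummy data; the shape names mirror the docstring above):

```python
import numpy as np

batch_size, num_frames, channels, height, width = 2, 16, 3, 64, 64

# Dummy stand-in for AnimateDiffPipelineOutput.frames in its array form.
frames = np.zeros((batch_size, num_frames, channels, height, width), dtype=np.uint8)

# Iterating over the first axis yields one video per batch element,
# each holding num_frames individual frames.
for video in frames:
    assert video.shape == (num_frames, channels, height, width)

print(frames.shape)  # (2, 16, 3, 64, 64)
```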
Speed up inference

There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved ...
```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True
```

You can learn more about TF32 in the Mixed precision training guide.

Half-precision weights

To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

Don't use torch.autocast in any of the pipelines, as it can lead to black images and is always slower than pure float16 precision.
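To see why float16 roughly halves the memory footprint, here is back-of-the-envelope arithmetic for weight storage. The parameter count below is an assumption for illustration (approximately the size of the Stable Diffusion v1.5 UNet); the exact number varies by model and component:

```python
# Assumed parameter count, for illustration only.
num_params = 860_000_000

fp32_gb = num_params * 4 / 1024**3  # 4 bytes per float32 weight
fp16_gb = num_params * 2 / 1024**3  # 2 bytes per float16 weight

print(f"fp32: {fp32_gb:.2f} GiB, fp16: {fp16_gb:.2f} GiB")
```

The same halving applies to activations, which is where much of the speedup on modern GPUs comes from as well.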
Image-to-image

Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent ...

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

You'll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention() to save memory and increase inference speed. If you're using PyTorch 2.0, then you don't need to call enable_xformers_memory_efficient_attention()...

```python
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

(initial image | generated image)

Popular models

The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and ...
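The strength parameter described earlier controls how much of the noise schedule an image-to-image call actually runs: roughly, only the last strength * num_inference_steps steps are denoised, so lower strength preserves more of the initial image. This is a simplified sketch of that timestep truncation, not diffusers' exact implementation:

```python
def img2img_steps(num_inference_steps, strength):
    # With strength=1.0 the init image is fully noised and every step runs;
    # lower strength starts denoising partway through the schedule.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start  # steps actually executed

print(img2img_steps(50, 0.8))  # 40
print(img2img_steps(50, 1.0))  # 50
```

This is also why very low strength values finish faster: fewer denoising steps are executed in total.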
```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
```