image = pipe("a photo of an astronaut on a moon").images[0]
The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval specifies the frequency of feature caching, i.e. the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes.
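For reference, here is a minimal end-to-end sketch of this workflow, assuming the DeepCache package's import path and constructor (DeepCacheSDHelper(pipe=...)) as used above:
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper  # assumed import path of the DeepCache package

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Wrap the pipeline, configure caching, and turn it on
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a photo of an astronaut on a moon").images[0]

# Deactivate caching again when it is no longer needed
helper.disable()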
Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments for these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper.
You can find more generated samples (original pipeline vs. DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset.
Benchmark
We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Each entry reports the measured inference latency, with the speedup over the original pipeline in parentheses.

| Resolution | Batch size | Original | DeepCache (I=3, B=0) | DeepCache (I=5, B=0) | DeepCache (I=5, B=1) |
|---|---|---|---|---|---|
| 512 | 8 | 15.96 | 6.88 (2.32x) | 5.03 (3.18x) | 7.27 (2.20x) |
| | 4 | 8.39 | 3.60 (2.33x) | 2.62 (3.21x) | 3.75 (2.24x) |
| | 1 | 2.61 | 1.12 (2.33x) | 0.81 (3.24x) | 1.11 (2.35x) |
| 768 | 8 | 43.58 | 18.99 (2.29x) | 13.96 (3.12x) | 21.27 (2.05x) |
| | 4 | 22.24 | 9.67 (2.30x) | 7.10 (3.13x) | 10.74 (2.07x) |
| | 1 | 6.33 | 2.72 (2.33x) | 1.97 (3.21x) | 2.98 (2.12x) |
| 1024 | 8 | 101.95 | 45.57 (2.24x) | 33.72 (3.02x) | 53.00 (1.92x) |
| | 4 | 49.25 | 21.86 (2.25x) | 16.19 (3.04x) | 25.78 (1.91x) |
| | 1 | 13.83 | 6.07 (2.28x) | 4.43 (3.12x) | 7.15 (1.93x) |
Text2Video-Zero
Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi.
Text2Video-Zero enables zero-shot video generation using either:
- A textual prompt
- A prompt combined with guidance from poses or edges
- Video Instruct-Pix2Pix (instruction-guided video editing)
Results are temporally consistent and closely follow the guidance and textual prompts.
The abstract from the paper is:
Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.
Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object.
Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.
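To make modification (ii) more concrete, the following is an illustrative sketch of cross-frame attention, where each frame's queries attend to the keys and values of the first frame. This is only a sketch of the idea, not the CrossFrameAttnProcessor implementation used by the pipelines below:
import torch
import torch.nn.functional as F

def cross_frame_attention(q, k, v, video_length):
    # q, k, v: (batch * video_length, heads, seq_len, head_dim)
    def take_first_frame(x):
        b = x.shape[0] // video_length
        x = x.reshape(b, video_length, *x.shape[1:])       # split batch and frame dimensions
        x = x[:, :1].expand(-1, video_length, -1, -1, -1)  # broadcast frame 0 to every frame
        return x.reshape(b * video_length, *x.shape[2:])
    # every frame attends to the first frame's keys/values,
    # preserving the context, appearance, and identity of the foreground object
    return F.scaled_dot_product_attention(q, take_first_frame(k), take_first_frame(v))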
You can find additional information about Text2Video-Zero on the project page, paper, and original codebase.
Usage example
Text-To-Video
To generate a video from a prompt, run the following Python code:
import torch
import imageio
from diffusers import TextToVideoZeroPipeline
model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
You can change these parameters in the pipeline call:
- Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12
- T and T' (see the paper, Sect. 3.3.1): t0 and t1, in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48
- Video length: video_length, the number of frames to be generated. Default: video_length=8
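For example, a call that overrides all three of these arguments (the values shown are just the defaults listed above):
result = pipe(
    prompt=prompt,
    motion_field_strength_x=12,
    motion_field_strength_y=12,
    t0=45,
    t1=48,
    video_length=8,
).images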
We can also generate longer videos by doing the processing in a chunk-by-chunk manner:
import torch
import imageio
from diffusers import TextToVideoZeroPipeline
import numpy as np
model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
seed = 0
video_length = 24  # 24 frames at 4 fps = 6 seconds
chunk_size = 8
prompt = "A panda is playing guitar on times square"
# Generate the video chunk-by-chunk
result = []
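# Step by chunk_size - 1: frame 0 is prepended to every chunk (for cross-frame attention) and dropped from the output below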
chunk_ids = np.arange(0, video_length, chunk_size - 1)
generator = torch.Generator(device="cuda")
for i in range(len(chunk_ids)):
print(f"Processing chunk {i + 1} / {len(chunk_ids)}")
ch_start = chunk_ids[i]
ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
# Attach the first frame for Cross Frame Attention
frame_ids = [0] + list(range(ch_start, ch_end))
# Fix the seed for the temporal consistency
generator.manual_seed(seed)
output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids)
result.append(output.images[1:])
# Concatenate chunks and save
result = np.concatenate(result)
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
SDXL Support
In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline pipeline:
import torch
from diffusers import TextToVideoZeroSDXLPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")
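The SDXL pipeline is then called the same way as the SD v1.5 pipeline above; a minimal sketch:
import imageio

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)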
Text-To-Video with Pose Control
To generate a video from a prompt with additional pose control:
1. Download a demo video
from huggingface_hub import hf_hub_download
filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4"
repo_id = "PAIR/Text2Video-Zero"
video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
2. Read the video containing the extracted pose images
from PIL import Image
import imageio
reader = imageio.get_reader(video_path, "ffmpeg")
frame_count = 8
pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
To extract poses from an actual video, read the ControlNet documentation; one possible sketch is shown below.
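A possible sketch, assuming the controlnet_aux package (any OpenPose extractor that returns PIL images works just as well):
from controlnet_aux import OpenposeDetector

# `frames` is assumed to be a list of PIL images read from your own video
open_pose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
pose_images = [open_pose(frame) for frame in frames]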
3. Run StableDiffusionControlNetPipeline with our custom attention processor
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
model_id = "runwayml/stable-diffusion-v1-5"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# Set the attention processor
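# batch_size=2: with classifier-free guidance, a conditional and an unconditional sequence are processed per frame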
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
# fix latents for all frames
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
prompt = "Darth Vader dancing in a desert"
result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)
SDXL Support
Since our attention processor also works with SDXL, it can be used to generate a video from a prompt using ControlNet models powered by SDXL:
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0'
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
model_id, controlnet=controlnet, torch_dtype=torch.float16
).to('cuda')
# Set the attention processor
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
# fix latents for all frames
latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
prompt = "Darth Vader dancing in a desert"
result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)
Text-To-Video with Edge Control
To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, using a Canny edge ControlNet model instead of the OpenPose one.
Video Instruct-Pix2Pix
To perform text-guided video editing (with InstructPix2Pix):
1. Download a demo video
from huggingface_hub import hf_hub_download