... # predict the noise residual
... with torch.no_grad():
...     noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample

... # perform guidance
... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

... # compute the previous noisy sample x_t -> x_t-1
... latents = scheduler.step(noise_pred, t, latents).prev_sample

Decode the image

The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample:

Copied
# scale and decode the image latents with the vae
latents = 1 / 0.18215 * latents
with torch.no_grad():
    image = vae.decode(latents).sample

Lastly, convert the image to a PIL.Image to see your generated image!

Copied
>>> image = (image / 2 + 0.5).clamp(0, 1).squeeze()
>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy()
>>> image = Image.fromarray(image)
>>> image

Next steps

From basic to complex pipelines, you've seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler's timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the schedule...
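The guidance arithmetic in the loop above can be exercised in isolation. Below is a minimal, self-contained sketch of the same pattern — duplicate the latents for classifier-free guidance, predict noise for the unconditional and text-conditioned halves, blend with the guidance scale, and step — using plain NumPy stand-ins. `toy_unet` and `toy_scheduler_step` are hypothetical placeholders for illustration, not diffusers APIs:

```python
import numpy as np

def toy_unet(latent_model_input, t):
    # stand-in for the UNet: pretend the unconditional and text-conditioned
    # halves of the batch predict different amounts of noise
    half = latent_model_input.shape[0] // 2
    return np.concatenate([latent_model_input[:half] * 0.1,
                           latent_model_input[half:] * 0.2])

def toy_scheduler_step(noise_pred, t, latents, num_steps):
    # stand-in for scheduler.step(...).prev_sample: remove a fraction
    # of the predicted noise at each timestep
    return latents - noise_pred / num_steps

guidance_scale = 7.5
num_steps = 4
latents = np.ones((1, 4))  # a tiny stand-in latent tensor

for t in range(num_steps):
    # duplicate the latents for classifier-free guidance
    latent_model_input = np.concatenate([latents] * 2)
    noise_pred = toy_unet(latent_model_input, t)
    # perform guidance
    noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
    # compute the previous noisy sample x_t -> x_t-1
    latents = toy_scheduler_step(noise_pred, t, latents, num_steps)

print(latents.shape)  # (1, 4): the latent keeps its shape through every step
```

The real pipelines follow exactly this skeleton; only the stand-ins are replaced by the actual UNet and scheduler.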
🧪 This pipeline is for research purposes only.

Text-to-video

ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang.

The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-i...

Copied
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to("cuda")

prompt = "Spiderman is surfing"
video_frames = pipe(prompt).frames[0]
video_path = export_to_video(video_frames)
video_path

Diffusers supports different optimization techniques to improve the latency and memory footprint of a pipeline. Since videos are often more memory-heavy than images, we can enable CPU offloading and VAE slicing to keep the memory footprint at bay.

Let's generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:

Copied
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.enable_model_cpu_offload()

# memory optimization
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames[0]
video_path = export_to_video(video_frames)
video_path

It takes just 7 GB of GPU memory to generate the 64 video frames using PyTorch 2.0, "fp16" precision, and the techniques mentioned above. We can also easily use a different scheduler, with the same method we'd use for Stable Diffusion:

Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames[0]
video_path = export_to_video(video_frames)
video_path

Here are some sample outputs:

An astronaut riding a horse.
Darth Vader surfing in waves.

cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL

The Zeroscope models are watermark-free and have been trained on specific sizes such as 576x320 and 1024x576.
One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL.

Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
from PIL import Image

pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=24).frames[0]
video_path = export_to_video(video_frames)
video_path

Now the video can be upscaled:

Copied
pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# memory optimization
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)
pipe.enable_vae_slicing()

video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames]

video_frames = pipe(prompt, video=video, strength=0.6).frames[0]
video_path = export_to_video(video_frames)
video_path

Here are some sample outputs:

Darth Vader surfing in waves.

Tips

Video generation is memory-intensive, and one way to reduce your memory usage is to set enable_forward_chunking on the pipeline's UNet so you don't run the entire feedforward layer at once. Breaking it up into chunks in a loop is more memory-efficient. Check out the Text or image-to-video guide for more details...
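To build intuition for what forward chunking buys you, here is a rough NumPy sketch — not the diffusers implementation, and `feed_forward` is a hypothetical stand-in for a real feed-forward layer. The input is split into chunks along one dimension, each chunk is processed on its own, and the outputs are concatenated, so peak activation memory is bounded by one chunk rather than the whole tensor:

```python
import numpy as np

def feed_forward(x):
    # hypothetical stand-in for a memory-heavy feed-forward layer
    # (elementwise, so chunking along any dimension gives identical results)
    return x * 2.0 + 1.0

def chunked_forward(x, chunk_size=1, dim=1):
    # run feed_forward chunk by chunk along `dim` and stitch the results
    # back together; only one chunk's activations are live at a time
    num_chunks = x.shape[dim] // chunk_size
    chunks = np.split(x, num_chunks, axis=dim)
    return np.concatenate([feed_forward(chunk) for chunk in chunks], axis=dim)

x = np.random.rand(2, 8, 4)  # (batch, frames, features)
chunked = chunked_forward(x, chunk_size=1, dim=1)
print(np.allclose(chunked, feed_forward(x)))  # True: same output, lower peak memory
```

Calling enable_forward_chunking(chunk_size=1, dim=1) on the UNet applies this idea to its transformer feed-forward layers, looping over the frame dimension one frame at a time.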
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet3DConditionModel) — A UNet3DConditionModel to denoise the encoded video latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

Pipeline for text-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:
load_textual_inversion() for loading textual inversion embeddings
load_lora_weights() for loading LoRA weights
save_lora_weights() for saving LoRA weights

__call__
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated video.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated video.
num_frames (int, optional, defaults to 16) — The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second amounts to 2 seconds of video.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to higher quality videos at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text