
Stable Video Diffusion Temporal Controlnet

Overview

Introducing the Stable Video Diffusion Temporal ControlNet! This model pairs a ControlNet-style encoder, conditioned on per-frame depth maps, with the Stable Video Diffusion (SVD) base model. It is designed to enhance your video diffusion projects by providing precise temporal control over the generated frames.

Setup
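
To get started, install the libraries and pull down the checkpoint. The snippet below is a minimal sketch rather than a drop-in recipe: it downloads this repository's ControlNet weights and loads the stock SVD image-to-video pipeline from diffusers. Wiring the temporal ControlNet into SVD requires the custom pipeline and ControlNet classes from the training code credited under Acknowledgements, so the base model id, file paths, and resolution below are assumptions.

pip install -U diffusers transformers accelerate

import torch
from huggingface_hub import snapshot_download
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Download the temporal ControlNet weights from this repository.
controlnet_dir = snapshot_download("CiaraRowles/temporal-controlnet-depth-svd-v1")

# Load the base SVD pipeline that the ControlNet conditions.
# Switch "cuda" to "mps" on Apple devices.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # assumed base model
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Plain SVD image-to-video call; the ControlNet hook-up itself lives in the
# custom inference code, not in this stock pipeline.
image = load_image("conditioning_frame.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)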

Demo

(Demo animation: combined_with_square_image_new_gif)

Notes

  • Focus on Central Object: The system tends to extract motion features primarily from a central object and, occasionally, from the background, so it is best to avoid overly complex motion or obscure objects.
  • Simplicity in Motion: Stick to motions that SVD can already handle well without the ControlNet; this ensures the model is able to apply the motion. A sketch for preparing the per-frame depth conditioning follows this list.
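
Since this checkpoint is the depth variant, the conditioning video needs a depth map for each frame. Below is a minimal sketch of producing those maps with the depth-estimation pipeline from transformers; the estimator model (Intel/dpt-hybrid-midas), the frame paths, and the output layout are assumptions, and the exact conditioning format the ControlNet expects is defined by the training code.

from pathlib import Path
from transformers import pipeline

# Assumed depth estimator; any monocular depth model should work similarly.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")

frame_dir = Path("conditioning_frames")  # assumed folder of extracted video frames
out_dir = Path("depth_frames")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(frame_dir.glob("*.png")):
    # The pipeline returns a dict with a PIL "depth" image and the raw tensor.
    depth = depth_estimator(str(frame_path))["depth"]
    depth.save(out_dir / frame_path.name)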

Acknowledgements

  • Diffusers Team: For the SVD implementation.
  • Pixeli99: For providing a practical SVD training script, SVD_Xtend.