---
license: cc-by-nc-sa-4.0
pipeline_tag: text-to-video
library_name: diffusers
---

HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives

📄 Paper - 🌐 Project Page - 💻 Code

Yihao Meng1,2, Hao Ouyang2, Yue Yu1,2, Qiuyu Wang2, Wen Wang2,3, Ka Leong Cheng2,
Hanlin Wang1,2, Yixuan Li2,4, Cheng Chen2,5, Yanhong Zeng2, Yujun Shen2, Huamin Qu1

1HKUST, 2Ant Group, 3ZJU, 4CUHK, 5NTU

TLDR

  • What it is: A text-to-video model that generates full scenes, not just isolated clips.
  • Key Feature: It maintains consistency of characters, objects, and style across all shots in a scene.
  • How it works: You provide shot-by-shot text prompts, giving you directorial control over the final video.

We strongly recommend visiting our demo page.

If you enjoyed the videos we created, please consider giving us a star 🌟.

Abstract

State-of-the-art text-to-video models excel at generating isolated clips but fall short of creating the coherent, multi-shot narratives that are the essence of storytelling. We bridge this "narrative gap" with HoloCine, a model that generates entire scenes holistically to ensure global consistency from the first shot to the last. Our architecture achieves precise directorial control through a Window Cross-Attention mechanism that localizes text prompts to specific shots, while a Sparse Inter-Shot Self-Attention pattern (dense within shots but sparse between them) ensures the efficiency required for minute-scale generation. Beyond setting a new state-of-the-art in narrative coherence, HoloCine develops remarkable emergent abilities: a persistent memory for characters and scenes, and an intuitive grasp of cinematic techniques. Our work marks a pivotal shift from clip synthesis towards automated filmmaking, making end-to-end cinematic creation a tangible future.
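To give an intuition for the "dense within shots, sparse between shots" pattern, the sketch below builds a toy attention mask over tokens grouped by shot. It is purely illustrative: the idea of keeping a few per-shot anchor tokens globally visible is our assumption for demonstration, not necessarily HoloCine's exact sparsity design.

import numpy as np

def inter_shot_mask(shot_lengths, anchors_per_shot=1):
    """Boolean attention mask: dense inside each shot; across shots, only the
    first `anchors_per_shot` tokens of every shot remain visible.
    Illustrative sketch only, not the paper's actual implementation."""
    total = sum(shot_lengths)
    mask = np.zeros((total, total), dtype=bool)
    starts = np.cumsum([0] + shot_lengths[:-1])
    # Dense self-attention within every shot.
    for s, n in zip(starts, shot_lengths):
        mask[s:s + n, s:s + n] = True
    # Sparse inter-shot attention: only anchor tokens are visible to all shots.
    anchor_idx = np.concatenate([np.arange(s, s + min(anchors_per_shot, n))
                                 for s, n in zip(starts, shot_lengths)])
    mask[:, anchor_idx] = True
    return mask

# Example: a scene with three shots of 4, 3, and 5 tokens.
print(inter_shot_mask([4, 3, 5]).astype(int))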

Installation

Create a conda environment and install requirements:

git clone https://github.com/yihao-meng/HoloCine.git
cd HoloCine
conda create -n HoloCine python=3.10
conda activate HoloCine
pip install -e .

We use FlashAttention-3 to implement the sparse inter-shot attention and highly recommend it for its speed. Brief installation instructions for FlashAttention-3 are given below.

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
cd hopper
python setup.py install

If you encounter environment problems when installing FlashAttention-3, refer to the official GitHub page: https://github.com/Dao-AILab/flash-attention.

If you cannot install FlashAttention-3, you can use FlashAttention-2 as an alternative; our code automatically detects the installed FlashAttention version. FlashAttention-2 is slower than FlashAttention-3 but also produces correct results.

If you want to install FlashAttention-2, you can use the following command:

pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
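For reference, a minimal sketch of how such automatic version detection can work; this is illustrative and may differ from the repository's actual logic, but the import paths follow the upstream FlashAttention packages (flash_attn_interface for the FA-3 hopper build, flash_attn for FA-2):

def get_flash_attention():
    """Return (flash_attn_func, version). Illustrative sketch, not HoloCine's actual code."""
    try:
        from flash_attn_interface import flash_attn_func  # FlashAttention-3 (hopper build)
        return flash_attn_func, 3
    except ImportError:
        pass
    try:
        from flash_attn import flash_attn_func  # FlashAttention-2
        return flash_attn_func, 2
    except ImportError:
        return None, 0

attn_func, version = get_flash_attention()
print(f"FlashAttention-{version} detected" if version else "FlashAttention not installed")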

Checkpoint

Step 1: Download Wan 2.2 VAE and T5

If you have already downloaded Wan 2.2 14B T2V, skip this step.

If not, you need the T5 text encoder and the VAE from the original Wan 2.2 repository: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B

Based on the repository's file structure, you only need to download models_t5_umt5-xxl-enc-bf16.pth and Wan2.1_VAE.pth.

You do not need to download the google, high_noise_model, or low_noise_model folders, nor any other files.

Recommended Download (CLI)

We recommend using huggingface-cli to download only the necessary files. Make sure you have huggingface_hub installed (pip install huggingface_hub).

This command will download only the required T5 and VAE models into the correct directory:

huggingface-cli download Wan-AI/Wan2.2-T2V-A14B \
  --local-dir checkpoints/Wan2.2-T2V-A14B \
  --include "models_t5_*.pth" "Wan2.1_VAE.pth"
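If you prefer the Python API over the CLI, the same selective download can be done with huggingface_hub's snapshot_download:

from huggingface_hub import snapshot_download

# Download only the T5 text encoder and the VAE into the expected folder.
snapshot_download(
    repo_id="Wan-AI/Wan2.2-T2V-A14B",
    local_dir="checkpoints/Wan2.2-T2V-A14B",
    allow_patterns=["models_t5_*.pth", "Wan2.1_VAE.pth"],
)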

Manual Download

Alternatively, go to the "Files" tab on the Hugging Face repo and manually download the following two files:

  • models_t5_umt5-xxl-enc-bf16.pth
  • Wan2.1_VAE.pth

Place both files inside a new folder named checkpoints/Wan2.2-T2V-A14B/.

Step 2: Download HoloCine Model (HoloCine_dit)

Download our fine-tuned high-noise and low-noise DiT checkpoints from the following link:

[➡️ Download HoloCine_dit Model Checkpoints Here]

This download contains four fine-tuned model files: two for the full-attention version (full_high_noise.safetensors, full_low_noise.safetensors) and two for the sparse inter-shot attention version (sparse_high_noise.safetensors, sparse_low_noise.safetensors). The sparse version is still uploading.

You can choose one version to download, or try both if you want.

The full-attention version performs better, so we suggest starting with it. The sparse inter-shot attention version can be slightly less stable (though it still works well in most cases), but it is faster than the full-attention version.

For the full-attention version: create a new folder named checkpoints/HoloCine_dit/full/ and place both the high- and low-noise files inside.

For the sparse-attention version: create a new folder named checkpoints/HoloCine_dit/sparse/ and place both the high- and low-noise files inside.

Step 3: Final Directory Structure

If you downloaded the full model, your checkpoints directory should look like this:

checkpoints/
├── Wan2.2-T2V-A14B/
│   ├── models_t5_umt5-xxl-enc-bf16.pth
│   └── Wan2.1_VAE.pth
└── HoloCine_dit/
    └── full/
        ├── full_high_noise.safetensors
        └── full_low_noise.safetensors

(If you downloaded the sparse model, replace full with sparse.)
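Optionally, you can sanity-check the layout with a small script. This is a minimal sketch assuming the directory structure above; change "full" to "sparse" if you downloaded the sparse checkpoints.

from pathlib import Path

# Files expected by the full-attention setup described above.
expected = [
    "checkpoints/Wan2.2-T2V-A14B/models_t5_umt5-xxl-enc-bf16.pth",
    "checkpoints/Wan2.2-T2V-A14B/Wan2.1_VAE.pth",
    "checkpoints/HoloCine_dit/full/full_high_noise.safetensors",
    "checkpoints/HoloCine_dit/full/full_low_noise.safetensors",
]
missing = [p for p in expected if not Path(p).is_file()]
print("All checkpoints found." if not missing else f"Missing: {missing}")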

Sample Usage (Inference)

We release two model versions: one uses full attention to model the multi-shot sequence (our default), and the other uses sparse inter-shot attention.

To use the full attention version:

python HoloCine_inference_full_attention.py

To use the sparse inter-shot attention version:

python HoloCine_inference_sparse_attention.py

Prompt Format - Structured Input Example

This is the easiest way to create new multi-shot prompts. You provide the components as separate arguments inside the script, and our helper function will format them correctly.

Example (inside HoloCine_inference_full_attention.py):

run_inference(
    pipe=pipe,
    negative_prompt=scene_negative_prompt,
    output_path="test_structured_output.mp4",

    # Choice 1 inputs
    global_caption="The scene is set in a lavish, 1920s Art Deco ballroom during a masquerade party. [character1] is a mysterious woman with a sleek bob, wearing a sequined silver dress and an ornate feather mask. [character2] is a dapper gentleman in a black tuxedo, his face half-hidden by a simple black domino mask. The environment is filled with champagne fountains, a live jazz band, and dancing couples in extravagant costumes. This scene contains 5 shots.",
    shot_captions=[
        "Medium shot of [character1] standing by a pillar, observing the crowd, a champagne flute in her hand.",
        "Close-up of [character2] watching her from across the room, a look of intrigue on his visible features.",
        "Medium shot as [character2] navigates the crowd and approaches [character1], offering a polite bow. ",
        "Close-up on [character1]'s eyes through her mask, as they crinkle in a subtle, amused smile.",
        "A stylish medium two-shot of them standing together, the swirling party out of focus behind them, as they begin to converse."

    ],
    num_frames=241
)
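Note that the global_caption in this example explicitly states the shot count ("This scene contains 5 shots."), which matches the five entries in shot_captions.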

Citation

If you find this work useful, please consider citing our paper:

@article{meng2025holocine,
  title={HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives},
  author={Meng, Yihao and Ouyang, Hao and Yu, Yue and Wang, Qiuyu and Wang, Wen and Cheng, Ka Leong and Wang, Hanlin and Li, Yixuan and Chen, Cheng and Zeng, Yanhong and Shen, Yujun and Qu, Huamin},
  journal={arXiv preprint arXiv:2510.20822},
  year={2025}
}

License

This project is licensed under the CC BY-NC-SA 4.0 (Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License).

The code is provided for academic research purposes only.

For any questions, please contact ymengas@cse.ust.hk.