---
task_categories:
- image-to-video
---

# EPiC_Data: Training Data for Efficient Video Camera Control Learning

This repository contains the training data for **EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance**. EPiC is an efficient and precise camera control learning framework that automatically constructs high-quality anchor videos without expensive camera trajectory annotations. This dataset provides the approximately 5K training videos and related components used to train the EPiC framework for image-to-video (I2V) and video-to-video (V2V) camera control tasks.

**Paper:** [EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance](https://huggingface.co/papers/2505.21876)
**Project Page:** [https://zunwang1.github.io/Epic](https://zunwang1.github.io/Epic)
**Code:** [https://github.com/wz0919/EPiC](https://github.com/wz0919/EPiC)

## Paper Abstract

Recent approaches to 3D camera control in video diffusion models (VDMs) often create anchor videos to guide diffusion models as a structured prior by rendering from estimated point clouds following annotated camera trajectories. However, errors inherent in point cloud estimation often lead to inaccurate anchor videos. Moreover, the requirement for extensive camera trajectory annotations further increases resource demands. To address these limitations, we introduce EPiC, an efficient and precise camera control learning framework that automatically constructs high-quality anchor videos without expensive camera trajectory annotations. Concretely, we create highly precise anchor videos for training by masking source videos based on first-frame visibility. This approach ensures high alignment, eliminates the need for camera trajectory annotations, and thus can be readily applied to any in-the-wild video to generate image-to-video (I2V) training pairs. Furthermore, we introduce Anchor-ControlNet, a lightweight conditioning module that integrates anchor video guidance in visible regions into pretrained VDMs, with less than 1% of the backbone model's parameters. By combining the proposed anchor video data and ControlNet module, EPiC achieves efficient training with substantially fewer parameters, training steps, and less data, without requiring the modifications to the diffusion model backbone typically needed to mitigate rendering misalignments. Although trained on masking-based anchor videos, our method generalizes robustly to anchor videos made with point clouds during inference, enabling precise 3D-informed camera control. EPiC achieves SOTA performance on RealEstate10K and MiraData for the I2V camera control task, demonstrating precise and robust camera control both quantitatively and qualitatively. Notably, EPiC also exhibits strong zero-shot generalization to video-to-video scenarios.

## Sample Usage

To download the EPiC training dataset, which consists of approximately 5,000 training videos, navigate to the `data/train` directory of your EPiC code repository clone and run the following `wget` and `unzip` commands:

```bash
# Assuming you are in the root directory of the EPiC code repository, navigate to data/train
cd data/train

# Download and extract the main training videos
wget https://huggingface.co/datasets/ZunWang/EPiC_Data/resolve/main/train.zip
unzip train.zip
```
| | |
| | **(Optional)** You can also download the pre-extracted VAE latents, which can save several hours of preprocessing time: |
| | |
| | ```bash |
| | wget https://huggingface.co/datasets/ZunWang/EPiC_Data/resolve/main/train_joint_latents.zip |
| | unzip train_joint_latents.zip |
| | ``` |
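
If you prefer a scripted download, the same archives can also be fetched through the `huggingface_hub` Python library. The sketch below is one way to do this, assuming `huggingface_hub` is installed; the repo ID and archive filenames come from this card, while the destination directory is an assumption:

```python
# Sketch: fetch a zip archive from the EPiC_Data dataset repo and extract it.
# Assumes `pip install huggingface_hub`; destination path is an assumption.
import zipfile
from pathlib import Path

from huggingface_hub import hf_hub_download

REPO_ID = "ZunWang/EPiC_Data"  # dataset repo ID from this card

def fetch_and_extract(filename, dest="data/train"):
    """Download `filename` from the dataset repo and unzip it into `dest`."""
    zip_path = hf_hub_download(repo_id=REPO_ID, filename=filename,
                               repo_type="dataset")
    Path(dest).mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)

# Usage (downloads several GB):
# fetch_and_extract("train.zip")
# fetch_and_extract("train_joint_latents.zip")  # optional pre-extracted VAE latents
```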

After downloading, unzipping, and running the preprocessing steps described in the [EPiC GitHub repository](https://github.com/wz0919/EPiC), your `data/train` folder should have a structure similar to this:

```
data/
└── train/
    ├── caption_embs/
    ├── captions/
    ├── joint_latents/
    ├── masked_videos/
    ├── masks/
    └── videos/
```

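As a quick sanity check, a small script can confirm that the expected subfolders are present. This is a minimal sketch; the directory names are taken from the layout above, and the root path is an assumption:

```python
# Sketch: verify the expected data/train layout after download and preprocessing.
from pathlib import Path

# Expected subfolders under data/train, per the layout described in this card.
EXPECTED = ["caption_embs", "captions", "joint_latents",
            "masked_videos", "masks", "videos"]

def missing_dirs(root="data/train"):
    """Return the expected subfolders that are absent under `root`."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).is_dir()]

# Usage:
# missing = missing_dirs()
# print("Layout OK" if not missing else f"Missing: {missing}")
```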
For detailed instructions on preprocessing this data (e.g., extracting caption embeddings or VAE latents) and training the EPiC model, please refer to the [EPiC GitHub repository](https://github.com/wz0919/EPiC).