Add comprehensive dataset card for EPiC_Data (#1)
opened by nielsr (HF Staff)

README.md (added)
---
task_categories:
- image-to-video
---

# EPiC_Data: Training Data for Efficient Video Camera Control Learning

This repository contains the training data for **EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance**, an efficient and precise camera control learning framework that automatically constructs high-quality anchor videos without expensive camera trajectory annotations. The dataset provides the approximately 5K training videos and related components used to train EPiC for image-to-video (I2V) and video-to-video (V2V) camera control tasks.

**Paper:** [EPiC: Efficient Video Camera Control Learning with Precise Anchor-Video Guidance](https://huggingface.co/papers/2505.21876)
**Project Page:** [https://zunwang1.github.io/Epic](https://zunwang1.github.io/Epic)
**Code:** [https://github.com/wz0919/EPiC](https://github.com/wz0919/EPiC)

## Paper Abstract

Recent approaches to 3D camera control in video diffusion models (VDMs) often create anchor videos to guide diffusion models as a structured prior by rendering from estimated point clouds following annotated camera trajectories. However, errors inherent in point cloud estimation often lead to inaccurate anchor videos. Moreover, the requirement for extensive camera trajectory annotations further increases resource demands. To address these limitations, we introduce EPiC, an efficient and precise camera control learning framework that automatically constructs high-quality anchor videos without expensive camera trajectory annotations. Concretely, we create highly precise anchor videos for training by masking source videos based on first-frame visibility. This approach ensures high alignment, eliminates the need for camera trajectory annotations, and thus can be readily applied to any in-the-wild video to generate image-to-video (I2V) training pairs. Furthermore, we introduce Anchor-ControlNet, a lightweight conditioning module that integrates anchor video guidance in visible regions into pretrained VDMs, with less than 1% of the backbone model's parameters. By combining the proposed anchor video data and ControlNet module, EPiC achieves efficient training with substantially fewer parameters, fewer training steps, and less data, without requiring the modifications to the diffusion model backbone typically needed to mitigate rendering misalignments. Although trained on masking-based anchor videos, our method generalizes robustly to anchor videos made with point clouds during inference, enabling precise 3D-informed camera control. EPiC achieves SOTA performance on RealEstate10K and MiraData for the I2V camera control task, demonstrating precise and robust camera control both quantitatively and qualitatively. Notably, EPiC also exhibits strong zero-shot generalization to video-to-video scenarios.

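The first-frame-visibility masking described in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the function name `make_anchor_video`, the array shapes, and the zero fill value are all assumptions.

```python
import numpy as np

def make_anchor_video(video: np.ndarray, visible: np.ndarray) -> np.ndarray:
    """Mask a source video so only content visible in the first frame remains.

    video:   (T, H, W, C) source frames.
    visible: (T, H, W) boolean mask, True where a pixel's content is
             visible in the first frame (assumed to come from tracking
             occlusions across the clip).
    """
    # Zero out every region that is not visible from the first frame,
    # leaving a precisely aligned anchor video for I2V training.
    return np.where(visible[..., None], video, 0)
```

Pairing each source video with such a mask yields an (anchor video, target video) training pair without any camera trajectory annotation.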
## Sample Usage

The training set consists of approximately 5,000 videos. To download it, navigate to the `data/train` directory within your clone of the EPiC code repository and run:

```bash
# From the root of the EPiC code repository
cd data/train

# Download and extract the main training videos
wget https://huggingface.co/datasets/ZunWang/EPiC_Data/resolve/main/train.zip
unzip train.zip
```
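
If you prefer the Hugging Face Hub client over `wget` (it can resume interrupted downloads), the same file can be fetched with `hf_hub_download`. A minimal sketch; the `local_dir` default and the helper name are assumptions matching the layout above:

```python
from huggingface_hub import hf_hub_download

def download_train_zip(local_dir: str = "data/train") -> str:
    """Download train.zip from the EPiC_Data dataset repo into local_dir
    and return the local file path (assumed helper, not part of EPiC)."""
    return hf_hub_download(
        repo_id="ZunWang/EPiC_Data",
        repo_type="dataset",
        filename="train.zip",
        local_dir=local_dir,
    )

if __name__ == "__main__":
    print(download_train_zip())
```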

**(Optional)** You can also download the pre-extracted VAE latents, which can save several hours of preprocessing time:

```bash
wget https://huggingface.co/datasets/ZunWang/EPiC_Data/resolve/main/train_joint_latents.zip
unzip train_joint_latents.zip
```

After downloading, unzipping, and running the preprocessing steps described in the [EPiC GitHub repository](https://github.com/wz0919/EPiC), your `data/train` folder should have a structure similar to this:

```
data/
└── train/
    ├── caption_embs/
    ├── captions/
    ├── joint_latents/
    ├── masked_videos/
    ├── masks/
    └── videos/
```

For detailed instructions on preprocessing this data (e.g., extracting caption embeddings or VAE latents) and training the EPiC model, please refer to the [EPiC GitHub repository](https://github.com/wz0919/EPiC).
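
A quick way to confirm that preprocessing produced the expected layout is to check for the six subfolders listed above. A minimal sketch; `check_train_layout` is a hypothetical helper, not part of the EPiC codebase, and only the folder names come from this card:

```python
from pathlib import Path

# Subfolder names mirror the directory tree shown above.
EXPECTED_SUBDIRS = [
    "caption_embs", "captions", "joint_latents",
    "masked_videos", "masks", "videos",
]

def check_train_layout(train_dir: str) -> list[str]:
    """Return the names of expected subfolders missing under train_dir."""
    root = Path(train_dir)
    return [d for d in EXPECTED_SUBDIRS if not (root / d).is_dir()]
```

An empty return value means all six folders are present; any name it returns points at a download or preprocessing step that has not been run yet.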