Add comprehensive dataset card for TACO
#1, opened by nielsr (HF Staff)

README.md (ADDED)
---
task_categories:
- video-to-video
license: cc-by-nc-4.0
tags:
- video-completion
- amodal-completion
- diffusion-models
- synthetic-data
- robotics
- autonomous-driving
---

# TACO: Taming Diffusion for in-the-wild Video Amodal Completion

This repository contains the large-scale synthetic dataset used for training and evaluating **TACO**, a conditional diffusion model for Video Amodal Completion (VAC).

[Paper](https://huggingface.co/papers/2503.12049) | [Project Page](https://jason-aplp.github.io/TACO/) | [Code](https://github.com/JasonAplp/TACO)

<p align="center">
<img src="https://github.com/jason-aplp/TACO/blob/main/assets/teaser.gif?raw=true" width=100%>
</p>

## Abstract

Humans can infer complete shapes and appearances of objects from limited visual cues, relying on extensive prior knowledge of the physical world. However, completing partially observable objects while ensuring consistency across video frames remains challenging for existing models, especially for unstructured, in-the-wild videos. This paper tackles the task of Video Amodal Completion (VAC), which aims to generate the complete object consistently throughout the video, given a visual prompt specifying the object of interest. Leveraging the rich, consistent manifolds learned by pre-trained video diffusion models, we propose a conditional diffusion model, TACO, that repurposes these manifolds for VAC. To enable effective and robust generalization to challenging in-the-wild scenarios, we curate a large-scale synthetic dataset with multiple difficulty levels by systematically imposing occlusions onto un-occluded videos. Building on this, we devise a progressive fine-tuning paradigm that starts with simpler recovery tasks and gradually advances to more complex ones. We demonstrate TACO's versatility on a wide range of in-the-wild videos from the Internet, as well as on diverse, unseen datasets commonly used in autonomous driving, robotic manipulation, and scene understanding. Moreover, we show that TACO can be effectively applied to various downstream tasks like object reconstruction and pose estimation, highlighting its potential to facilitate physical world understanding and reasoning. Our project page is available at https://jason-aplp.github.io/TACO/.

## Dataset Structure and Contents

This Hugging Face repository hosts the datasets used for benchmarking and training TACO. These include:
- **Benchmarks:** the `OvO Dataset` and `Kubric Dataset`, used for evaluation.
- **Training data:** the `OvO_Easy`, `OvO_Hard`, and `OvO_Drive` datasets, along with their corresponding path files (e.g., `Easy_train.json`, `Easy_val.json`).

The training datasets typically follow this structure:
```
<Dataset_Name>/
    MVImgNet/
        0/
        1/
        ...
    SA-V/
        sav_000/
        sav_001/
        ...
<Dataset_Name>_train.json
<Dataset_Name>_val.json
```
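As a quick sanity check after downloading, you can verify that every sequence listed in a path file actually exists on disk. This is a minimal sketch that assumes the `*_train.json` / `*_val.json` path files are flat JSON lists of relative sequence directories (e.g. `"MVImgNet/0"`); the actual schema may differ, so adapt it to the files you downloaded.

```python
import json
from pathlib import Path

def missing_sequences(dset_root, path_file):
    """Return the listed sequence paths that do not exist under dset_root.

    Assumes the path file is a flat JSON list of relative directories,
    e.g. ["MVImgNet/0", "SA-V/sav_000"]; adjust if the schema differs.
    """
    root = Path(dset_root)
    entries = json.loads(Path(path_file).read_text())
    return [e for e in entries if not (root / e).is_dir()]

# Hypothetical usage after download:
# print(missing_sequences("OvO_Easy", "Easy_train.json"))
```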
To download the datasets locally, use Git LFS:
```bash
git lfs install
git clone https://huggingface.co/datasets/JasonAplp/TACO.git
```
After downloading, unzip all archives so the files match the expected structure above.
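The unzip step can also be scripted. The following is a small, unofficial helper that extracts every `.zip` archive found under the cloned repository into the directory containing it:

```python
import zipfile
from pathlib import Path

def unzip_all(root):
    """Extract every .zip found under root into its parent directory."""
    for archive in Path(root).rglob("*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(archive.parent)
        print(f"extracted {archive}")

# unzip_all("TACO")  # run after `git clone ...`
```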
## Sample Usage

For detailed instructions on installation, single-example inference, dataset inference, and training using this dataset, please refer to the [official GitHub repository](https://github.com/JasonAplp/TACO).

Here are some common usage patterns:

### Installation

```bash
conda create -n taco python=3.10
conda activate taco
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install git+https://github.com/OpenAI/CLIP.git
pip install git+https://github.com/Stability-AI/datapipelines.git
pip install -r requirements.txt
```
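After installation, a quick way to confirm the core dependencies resolve is to check that their modules can be found. The module names below are assumptions about how these packages import (e.g. `clip` for the OpenAI CLIP repository):

```python
import importlib.util

def check_modules(mods=("torch", "torchvision", "clip")):
    """Report which of the expected modules can be imported."""
    return {m: importlib.util.find_spec(m) is not None for m in mods}

for mod, ok in check_modules().items():
    print(f"{mod}: {'ok' if ok else 'missing'}")
```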
### Dataset Inference (e.g., Kubric)

Ensure you have downloaded the OvO Dataset and the Kubric Dataset. Update the dataset path (`data/params/dset_root`) in `configs/inference_vac_kubric.yaml` and `configs/inference_vac_OvO.yaml` before running:
```bash
bash infer_kubric.sh
```
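If you run inference on several machines, the `dset_root` edit can be scripted instead of done by hand. This sketch assumes `dset_root` appears as a plain `dset_root: <path>` line inside the YAML configs, as the key quoted above suggests; a YAML library would be more robust:

```python
import re
from pathlib import Path

def set_dset_root(config_path, new_root):
    """Rewrite the `dset_root:` entry of a YAML config in place."""
    cfg = Path(config_path)
    text = re.sub(
        r"(^\s*dset_root:\s*).*$",
        lambda m: m.group(1) + new_root,  # lambda avoids backslash escaping in paths
        cfg.read_text(),
        flags=re.M,
    )
    cfg.write_text(text)

# Hypothetical usage, pointing both configs at a local dataset root:
# for cfg in ("configs/inference_vac_kubric.yaml", "configs/inference_vac_OvO.yaml"):
#     set_dset_root(cfg, "/data/TACO/OvO_Dataset")
```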
### Training (e.g., OvO_Easy)

First, download the required Stable Video Diffusion checkpoints (e.g., the 14-frame SVD checkpoint from Hugging Face) and place them under the `pretrained` folder. Then, download the training datasets (e.g., `OvO_Easy`) and their corresponding path files. Update `data.params.dset_root`, `data.params.train_path`, and `data.params.val_path` in `train.sh` before running:
```bash
bash train.sh
```
Note that the training script is configured for an 8-GPU system; for single-GPU debugging, use `bash debug.sh`.

## Citation

If you find this dataset or the TACO model useful, please cite the paper:

```bibtex
@article{lu2025taco,
  title={TACO: Taming Diffusion for in-the-wild Video Amodal Completion},
  author={Lu, Ruijie and Chen, Yixin and Liu, Yu and Tang, Jiaxiang and Ni, Junfeng and Wan, Diwen and Zeng, Gang and Huang, Siyuan},
  journal={arXiv preprint arXiv:2503.12049},
  year={2025}
}
```