---
license: cc-by-nc-4.0
task_categories:
- text-to-video
- text-to-image
language:
- en
pretty_name: VideoGrain-dataset
source_datasets:
- original
tags:
- video editing
- Multi grained Video Editing
- text-to-video
- Pika
- video generation
- Video Generative Model Evaluation
- Text-to-Video Diffusion Model Development
- Text-to-Video Prompt Engineering
- Efficient Video Generation
---
# VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing (ICLR 2025)
- [GitHub](https://github.com/knightyxp/VideoGrain) (⭐ star our repo)
- [Project Page](https://knightyxp.github.io/VideoGrain_project_page)
- [arXiv](https://arxiv.org/abs/2502.17258)
- [YouTube Video](https://www.youtube.com/watch?v=XEM4Pex7F9E)
- [#1 on Hugging Face Daily Papers](https://huggingface.co/papers/2502.17258)
If you find this dataset helpful, please feel free to leave a star ⭐ and cite our paper:
<p align="center">
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6486df66373f79a52913e017/ZQnogrOMFhy1mcTuxSQ62.mp4"></video>
</p>
# Summary
This is the dataset proposed in our paper [VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing](https://arxiv.org/abs/2502.17258) (ICLR 2025).
VideoGrain is a zero-shot method for class-level, instance-level, and part-level video editing.
- **Multi-grained Video Editing**
  - class-level: editing objects within the same class (previous SOTA methods were limited to this level)
  - instance-level: editing each individual instance into a distinct object
  - part-level: adding new objects or modifying attributes of existing objects at the part level
- **Training-Free**
- Does not require any training/fine-tuning
- **One-Prompt Multi-Region Control & Deep Investigation into Cross-/Self-Attention**
  - modulating cross-attention for multi-region control (visualizations available)
  - modulating self-attention for feature decoupling (clustering visualizations available)
# Directory
```
data/
├── 2_cars
│ ├── 2_cars # original videos frames
│ └── layout_masks # layout masks subfolders (e.g., bg, left, right)
├── 2_cats
│ ├── 2_cats
│ └── layout_masks
├── 2_monkeys
├── badminton
├── boxer-punching
├── car
├── cat_flower
├── man_text_message
├── run_two_man
├── soap-box
├── spin-ball
├── tennis
└── wolf
```
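Each clip folder pairs its RGB frames with per-region layout masks. A minimal sketch for enumerating clips, frames, and masks (the helper name `list_clips` and the use of `pathlib` globbing are ours, assuming the layout shown above):

```python
from pathlib import Path

def list_clips(root="data"):
    """Map each clip name to its frame files and per-region mask files.

    Assumes the layout above: data/<clip>/<clip>/ holds frames and
    data/<clip>/layout_masks/<region>/ holds masks for each region.
    """
    clips = {}
    for clip_dir in sorted(Path(root).iterdir()):
        if not clip_dir.is_dir():
            continue
        # Frames live in a subfolder named after the clip itself.
        frames = sorted((clip_dir / clip_dir.name).glob("*"))
        masks = {}
        mask_root = clip_dir / "layout_masks"
        if mask_root.is_dir():
            for region in sorted(mask_root.iterdir()):
                if region.is_dir():
                    masks[region.name] = sorted(region.glob("*"))
        clips[clip_dir.name] = {"frames": frames, "masks": masks}
    return clips
```

For example, `list_clips("data")["2_cars"]["masks"]` would map region names such as `bg`, `left`, and `right` to their mask frame lists.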
# Download
### Automatic
First install the [datasets](https://huggingface.co/docs/datasets/installation) library:
```
pip install datasets
```
Then the dataset can be downloaded automatically with:
```python
from datasets import load_dataset

dataset = load_dataset("XiangpengYang/VideoGrain-dataset")
```
# License
This dataset is licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
# Citation
```
@article{yang2025videograin,
title={VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing},
author={Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
journal={arXiv preprint arXiv:2502.17258},
year={2025}
}
```
# Contact
If you have any questions, feel free to contact Xiangpeng Yang (knightyxp@gmail.com).