---
license: cc-by-4.0
task_categories:
- video-to-audio
---
# V2M Dataset: A Large-Scale Video-to-Music Dataset 🎶

**The V2M dataset is proposed in the [VidMuse project](https://vidmuse.github.io/), aimed at advancing research in video-to-music generation. See the paper [VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling](https://huggingface.co/papers/2406.04321) for more details.**
## ✨ Dataset Overview

The V2M dataset comprises 360K video-music pairs, covering a variety of content types including movie trailers, advertisements, and documentaries. It provides researchers with a rich resource for exploring the relationship between video content and music generation.
## 🛠️ Usage Instructions

- Download the dataset:

```bash
git clone https://huggingface.co/datasets/Zeyue7/V2M
```
- Dataset structure:

```
V2M/
├── V2M.txt
├── V2M-20k.txt
└── V2M-bench.txt
```
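The split files above can be parsed with a few lines of Python. A minimal sketch, assuming each `.txt` file lists one video identifier per line (the exact file format is not documented here, so `load_id_list` and that assumption are illustrative only):

```python
from pathlib import Path


def load_id_list(path):
    """Read a V2M split file, returning one entry per non-empty line.

    Assumes a plain-text layout with one video identifier per line;
    adjust the parsing if the actual files use a different format.
    """
    text = Path(path).read_text(encoding="utf-8")
    return [line.strip() for line in text.splitlines() if line.strip()]


# Example usage (paths follow the cloned repository layout shown above):
# full_ids = load_id_list("V2M/V2M.txt")
# subset_ids = load_id_list("V2M/V2M-20k.txt")
# bench_ids = load_id_list("V2M/V2M-bench.txt")
```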
## 🎯 Citation

If you use the V2M dataset in your research, please consider citing:

```
@article{tian2024vidmuse,
  title={{VidMuse}: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling},
  author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2406.04321},
  year={2024}
}
```