---
license: cc-by-nc-4.0
task_categories:
- text-to-audio
size_categories:
- 100K<n<1M
---

# V2M Dataset: A Large-Scale Video-to-Music Dataset 🎶

**The V2M dataset is introduced in the [VidMuse project](https://vidmuse.github.io/) to advance research in video-to-music generation.**

## ✨ Dataset Overview

The V2M dataset comprises 360K video-music pairs spanning diverse content types, including movie trailers, advertisements, and documentaries. It offers researchers a rich resource for exploring the relationship between video content and music generation.

## 🛠️ Usage Instructions

- Download the dataset:

```bash
git clone https://huggingface.co/datasets/HKUSTAudio/VidMuse-V2M-Dataset
```

- Dataset structure:

```
V2M/
├── V2M.txt
├── V2M-20k.txt
└── V2M-bench.txt
```

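Assuming each `.txt` split file lists one video identifier per line (the exact file format is not documented on this card), a minimal helper for loading a split might look like the following sketch; the function name and the comment-skipping behavior are illustrative, not part of the dataset's official tooling:

```python
from pathlib import Path


def load_video_ids(split_file):
    """Return the non-empty lines of a V2M split file as a list of IDs.

    Blank lines and lines starting with '#' are skipped. This assumes a
    one-identifier-per-line format, which is a guess about the layout.
    """
    ids = []
    for line in Path(split_file).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            ids.append(line)
    return ids


# Hypothetical usage after cloning the repository:
# train_ids = load_video_ids("VidMuse-V2M-Dataset/V2M-20k.txt")
```

If the files turn out to contain additional per-line fields (e.g. timestamps or URLs), the parsing step would need to be adjusted accordingly.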
## 🎯 Citation

If you use the V2M dataset in your research, please consider citing:

```bibtex
@article{tian2024vidmuse,
  title={VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling},
  author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2406.04321},
  year={2024}
}
```