---
configs:
- config_name: test
  data_files: test.csv
- config_name: train_val_test
  data_files: '*.csv'
- config_name: metadata
  data_files: multivsr.tar
license: mit
language:
- en
- fr
- de
- es
- it
- ca
- ru
- ja
- zh
- pl
- pt
- tr
- nl
tags:
- lipreading
- audiovisual
- video
- asr
- avsr
- talkingface
- audio
- speech
---
# Dataset: MultiVSR
We introduce MultiVSR, a large-scale multilingual lip-reading dataset. It comprises 12,000 hours of video footage covering English and 12 non-English languages, with approximately 1.6M video clips drawn from 123K YouTube videos, offering wide diversity in both speakers and languages. Please see the [website](https://www.robots.ox.ac.uk/~vgg/research/multivsr/) for samples.
<p align="center">
<img src="dataset_teaser.gif" alt="MultiVSR Dataset Teaser">
</p>
## Download instructions
Please check the GitHub repo to download, preprocess, and prepare the dataset: https://github.com/Sindhu-Hegde/multivsr/tree/master/dataset.
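The configs declared in this card (`test`, `train_val_test`, `metadata`) can be loaded with the `datasets` library. A minimal sketch — the Hub repo id used below is an assumption, so substitute the actual path of this dataset card:

```python
def load_multivsr(config_name: str = "test"):
    """Load one MultiVSR config: 'test', 'train_val_test', or 'metadata'.

    NOTE: the repo id below is an assumption for illustration; replace it
    with the actual Hugging Face Hub path of this dataset.
    """
    from datasets import load_dataset  # pip install datasets

    return load_dataset("Sindhu-Hegde/multivsr", name=config_name)


if __name__ == "__main__":
    # Downloads data on first call, so this is guarded behind a main block.
    ds = load_multivsr("test")
    print(ds)
```

Note that the `metadata` config points at a tar archive, so for full preprocessing (video download, cropping, alignment) the GitHub pipeline linked above is the intended route.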