# Dataset: MultiVSR

We introduce MultiVSR, a large-scale multilingual lip-reading dataset comprising ~12,000 hours of video footage and covering English plus 12 non-English languages. The dataset offers wide speaker and language diversity, with approximately 1.6M video clips drawn from 123K YouTube videos. Please check the [website](https://www.robots.ox.ac.uk/~vgg/research/multivsr/) for samples.

<p align="center">
  <img src="dataset_teaser.gif" alt="MultiVSR Dataset Teaser">
</p>

## Download instructions

Download the list of YouTube IDs, the metadata, and the train-val-test CSVs using 🤗 `datasets`:

```python
from datasets import load_dataset

# Log in first (e.g. `huggingface-cli login`) to access this dataset
ds = load_dataset("sindhuhegde/multivsr")
```
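
Once loaded, a quick way to inspect what you got (a minimal sketch; the `"train"` split name below is an assumption, check the printed `ds` for the actual split and column names):

```python
# Print the available splits and their columns, then peek at one row.
# The "train" split name is an assumption; adjust to what `ds` reports.
print(ds)
print(ds["train"][0])
```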

## Download and preprocess the videos using the metadata

Follow the instructions in the [GitHub repo](https://github.com/Sindhu-Hegde/multivsr/tree/master/dataset) to download and preprocess the videos.
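
The repo's scripts handle downloading, face-track extraction, and clip segmentation end to end. For intuition, the raw download step amounts to roughly the following (a minimal sketch, assuming `yt-dlp` is installed; the helper name and output layout here are illustrative, not the repo's actual interface):

```python
import subprocess

def download_video(youtube_id: str, out_dir: str = "raw_videos") -> None:
    """Fetch one raw YouTube video by ID via yt-dlp (hypothetical helper)."""
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    subprocess.run(
        ["yt-dlp", "-o", f"{out_dir}/%(id)s.%(ext)s", url],
        check=True,
    )
```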

Once preprocessing is complete, you should have the video clips (`.mp4`) and transcripts (`.txt`) in the following structure:

```
data_root (path of the pre-processed videos)
├── list of video-ids
│   ├── *.mp4 (extracted face track video for each sample)
│   ├── *.txt (full transcript for each clip)
```
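
With that layout in place, pairing each clip with its transcript is straightforward (a minimal sketch, assuming the structure above and that each clip and its transcript share a filename stem; `data_root` is whatever path you chose during preprocessing):

```python
from pathlib import Path

data_root = Path("data_root")  # path of the pre-processed videos

# Walk each video-id directory and pair every face-track clip
# with the transcript that shares its stem.
for video_dir in sorted(p for p in data_root.iterdir() if p.is_dir()):
    for clip in sorted(video_dir.glob("*.mp4")):
        transcript = clip.with_suffix(".txt")
        if transcript.exists():
            text = transcript.read_text().strip()
            # ... hand (clip, text) to your dataloader
```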