# AVSpeech Metadata Files

This repository contains the metadata CSV files for the [AVSpeech dataset](https://research.google.com/avspeech/) by Google Research.

## Dataset Description

AVSpeech is a large-scale audio-visual speech dataset of video segments drawn from roughly 290,000 YouTube videos, originally collected for audio-visual speech separation research ("Looking to Listen") and since used for related tasks such as speech enhancement and lip reading.

## Files

- `avspeech_train.csv` (128 MB) - Training set with 2,621,845 video segments from 270k videos
- `avspeech_test.csv` (9 MB) - Test set with video segments from a separate set of 22k videos

## CSV Format

Each row contains:

```
YouTube ID, start_time, end_time, x_coordinate, y_coordinate
```

Where:

- **YouTube ID**: The YouTube video identifier
- **start_time**: Start time of the segment in seconds
- **end_time**: End time of the segment in seconds
- **x_coordinate**: X coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = left)
- **y_coordinate**: Y coordinate of the speaker's face center (normalized 0.0-1.0, 0.0 = top)

The train and test sets have disjoint speakers.
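For concreteness, a row in this format can be parsed with Python's standard `csv` module (the files have no header row). The sample values below are made up, and the 1280x720 frame size is an assumption for illustration — actual resolution varies per video:

```python
import csv
import io

# One made-up row in the headerless AVSpeech CSV format:
# YouTube ID, start_time, end_time, x_coordinate, y_coordinate
sample = "dQw4w9WgXcQ,12.345,18.901,0.5,0.25\n"

segments = []
for youtube_id, start, end, x, y in csv.reader(io.StringIO(sample)):
    segments.append({
        "id": youtube_id,
        "duration": float(end) - float(start),
        # Normalized face-center coordinates -> pixel coordinates,
        # assuming a 1280x720 frame for illustration.
        "face_px": (float(x) * 1280, float(y) * 720),
    })
```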

## Usage

### With Hugging Face Hub

```python
from huggingface_hub import hf_hub_download

# Download train CSV
train_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_train.csv",
    repo_type="dataset"
)

# Download test CSV
test_csv = hf_hub_download(
    repo_id="bbrothers/avspeech-metadata",
    filename="avspeech_test.csv",
    repo_type="dataset"
)
```
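Once downloaded, the CSVs can be loaded with pandas. Since the files have no header row, column names must be supplied; the names below are our own convention (not official), and the inline sample rows stand in for the real file:

```python
import io

import pandas as pd

# Our own column-name convention for the headerless CSVs.
columns = ["youtube_id", "start_time", "end_time", "x", "y"]

# In practice, pass the path returned by hf_hub_download; a tiny
# in-memory sample with made-up values stands in for it here.
sample = io.StringIO("vid_a,0.0,5.2,0.5,0.5\nvid_b,3.1,9.9,0.25,0.75\n")
df = pd.read_csv(sample, names=columns)

# Derive each segment's length in seconds.
df["duration"] = df["end_time"] - df["start_time"]
```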

### With our dataset loader

```python
from ml.data.av_speech.dataset import AVSpeechDataset

# Initialize dataset (will auto-download CSVs if needed)
dataset = AVSpeechDataset()

# Download videos
dataset.download(
    splits=['train', 'test'],
    max_videos=100,  # Or None for all videos
    num_workers=4
)
```

## Citation

If you use this dataset, please cite the original AVSpeech paper:

```bibtex
@article{ephrat2018looking,
  title={Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation},
  author={Ephrat, Ariel and Mosseri, Inbar and Lang, Oran and Dekel, Tali and Wilson, Kevin and Hassidim, Avinatan and Freeman, William T and Rubinstein, Michael},
  journal={ACM Transactions on Graphics (TOG)},
  volume={37},
  number={4},
  year={2018}
}
```

## Links

- [AVSpeech Official Page](https://research.google.com/avspeech/)
- [Original Paper](https://arxiv.org/abs/1804.03619)
- [Our GitHub Repository](https://github.com/Pierre-LouisBJT/interconnect)

## Notes

- This repository contains only the metadata CSV files, not the actual video content
- Videos must be downloaded from YouTube using the provided YouTube IDs
- Some videos may no longer be available (deleted, private, or geo-blocked)
- Estimated total dataset size: ~4,700 hours of video
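Since only metadata is distributed here, each segment must be fetched from YouTube yourself. One way is the third-party `yt-dlp` tool, which can clip a time range at download time via `--download-sections`. A minimal sketch, assuming `yt-dlp` is installed; the video ID and times are illustrative, not taken from the CSVs:

```python
import subprocess

# Illustrative values for one CSV row (not real AVSpeech data).
youtube_id, start, end = "dQw4w9WgXcQ", 12.345, 18.901

# Build a yt-dlp invocation that downloads only the segment's
# time range and names the file after the video ID.
cmd = [
    "yt-dlp",
    f"https://www.youtube.com/watch?v={youtube_id}",
    "--download-sections", f"*{start}-{end}",
    "-o", "clips/%(id)s.%(ext)s",
]
# subprocess.run(cmd, check=True)  # uncomment to actually download
```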