Commit 0fd366e (parent: 2ef6aab) · Update dataset
.DS_Store CHANGED

Binary files a/.DS_Store and b/.DS_Store differ
README.md CHANGED

@@ -129,6 +129,13 @@ merge_dir (path of the merged videos)
 - `target_word_boundary`: Word boundary of the target word. Format: [target-word, start_frame, end_frame]
 - `word_boundaries`: Word boundaries for all the words in the video. Format: [[word-1, start_frame, end_frame], [word-2, start_frame, end_frame], ..., [word-n, start_frame, end_frame]]
 - `stress_label`: Binary label indicating whether the target-word has been stressed in the corresponding speech
+- `lighting`: Indicates the lighting condition of the video. Possible values:
+  - `dim`: Low light, difficult to see details.
+  - `medium`: Moderate light, clear but not very bright.
+  - `bright`: Well-lit, high visibility.
+- `speaker_pose`: Indicates the speaker's pose. Possible values:
+  - `frontal`: Speaker facing the camera.
+  - `non-frontal`: Speaker not directly facing the camera.
 
 ### Data Instances
 
@@ -145,7 +152,9 @@ Each instance in the dataset contains the above fields. An example instance is s
     "target_word": "beautiful",
     "target_word_boundary": "['beautiful', 21, 37]",
     "word_boundaries": "[['app', 0, 11], ['is', 12, 13], ['beautiful', 21, 37], ['it', 45, 47], ['just', 48, 53], ['is', 60, 63], ['streamlined', 65, 81], ['it', 82, 83]]",
-    "stress_label": 1
+    "stress_label": 1,
+    "lighting": "medium",
+    "speaker_pose": "frontal"
 }
 ```
 
@@ -155,20 +164,20 @@ See the [AVS-Spot dataset viewer](https://huggingface.co/datasets/sindhuhegde/av
 
 ## 📦 Dataset Curation
 
-AVS-Spot is a dataset of video clips in which a specific word is distinctly gestured. We begin with the full English test set of the [AVSpeech dataset](https://looking-to-listen.github.io/avspeech/) and extract word-aligned transcripts using the WhisperX ASR model. Short phrases containing 4 to 12 words are then selected, ensuring that the clips exhibit distinct gesture movements. We then manually review and annotate clips with a `target-word`, where the word is visibly gestured. This process results in 500 curated clips, each containing a well-defined gestured word. The manual annotation ensures minimal label noise, enabling a reliable evaluation of the gesture spotting task. Additionally, we provide binary `stress/emphasis` labels for target words, capturing key gesture-related cues.
+AVS-Spot is a dataset of video clips in which a specific word is distinctly gestured. We begin with the full English test set of the [AVSpeech dataset](https://looking-to-listen.github.io/avspeech/) and extract word-aligned transcripts using the WhisperX ASR model. Short phrases containing 4 to 12 words are then selected, ensuring that the clips exhibit distinct gesture movements. We then manually review and annotate clips with a `target-word`, where the word is visibly gestured. This process results in 500 curated clips, each containing a well-defined gestured word. The manual annotation ensures minimal label noise, enabling a reliable evaluation of the gesture spotting task. Additionally, we provide binary `stress/emphasis` labels for target words, capturing key gesture-related cues. We also provide `lighting` and `speaker_pose` labels, which indicate the video's lighting conditions and the speaker's pose, respectively.
 Summarized dataset information is given below:
 
 - Source: [AVSpeech](https://looking-to-listen.github.io/avspeech/)
 - Language: English
 - Modalities: Video, audio, text
-- Labels: Target-word, word-boundaries, speech-stress binary label
+- Labels: Target-word, word-boundaries, speech-stress binary label, lighting label, speaker-pose label
 - Task: Gestured word spotting
 
 ### Statistics
 
 | Dataset | Split | # Hours | # Speakers | Avg. clip duration (s) | # Videos |
 |:--------:|:-----:|:-------:|:-----------:|:-----------------:|:--------:|
-| AVS-Spot | test | 0.38 |
+| AVS-Spot | test | 0.38 | 384 | 2.76 | 500 |
 
 Below, we show some additional statistics for the dataset: (i) the duration of videos in terms of number of frames, (ii) a wordcloud of the most gestured words in the dataset, illustrating the diversity of the words present, and (iii) the distribution of target-word occurrences in the video.
 
@@ -191,4 +200,4 @@ If you find this dataset helpful, please consider starring ⭐ the repository an
 
 ## 🙏 Acknowledgements
 
-The authors would like to thank Piyush Bagad, Ragav Sachdeva,
+The authors would like to thank Piyush Bagad, Ragav Sachdeva, Jaesung Huh, and Paul Engstler for their valuable discussions. The authors are further grateful to Alyosha Efros, Jitendra Malik, and Justine Cassell for their insightful inputs and suggestions. They also extend their thanks to David Pinto for setting up the data annotation tool and to Ashish Thandavan for his support with the infrastructure. This research is funded by EPSRC Programme Grant VisualAI EP/T028572/1, an SNSF Postdoc.Mobility Fellowship P500PT_225450, and a Royal Society Research Professorship RSRP\R\241003.
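The boundary fields in the example instance above are stored as stringified Python lists, so they need to be parsed before use. A minimal sketch of reading one instance (field values are copied from the README's example; the 25 fps frame rate used for the frame-to-seconds conversion is an assumption, as the diff does not state the video frame rate):

```python
import ast

# Example instance from the README (boundary fields are stringified lists)
instance = {
    "target_word": "beautiful",
    "target_word_boundary": "['beautiful', 21, 37]",
    "word_boundaries": "[['app', 0, 11], ['is', 12, 13], ['beautiful', 21, 37], ['it', 45, 47], ['just', 48, 53], ['is', 60, 63], ['streamlined', 65, 81], ['it', 82, 83]]",
    "stress_label": 1,
    "lighting": "medium",
    "speaker_pose": "frontal",
}

# ast.literal_eval safely parses the stringified lists (no arbitrary eval)
target = ast.literal_eval(instance["target_word_boundary"])  # [word, start_frame, end_frame]
boundaries = ast.literal_eval(instance["word_boundaries"])   # list of [word, start_frame, end_frame]

word, start_frame, end_frame = target
FPS = 25  # assumed frame rate, not specified in the README
start_sec, end_sec = start_frame / FPS, end_frame / FPS
print(f"{word}: frames {start_frame}-{end_frame} ({start_sec:.2f}s-{end_sec:.2f}s)")
```

The same pattern applies to the new `lighting` and `speaker_pose` fields, which are plain strings and need no parsing.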
test.csv
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
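The numbers in the statistics row of the README diff above are internally consistent; a quick check, using only the values from the table (500 videos, 2.76 s average clip duration, 0.38 hours total):

```python
# Values taken from the AVS-Spot statistics table
n_videos = 500
avg_clip_sec = 2.76  # average clip duration in seconds

# Total duration in hours: 500 * 2.76 s = 1380 s ≈ 0.38 h
total_hours = n_videos * avg_clip_sec / 3600
print(round(total_hours, 2))  # prints 0.38
```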