Update README.md

- config_name: frames
  data_files: "frames/*.tar"
---

# Grounding YouTube Dataset

What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions

[arXiv](https://arxiv.org/abs/2303.16990)

## The dataset is available in three styles:

* Untrimmed videos + annotations within the entire video
* Action clips extracted from the videos + annotations in each clip
* Action frames extracted from the videos + annotations for each frame
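The `*.tar` shards appear to pair each sample's media with its annotation under a shared key (e.g. `clip0.mp4` next to `clip0.json`). As a sketch under that assumption only (the function and file names here are hypothetical, not part of the dataset tooling), the grouping can be illustrated with the standard library:

```python
import io
import json
import tarfile

def iter_samples(tar_file):
    """Yield {extension: bytes} dicts, grouping tar members that share a
    basename. Assumes each sample's members are stored consecutively."""
    current_key, sample = None, {}
    with tarfile.open(fileobj=tar_file) as tar:
        for member in tar:
            key, _, ext = member.name.partition(".")
            if key != current_key and sample:
                yield sample
                sample = {}
            current_key = key
            sample[ext] = tar.extractfile(member).read()
        if sample:
            yield sample

# Build a tiny in-memory shard to demonstrate the mp4/json pairing.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("clip0.mp4", b"\x00fakevideo"),
                          ("clip0.json", json.dumps({"action": "cut"}).encode())]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

samples = list(iter_samples(buf))
print(len(samples), json.loads(samples[0]["json"]))
```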

## Example usage for clips:

### Also decoding raw binary video data and JSON

```python
dataset = (
    # ...
    .shuffle(100)
    .to_tuple("mp4", "json")
    .map_tuple(load_video, load_json)
)
```
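The `load_video` and `load_json` decoders used above are not shown in this excerpt. A minimal sketch of what they might look like (the bodies are assumptions, not the authors' code):

```python
import io
import json

def load_json(data: bytes) -> dict:
    # The ".json" entry arrives as raw bytes from the tar shard.
    return json.loads(data.decode("utf-8"))

def load_video(data: bytes) -> io.BytesIO:
    # Wrap the raw ".mp4" bytes in a file-like object so a video
    # decoder (e.g. PyAV or decord) can read it without touching disk.
    return io.BytesIO(data)

print(load_json(b'{"start": 1.5, "end": 3.0}'))
```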

## Evaluation - Pointwise accuracy:

For pointwise accuracy, a prediction is considered correct if the predicted point lies inside the annotated ground-truth bounding box. To evaluate your predictions, see [evaluation](evaluation/).
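The criterion above amounts to a point-in-box test. A minimal illustration (assuming `(x1, y1, x2, y2)` box coordinates; the helper name is hypothetical and this is not the official evaluation script):

```python
import numpy as np

def pointwise_accuracy(points, boxes):
    """Fraction of predicted points that fall inside their
    ground-truth boxes.

    points: (N, 2) array of predicted (x, y) coordinates
    boxes:  (N, 4) array of ground-truth (x1, y1, x2, y2) boxes
    """
    points = np.asarray(points, dtype=float)
    boxes = np.asarray(boxes, dtype=float)
    inside = (
        (points[:, 0] >= boxes[:, 0]) & (points[:, 0] <= boxes[:, 2]) &
        (points[:, 1] >= boxes[:, 1]) & (points[:, 1] <= boxes[:, 3])
    )
    return inside.mean()

# Two of the three predicted points land inside their boxes.
acc = pointwise_accuracy(
    [(5, 5), (50, 50), (12, 8)],
    [(0, 0, 10, 10), (0, 0, 10, 10), (10, 5, 20, 15)],
)
print(acc)
```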

If you use GroundingYouTube in your research or applications, please cite using this BibTeX:
```
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde},
    title     = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {18419-18429}
}
```