niranjangaurav17 committed · Commit 0f818b7 · verified · 1 Parent(s): fe88b8d

Update README.md

Files changed (1): README.md +4 -2
README.md CHANGED
@@ -7,6 +7,8 @@ configs:
   data_files: "clips/*.tar"
 - config_name: frames
   data_files: "frames/*.tar"
+tags:
+- webdataset
 ---
 # Grounding YouTube Dataset #
 What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions
@@ -63,9 +65,9 @@ For pointwise accuracy, a prediction is considered correct if the predicted point
 [Visualization](https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/tree/main/visualization) contains scripts to generate frames with the ground truth box and the predicted point.
 One should follow the prediction json format given in random_preds.json files. Here are a few visualizations generated:
 
-
+## Citation Information
 If you're using GroundingYouTube in your research or applications, please cite using this BibTeX:
-```
+```bibtex
 @InProceedings{Chen_2024_CVPR,
 author = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde},
 title = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions},
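
As context for the YAML change above, here is a minimal loading sketch for the WebDataset configs this commit tags. It is an illustration, not part of the commit: the repo id `CVML-TueAI/grounding-YT-dataset` is taken from the visualization link in the diff, the `frames` config name comes from the YAML, and the `clips` config name is an assumption inferred from its `clips/*.tar` glob.

```python
from datasets import load_dataset

# Minimal sketch: stream the "frames" config declared in the README YAML.
# streaming=True reads the WebDataset tar shards lazily instead of
# downloading and extracting everything up front.
frames = load_dataset(
    "CVML-TueAI/grounding-YT-dataset",  # repo id from the visualization URL
    name="frames",                      # from `config_name: frames` in the diff
    split="train",                      # unsplit data_files default to "train"
    streaming=True,
)

# Peek at one sample; WebDataset-backed configs expose one column per file
# extension found inside the tar shards.
sample = next(iter(frames))
print(sample.keys())
```

The other config (assumed to be named `clips`, matching its `clips/*.tar` glob) would load the same way with `name="clips"`.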