---
license: cc-by-4.0
configs:
  - config_name: videos
    data_files: "videos/*.tar"
  - config_name: clips
    data_files: "clips/*.tar"
  - config_name: frames
    data_files: "frames/*.tar"
tags:
- webdataset
---
# GroundingYouTube Dataset
*What, When, and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions* ([arXiv](https://arxiv.org/abs/2303.16990))

This dataset is packed in [WebDataset](https://huggingface.co/docs/hub/en/datasets-webdataset#webdataset) format.

## The dataset is provided in three styles:
* Untrimmed videos, with annotations spanning the entire video
* Action clips extracted from the videos, with annotations per clip
* Action frames extracted from the videos, with an annotation per frame
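Each style corresponds to one of the shard patterns declared in the YAML header of this card. As a small illustrative helper (the `shard_glob` function below is a sketch, not part of the dataset tooling), the right glob for a style can be selected like this:

```python
# Shard patterns for each style, as declared in the YAML configs of this card.
STYLE_GLOBS = {
    "videos": "videos/*.tar",
    "clips": "clips/*.tar",
    "frames": "frames/*.tar",
}

def shard_glob(style: str) -> str:
    """Return the hf:// glob for one of the three dataset styles."""
    if style not in STYLE_GLOBS:
        raise ValueError(f"unknown style {style!r}; choose one of {sorted(STYLE_GLOBS)}")
    return f"hf://datasets/CVML-TueAI/grounding-YT-dataset/{STYLE_GLOBS[style]}"
```

The returned glob can be passed directly to `HfFileSystem.glob`, as in the clips example below.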


## Example usage for clips:
### Decoding the raw binary video data and JSON annotations
```python
import io
import json

import av
import numpy as np
import torch
import webdataset as wds
from huggingface_hub import HfFileSystem, get_token, hf_hub_url

# Resolve all clip shards on the Hub and stream them through curl.
fs = HfFileSystem()
files = [fs.resolve_path(path) for path in fs.glob("hf://datasets/CVML-TueAI/grounding-YT-dataset/clips/*.tar")]
urls = [hf_hub_url(file.repo_id, file.path_in_repo, repo_type="dataset") for file in files]
urls = f"pipe: curl -s -L -H 'Authorization:Bearer {get_token()}' {'::'.join(urls)}"

def load_video(video_bytes):
    """Decode raw MP4 bytes into a uint8 tensor of shape [T, H, W, C]."""
    container = av.open(io.BytesIO(video_bytes))
    frames = [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
    container.close()
    return torch.from_numpy(np.stack(frames))

def load_json(json_bytes):
    """Decode the JSON annotation that accompanies each clip."""
    return json.loads(json_bytes.decode("utf-8"))

dataset = (
    wds.WebDataset(urls)
    .shuffle(100)
    .to_tuple("mp4", "json")
    .map_tuple(load_video, load_json)
)
```
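The resulting dataset is an iterable of `(video, metadata)` pairs, so batching is left to the user; since clips have variable length `T`, they cannot be stacked directly. One way to batch them with `torch.utils.data.DataLoader` is a padding collate along the following lines (`collate_clips` is a sketch under these assumptions, not part of the dataset):

```python
import torch

def collate_clips(batch):
    """Pad [T, H, W, C] clips to the longest T in the batch.

    Returns padded videos [B, T_max, H, W, C], a boolean mask [B, T_max]
    marking real (non-padded) frames, and the list of metadata dicts.
    Assumes all clips share the same spatial size and channel count.
    """
    videos, metas = zip(*batch)
    t_max = max(v.shape[0] for v in videos)
    padded = torch.zeros((len(videos), t_max) + tuple(videos[0].shape[1:]),
                         dtype=videos[0].dtype)
    mask = torch.zeros(len(videos), t_max, dtype=torch.bool)
    for i, v in enumerate(videos):
        padded[i, : v.shape[0]] = v
        mask[i, : v.shape[0]] = True
    return padded, mask, list(metas)
```

It would then be plugged in as, e.g., `DataLoader(dataset, batch_size=4, collate_fn=collate_clips)`.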

## Evaluation - Pointwise accuracy:
For pointwise accuracy, a prediction is considered correct if the predicted point lies inside the annotated ground-truth bounding box. To evaluate your predictions, see the [evaluation](https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/tree/main/evaluation) scripts.
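In code, the metric amounts to a point-in-box test. A minimal sketch (assuming boxes as `[x_min, y_min, x_max, y_max]`, points as `(x, y)`, and counting `None` predictions as incorrect; the exact annotation layout is defined by the evaluation scripts):

```python
def point_in_box(point, box):
    """Return True if (x, y) lies inside [x_min, y_min, x_max, y_max]."""
    x, y = point
    x_min, y_min, x_max, y_max = box
    return x_min <= x <= x_max and y_min <= y <= y_max

def pointwise_accuracy(preds, gt_boxes):
    """Fraction of predicted points falling inside their ground-truth box.

    Predictions of None (no action predicted) count as incorrect.
    """
    hits = sum(1 for p, b in zip(preds, gt_boxes)
               if p is not None and point_in_box(p, b))
    return hits / len(preds)
```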

## Visualization:
[Visualization](https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/tree/main/visualization) contains scripts that render frames with the ground-truth box and the predicted point.
Predictions should follow the JSON format shown in the `random_preds.json` files. Here are a few visualizations generated with these scripts:

<table width="100%">
  <tr>
    <td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/-1okAudsnAc_5769.jpg" style="width:100%; height:auto;"/></td>
    <td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/0tcT84VeD2c_2315.jpg" style="width:100%; height:auto;"/></td>
  </tr>
  <tr>
    <td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/1Q1_jE4IIls_2036.jpg" style="width:100%; height:auto;"/></td>
    <td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/_7RI2fa78aE_1033.jpg" style="width:100%; height:auto;"/></td>
  </tr>
</table>

The red dot marks the predicted point; the prediction is `None` when no action is predicted.

## Citation Information
If you're using GroundingYouTube in your research or applications, please cite using this BibTeX:
```bibtex
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde},
    title     = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {18419-18429}
}
```