|
|
--- |
|
|
license: cc-by-4.0 |
|
|
configs: |
|
|
- config_name: videos |
|
|
data_files: "videos/*.tar" |
|
|
- config_name: clips |
|
|
data_files: "clips/*.tar" |
|
|
- config_name: frames |
|
|
data_files: "frames/*.tar" |
|
|
tags: |
|
|
- webdataset |
|
|
--- |
|
|
# GroundingYouTube Dataset
|
|
What, when, and where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions (CVPR 2024)
|
|
Paper: [arXiv:2303.16990](https://arxiv.org/abs/2303.16990)
|
|
|
|
|
This dataset is packed in [WebDataset](https://huggingface.co/docs/hub/en/datasets-webdataset#webdataset) format. |
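In WebDataset format, each shard is a plain tar archive, and files that share a basename (e.g. `0001.mp4` plus `0001.json`) are grouped into one sample. A minimal, self-contained sketch of that layout, using hypothetical file names and toy payloads rather than the dataset's real contents:

```python
import io
import json
import tarfile

# Build a toy in-memory shard: one sample = one video file + one JSON
# annotation file sharing the basename "0001".
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("0001.mp4", b"\x00fake-video-bytes"),
        ("0001.json", json.dumps({"step": "crack egg"}).encode()),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Reading the shard back shows the paired files of the sample.
with tarfile.open(fileobj=io.BytesIO(buf.getvalue())) as tar:
    print([m.name for m in tar.getmembers()])  # ['0001.mp4', '0001.json']
```

WebDataset loaders rely on exactly this basename convention to reassemble the `.mp4` and `.json` entries into `(video, annotation)` samples.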
|
|
|
|
|
## The dataset is provided in three variants:
|
|
* Untrimmed videos, with annotations over the entire video

* Action clips extracted from the videos, with annotations per clip

* Action frames extracted from the videos, with per-frame annotations
|
|
|
|
|
|
|
|
## Example usage for clips

### Decoding raw video bytes and JSON metadata
|
|
```python
import io
import json

import av
import numpy as np
import torch
import webdataset as wds
from huggingface_hub import HfFileSystem, get_token, hf_hub_url
from torch.utils.data import DataLoader

fs = HfFileSystem()
# Collect every clip shard in the repo and turn it into a download URL.
files = [fs.resolve_path(path) for path in fs.glob("hf://datasets/CVML-TueAI/grounding-YT-dataset/clips/*.tar")]
urls = [hf_hub_url(file.repo_id, file.path_in_repo, repo_type="dataset") for file in files]
# Stream the shards through curl; WebDataset treats "::" as a shard separator.
urls = f"pipe: curl -s -L -H 'Authorization:Bearer {get_token()}' {'::'.join(urls)}"


def load_video(video_bytes):
    """Decode raw MP4 bytes into a uint8 video tensor of shape [T, H, W, C]."""
    container = av.open(io.BytesIO(video_bytes))
    frames = [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
    return torch.from_numpy(np.stack(frames))


def load_json(json_bytes):
    """Decode JSON metadata."""
    return json.loads(json_bytes.decode("utf-8"))


dataset = (
    wds.WebDataset(urls)
    .shuffle(100)
    .to_tuple("mp4", "json")
    .map_tuple(load_video, load_json)
)
```
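The pipeline above yields `(video_tensor, annotation_dict)` pairs that plug directly into a PyTorch `DataLoader`. Because clips differ in length, `batch_size=None` passes samples through without collation (use `wds.batched` inside the pipeline if fixed-size batches are needed). The sketch below substitutes a tiny synthetic dataset for the real shards so it runs offline; the tensor shape and metadata key are made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, IterableDataset


class DummyClips(IterableDataset):
    """Offline stand-in for the WebDataset pipeline above (hypothetical shapes/keys)."""

    def __iter__(self):
        for i in range(4):
            # 8 frames of 32x32 RGB, mimicking load_video's [T, H, W, C] output.
            yield torch.zeros(8, 32, 32, 3, dtype=torch.uint8), {"clip_id": i}


# batch_size=None disables collation, so variable-length clips pass through as-is.
loader = DataLoader(DummyClips(), batch_size=None, num_workers=0)
for video, meta in loader:
    print(video.shape, meta["clip_id"])
```

Swapping `DummyClips()` for the `dataset` built above gives the same loop over the real clips.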
|
|
|
|
|
## Evaluation: pointwise accuracy

A prediction is considered correct if the predicted point lies inside the annotated ground-truth bounding box. To evaluate your own predictions, see the [evaluation](https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/tree/main/evaluation) scripts.
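The criterion can be sketched in a few lines. This is only an illustration of the metric, not the repo's evaluation script; the argument shapes (`(x, y)` points, `(x1, y1, x2, y2)` boxes) are assumptions:

```python
def point_in_box(point, box):
    """True if point (x, y) lies inside box (x1, y1, x2, y2), borders inclusive."""
    if point is None:  # no action predicted for this frame counts as a miss
        return False
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2


def pointwise_accuracy(predictions, gt_boxes):
    """Fraction of frames whose predicted point falls inside the ground-truth box."""
    hits = sum(point_in_box(p, b) for p, b in zip(predictions, gt_boxes))
    return hits / len(gt_boxes)
```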
|
|
|
|
|
## Visualization

The [visualization](https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/tree/main/visualization) folder contains scripts that render frames with the ground-truth box and the predicted point. Predictions should follow the JSON format of the `random_preds.json` files. A few generated examples:
|
|
|
|
|
<table width="100%"> |
|
|
<tr> |
|
|
<td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/-1okAudsnAc_5769.jpg" style="width:100%; height:auto;"/></td> |
|
|
<td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/0tcT84VeD2c_2315.jpg" style="width:100%; height:auto;"/></td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/1Q1_jE4IIls_2036.jpg" style="width:100%; height:auto;"/></td> |
|
|
<td><img src="https://huggingface.co/datasets/CVML-TueAI/grounding-YT-dataset/resolve/main/sample_images/_7RI2fa78aE_1033.jpg" style="width:100%; height:auto;"/></td> |
|
|
</tr> |
|
|
</table> |
|
|
|
|
|
The red dot marks the predicted point. The prediction is `None` when no action is predicted.
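An overlay in the same spirit (green box outline, red dot) can be sketched with NumPy. The helper and its coordinate conventions are illustrative only, not the repo's actual visualization script:

```python
import numpy as np


def draw_prediction(frame, box, point):
    """Overlay a ground-truth box outline (green) and a predicted point (red).

    frame: uint8 array of shape [H, W, 3]; box = (x1, y1, x2, y2);
    point = (x, y) or None when no action is predicted.
    """
    img = frame.copy()
    x1, y1, x2, y2 = box
    green = np.array([0, 255, 0], dtype=np.uint8)
    # Draw the four edges of the box outline.
    img[y1, x1:x2 + 1] = green
    img[y2, x1:x2 + 1] = green
    img[y1:y2 + 1, x1] = green
    img[y1:y2 + 1, x2] = green
    if point is not None:
        x, y = point
        # Mark the predicted point with a small red square, clamped to the image.
        img[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = [255, 0, 0]
    return img
```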
|
|
|
|
|
## Citation Information |
|
|
If you use GroundingYouTube in your research or applications, please cite it with the following BibTeX entry:
|
|
```bibtex |
|
|
@InProceedings{Chen_2024_CVPR, |
|
|
author = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde}, |
|
|
title = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions}, |
|
|
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, |
|
|
month = {June}, |
|
|
year = {2024}, |
|
|
    pages     = {18419--18429}
|
|
} |
|
|
``` |