---
license: cc-by-4.0
configs:
- config_name: videos
  data_files: videos/*.tar
- config_name: clips
  data_files: clips/*.tar
- config_name: frames
  data_files: frames/*.tar
tags:
- webdataset
---
# Grounding YouTube Dataset

**What, when, and where? -- Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions** (available on arXiv)
This dataset is packed in the WebDataset format and is provided in three styles:

- Untrimmed videos + annotations spanning the entire video
- Action clips extracted from the videos + annotations per clip
- Action frames extracted from the videos + annotations per frame
## Example usage for clips

The example below streams the clip shards and decodes the raw binary video data and the JSON metadata:
```python
import webdataset as wds
from huggingface_hub import HfFileSystem, get_token, hf_hub_url
import json
import io
import torch
import av
import numpy as np
from torch.utils.data import DataLoader

# Resolve the clip shards on the Hub and stream them through curl.
fs = HfFileSystem()
files = [fs.resolve_path(path) for path in fs.glob("hf://datasets/CVML-TueAI/grounding-YT-dataset/clips/*.tar")]
urls = [hf_hub_url(file.repo_id, file.path_in_repo, repo_type="dataset") for file in files]
urls = f"pipe: curl -s -L -H 'Authorization:Bearer {get_token()}' {'::'.join(urls)}"

def load_video(video_bytes):
    """Decode raw MP4 bytes into a uint8 tensor of shape [T, H, W, C]."""
    container = av.open(io.BytesIO(video_bytes))
    frames = []
    for frame in container.decode(video=0):
        img = frame.to_ndarray(format="rgb24")
        frames.append(img)
    video_tensor = torch.from_numpy(np.stack(frames))
    return video_tensor  # [T, H, W, C]

def load_json(json_bytes):
    """Decode JSON metadata."""
    return json.loads(json_bytes.decode("utf-8"))

dataset = (
    wds.WebDataset(urls)
    .shuffle(100)
    .to_tuple("mp4", "json")
    .map_tuple(load_video, load_json)
)
```
## Evaluation: pointwise accuracy

For pointwise accuracy, a prediction is considered correct if the predicted point lies inside the annotated ground-truth bounding box. To evaluate your predictions, see the evaluation scripts.
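The metric described above can be sketched as follows. This is an illustrative implementation, not the official evaluation script; it assumes points and boxes share the same pixel coordinate system, and counts a `None` prediction (no action predicted) as incorrect:

```python
def pointwise_accuracy(pred_points, gt_boxes):
    """Fraction of predictions whose point falls inside the ground-truth box.

    pred_points: list of (x, y) tuples, or None where no action was predicted
    gt_boxes:    list of (x1, y1, x2, y2) boxes, same length as pred_points
    """
    correct = 0
    for point, (x1, y1, x2, y2) in zip(pred_points, gt_boxes):
        if point is None:
            continue  # no prediction counts as a miss
        x, y = point
        if x1 <= x <= x2 and y1 <= y <= y2:
            correct += 1
    return correct / len(gt_boxes)
```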
## Visualization

The visualization scripts generate frames with the ground-truth box and the predicted point overlaid. Predictions should follow the JSON format given in the `random_preds.json` files.
In the generated visualizations, the red dot marks the predicted point; the prediction is `None` when no action is predicted.
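A minimal sketch of such an overlay (not the repository's visualization script) can be written directly on a `[H, W, 3]` RGB array, matching the frame layout decoded above; the function name and colors are illustrative:

```python
import numpy as np

def draw_annotation(frame, gt_box, pred_point, radius=3):
    """Overlay the ground-truth box (green outline) and the predicted point
    (red dot) on an RGB frame of shape [H, W, 3]; returns a new array."""
    img = frame.copy()
    x1, y1, x2, y2 = gt_box
    green, red = (0, 255, 0), (255, 0, 0)
    # Box outline: two vertical and two horizontal edges.
    img[y1:y2 + 1, [x1, x2]] = green
    img[[y1, y2], x1:x2 + 1] = green
    # Predicted point: a small filled square, or nothing if no prediction.
    if pred_point is not None:
        px, py = pred_point
        img[max(py - radius, 0):py + radius + 1,
            max(px - radius, 0):px + radius + 1] = red
    return img
```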
## Citation Information

If you use GroundingYouTube in your research or applications, please cite it using this BibTeX:
```bibtex
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Brian and Shvetsova, Nina and Rouditchenko, Andrew and Kondermann, Daniel and Thomas, Samuel and Chang, Shih-Fu and Feris, Rogerio and Glass, James and Kuehne, Hilde},
    title     = {What When and Where? Self-Supervised Spatio-Temporal Grounding in Untrimmed Multi-Action Videos from Narrated Instructions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {18419-18429}
}
```



