---
dataset_info: null
tags:
- video
- text
license: cc-by-4.0
language:
- en
pretty_name: Implicit-VidSRL
---
# Implicit-VidSRL Dataset
- Paper: https://arxiv.org/abs/2505.21068
- Project: https://anilbatra2185.github.io/p/ividsrl/
- Curated by: Anil Batra, Laura Sevilla-Lara, Marcus Rohrbach, Frank Keller
- Language(s) (NLP): English
- License: CC-BY-4.0
## Dataset Summary
Implicit-VidSRL is a benchmark for understanding procedural steps in instructional videos. The dataset uses semantic role labeling (SRL) to model the semantics of narrated instructional video as simple predicate-argument structures of the form {verb, what, where/with}. It contains both explicit arguments and implicit arguments that must be inferred from contextual information in multimodal cooking procedures.
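For illustration, a hypothetical step such as "Mix the flour with water in a bowl" could be represented roughly as the structure below. The field names follow the summary above; the values and the exact key layout are invented for clarity, not taken from the dataset:

```python
# Hypothetical sketch of a {verb, what, where/with} annotation for one step.
# "implicit" marks arguments that are not mentioned in the sentence itself
# but can be inferred from earlier steps in the recipe.
step_srl = {
    "verb": "mix",
    "what": ["flour", "water"],   # explicit ingredient entities
    "where/with": ["bowl"],       # location/instrument argument
    "implicit": [],               # e.g. an ingredient carried over from a prior step
}
```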
## Dataset Details
### Dataset Sources
- YouCook2: http://youcook2.eecs.umich.edu/
- Tasty: https://cvml.comp.nus.edu.sg/tasty/
### Dataset Fields
Each record in the dataset contains the following fields:
- `video_id` (str): Video id from the source dataset.
- `dataset` (str): Name of the source dataset.
- `title` (str): Title of the recipe.
- `duration` (float): Duration of the video in seconds.
- `timestamps` (List[List[float]]): Start and end timestamps for each recipe step in the video.
- `sentences` (List[str]): The recipe steps as natural-language English sentences.
- `srl` (List[Dict]): The new annotation for each recipe step, with verb/what/where entries containing ingredient entities as well as the implicit entities inferred for each step; see the loading sketch below.
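A minimal loading sketch with the Hugging Face `datasets` library. The Hub repo id and split name are assumptions for illustration; substitute the actual identifiers under which Implicit-VidSRL is hosted:

```python
from datasets import load_dataset

# Repo id and split are placeholders, not the confirmed Hub location.
ds = load_dataset("anilbatra2185/Implicit-VidSRL", split="train")

record = ds[0]
print(record["video_id"], record["dataset"], record["title"])

# Each recipe step aligns a sentence, a [start, end] timestamp, and an SRL entry.
for sentence, (start, end), srl in zip(
    record["sentences"], record["timestamps"], record["srl"]
):
    print(f"[{start:.1f}-{end:.1f}s] {sentence} -> {srl}")
```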
## Citation
@inproceedings{batra2025implicit,
  title={Predicting Implicit Arguments in Procedural Video Instructions},
  author={Anil Batra and Laura Sevilla-Lara and Marcus Rohrbach and Frank Keller},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
  year={2025}
}