---
dataset_info: null
tags:
  - video
  - text
license: cc-by-4.0
language:
  - en
pretty_name: Implicit-VidSRL
---

Implicit-VidSRL Dataset

A dataset of implicit semantic role labels for instructional cooking videos.

Dataset Summary

Implicit-VidSRL is a benchmark for understanding procedural steps in instructional videos. The dataset uses semantic role labeling (SRL) to model the semantics of each narrated step as a simple predicate-argument structure of the form {verb, what, where/with}. It annotates both explicit arguments and implicit arguments, i.e. arguments that are not mentioned in the step itself and must be inferred from contextual information in the multimodal cooking procedure.
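To make the {verb, what, where/with} structure concrete, here is a minimal sketch of an explicit frame and a following implicit frame. The exact key names and step texts are illustrative assumptions, not taken verbatim from the released annotation files.

```python
# Illustrative sketch of predicate-argument frames; keys and values
# are assumptions for exposition, not the dataset's exact schema.

# An explicit step: all arguments appear in the narration.
step = "Transfer the onions to the pan"
frame = {
    "verb": "transfer",
    "what": ["onions"],
    "where": ["pan"],
}

# A later step like "Saute for 5 minutes" omits its arguments;
# the implicit "what" (onions) and "where" (pan) must be inferred
# from the preceding context.
implicit_frame = {
    "verb": "saute",
    "what": ["onions"],   # implicit: recovered from the previous step
    "where": ["pan"],     # implicit: recovered from the previous step
}
```

The benchmark's task is exactly this inference: recovering the implicit arguments of a step from the surrounding multimodal context.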

Dataset Details

Dataset Sources

Dataset Fields

Each record in the dataset contains the following fields:

  • video_id (str): Video ID from the source dataset.
  • dataset (str): The name of the source dataset.
  • title (str): The title of the recipe.
  • duration (str): Duration of video in seconds.
  • timestamps (List[List[float]]): The list of timestamps corresponding to each recipe step in the video. Each entry contains a start and an end time.
  • sentences (List[str]): The list of steps in the recipe. Each recipe step is natural text in English.
  • srl (List[Dict]): The new annotation for each recipe step. Each list item corresponds to a recipe step, with key entries of verb/what/where containing ingredient entities. In addition, it holds the implicit entities for each step.
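The fields above can be sketched as a single record. All values here are invented for illustration (hypothetical video ID, source name, times, and steps); only the field names and types follow the schema documented above.

```python
# A minimal record matching the documented fields; every value is a
# made-up placeholder, not real data from the released dataset.
record = {
    "video_id": "abc123",            # hypothetical video ID
    "dataset": "SourceCookingSet",   # hypothetical source-dataset name
    "title": "Tomato Soup",
    "duration": "245.0",             # seconds, stored as a string per the schema
    "timestamps": [[0.0, 30.5], [30.5, 92.0]],
    "sentences": ["Chop the tomatoes", "Add them to the pot"],
    "srl": [
        {"verb": "chop", "what": ["tomatoes"]},
        {"verb": "add", "what": ["tomatoes"], "where": ["pot"]},
    ],
}

# Basic schema checks: one timestamp pair and one SRL frame per step,
# and each timestamp must have start <= end.
assert len(record["timestamps"]) == len(record["sentences"]) == len(record["srl"])
assert all(start <= end for start, end in record["timestamps"])
```

Checks like these are a reasonable first validation pass when iterating over the dataset, since the per-step lists (timestamps, sentences, srl) are parallel.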

Citation


@inproceedings{batra2025implicit,
    title={Predicting Implicit Arguments in Procedural Video Instructions},
    author={Batra, Anil and Sevilla-Lara, Laura and Rohrbach, Marcus and Keller, Frank},
    booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
    year={2025}
}