---
license: cc-by-nc-sa-4.0
task_categories:
  - video-text-to-text
language:
  - en
tags:
  - text-generation
  - video-captioning
  - video-grounding
size_categories:
  - 1K<n<10K
configs:
  - config_name: data_processed
    data_files:
      - split: train
        path:
          - iGround_train_set_processed.jsonl
      - split: val
        path:
          - iGround_val_set_processed.jsonl
      - split: test
        path:
          - iGround_test_set_processed.jsonl
  - config_name: data_raw
    data_files:
      - split: train
        path:
          - iGround_train_set_raw.jsonl
      - split: val
        path:
          - iGround_val_set_raw.jsonl
      - split: test
        path:
          - iGround_test_set_raw.jsonl
  - config_name: keys
    data_files:
      - split: train
        path:
          - iGround_train_set_keys.jsonl
      - split: val
        path:
          - iGround_val_set_keys.jsonl
---

This repo contains iGround, the manually annotated dataset introduced in the paper "Large-scale Pre-training for Grounded Video Caption Generation".

## 📦 Loading the Dataset

You can load each configuration and split directly with the 🤗 Datasets library:

```python
from datasets import load_dataset

repo = "ekazakos/iGround"

# Available configs:
# - data_processed
# - data_raw
# - keys
#
# data_processed and data_raw provide train, val, and test splits;
# keys provides train and val splits.

# data_processed: annotations after the processing used to train GROVE.
#   Processing merges multiple instances of the same object type in a clip
#   into a single annotation by taking the union of all boxes for that object type.
ds_proc_train = load_dataset(repo, "data_processed", split="train")
ds_proc_val = load_dataset(repo, "data_processed", split="val")
ds_proc_test = load_dataset(repo, "data_processed", split="test")

# data_raw: raw annotations without any processing.
#   The same object type can appear multiple times in a video,
#   with distinct bounding boxes per instance and per frame.
ds_raw_train = load_dataset(repo, "data_raw", split="train")
ds_raw_val = load_dataset(repo, "data_raw", split="val")
ds_raw_test = load_dataset(repo, "data_raw", split="test")

# keys: the corresponding video IDs for the splits above.
ds_keys_train = load_dataset(repo, "keys", split="train")
ds_keys_val = load_dataset(repo, "keys", split="val")
```
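The underlying files are plain JSON Lines, so they can also be read directly without the 🤗 Datasets library. A minimal sketch using only the standard library:

```python
import json


def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts (one JSON record per line)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# e.g. records = load_jsonl("iGround_train_set_processed.jsonl")
```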

## 🎥 Download iGround videos

- Fill in this form to obtain links to the iGround videos.
- Run the following script, found here, to download the iGround videos using the provided links:

  ```shell
  bash scripts/download_iGround.sh iGround_links.txt /path/to/iground_videos_dir
  ```

- ⚠️ Caution: the links expire in 7 days.
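Because the links expire after 7 days, it can help to verify that every clip made it to disk by comparing the video IDs from the `keys` config against the download directory. The sketch below assumes each clip is stored as `<video_id>.mp4`; that naming convention is an assumption, not documented behaviour:

```python
from pathlib import Path


def missing_videos(video_ids, videos_dir, ext=".mp4"):
    """Return the IDs from `video_ids` with no matching file in `videos_dir`.

    Assumes (hypothetically) that each clip is stored as <video_id><ext>.
    """
    on_disk = {p.stem for p in Path(videos_dir).glob(f"*{ext}")}
    return [vid for vid in video_ids if vid not in on_disk]
```

Re-request the links via the form for any IDs this reports before they expire.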

## 📖 Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{kazakos2025grove,
  title     = {Large-scale Pre-training for Grounded Video Caption Generation},
  author    = {Evangelos Kazakos and Cordelia Schmid and Josef Sivic},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}
```