---
license: cc-by-4.0
task_categories:
  - summarization
  - text-generation
size_categories:
  - 10K<n<100K
language:
  - en
extra_gated_heading: Request Access to the Dataset
extra_gated_description: >-
  This dataset is restricted. Please complete the form below to request access.
  Incomplete requests may be rejected.
extra_gated_prompt: >
  By requesting access, you acknowledge that:

  - You will only use the dataset for non-commercial academic research or
  educational purposes.

  - You agree to comply with all applicable data privacy and ethical standards.

  - Any use involving human subjects will require separate IRB/ethics approval
  from your institution, if applicable.
extra_gated_fields:
  Full Name: text
  Official Email Address: text
  Institution / Organization: text
  Department / Research Group: text
  Position / Title:
    type: select
    options:
      - Professor / PI
      - Postdoc
      - PhD Student
      - Master's Student
      - Undergraduate
      - Research Scientist / Engineer
      - label: Other
        value: other
  Country: country
  Intended Use Case (brief description): text
  Supervisor or Advisor (if student): text
  Will you process or combine this dataset with other sensitive data? (If yes, please describe): text
  I confirm I will not use the dataset for any commercial or for-profit purposes: checkbox
  I agree to follow all relevant data protection, privacy, and ethical guidelines: checkbox
  I understand that data access may be revoked at any time if terms are violated: checkbox
---

# VISTA Dataset

## Dataset Structure

```
dataset/
├── videos/                 # Video files directory
│   ├── train_part1/        # Training set videos (first 8000 samples)
│   ├── train_part2/        # Training set videos (remaining samples)
│   ├── val/                # Validation set videos
│   └── test/               # Test set videos
├── train_part1.json        # Training set metadata (first 8000 samples)
├── train_part2.json        # Training set metadata (remaining samples)
├── val.json                # Validation set metadata
├── test.json               # Test set metadata
└── dataset_info.json       # Dataset description and statistics
```

**Note:** A single folder in a Hugging Face dataset repository can hold at most 10,000 files, so the training-set videos are split across the `train_part1` and `train_part2` folders.

## Video Files

The video files are placed in the `videos/` folder, organized by split (`train_part1`, `train_part2`, `val`, `test`).
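
If you work from a local clone, note that a given training video may sit in either `train_part1` or `train_part2`, so a filename lookup has to check both folders. A minimal helper sketch (the `find_video` name and the `dataset/videos` root are our assumptions, following the tree above):

```python
from pathlib import Path

def find_video(video_file: str, root: str = "dataset/videos") -> Path:
    """Locate a video by filename, checking every split folder in turn."""
    # Training videos are split across train_part1 and train_part2,
    # so search all four split directories.
    for split in ("train_part1", "train_part2", "val", "test"):
        candidate = Path(root) / split / video_file
        if candidate.exists():
            return candidate
    raise FileNotFoundError(f"{video_file} not found under {root}")
```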

## Metadata Format

Each sample includes the following information:

- `id`: Sample ID
- `title`: Paper title
- `authors`: List of authors
- `abstract`: Paper abstract (ground-truth summary)
- `video_file`: Video filename
- `video_path`: Path to the video file (this is processed as a video feature)
- `paper_url`: Link to the paper PDF
- `venue`: Publication venue
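
To inspect the schema directly, you can open one of the metadata files and print a record. A small sketch, assuming each JSON file holds a list of such records (the field names follow the list above):

```python
import json

# Assumes train_part1.json is a JSON array of sample records
with open("train_part1.json", encoding="utf-8") as f:
    records = json.load(f)

sample = records[0]
for field in ("id", "title", "authors", "abstract",
              "video_file", "video_path", "paper_url", "venue"):
    print(f"{field}: {sample.get(field)}")
```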

## Usage with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("dongqi-me/VISTA")

# Access splits
train_part1 = dataset["train_part1"]
train_part2 = dataset["train_part2"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Access videos
video_part1 = train_part1[0]["video_path"]  # Access video in train_part1
video_part2 = train_part2[0]["video_path"]  # Access video in train_part2

# Or load the metadata from a local directory
local_dataset = load_dataset("json", data_files={
    "train_part1": "train_part1.json",
    "train_part2": "train_part2.json",
    "validation": "val.json",
    "test": "test.json",
})
```
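
To feed the videos to a model, any standard decoder can turn a `video_path` into frames. A minimal sketch using OpenCV (our choice; the dataset does not prescribe a decoder, and `num_frames=16` is an arbitrary example):

```python
import cv2  # pip install opencv-python

def sample_frames(video_path: str, num_frames: int = 16):
    """Uniformly sample `num_frames` RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * total / num_frames) for i in range(num_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            # OpenCV decodes to BGR; convert to RGB for most ML pipelines
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

frames = sample_frames(train_part1[0]["video_path"])
```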

## Citation

If you use this dataset in your research, please cite:

```bibtex
@inproceedings{liu-etal-2025-talk,
    title = "What Is That Talk About? A Video-to-Text Summarization Dataset for Scientific Presentations",
    author = "Liu, Dongqi  and
      Whitehouse, Chenxi  and
      Yu, Xi  and
      Mahon, Louis  and
      Saxena, Rohit  and
      Zhao, Zheng  and
      Qiu, Yifu  and
      Lapata, Mirella  and
      Demberg, Vera",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.310/",
    pages = "6187--6210",
    ISBN = "979-8-89176-251-0",
    abstract = "Transforming recorded videos into concise and accurate textual summaries is a growing challenge in multimodal learning. This paper introduces VISTA, a dataset specifically designed for video-to-text summarization in scientific domains. VISTA contains 18,599 recorded AI conference presentations paired with their corresponding paper abstracts. We benchmark the performance of state-of-the-art large models and apply a plan-based framework to better capture the structured nature of abstracts. Both human and automated evaluations confirm that explicit planning enhances summary quality and factual consistency. However, a considerable gap remains between models and human performance, highlighting the challenges of our dataset. This study aims to pave the way for future research on scientific video-to-text summarization."
}
```