---
license: mit
language:
  - en
task_categories:
  - question-answering
  - text-generation
tags:
  - scientific-reasoning
  - video-understanding
  - physics
  - chemistry
  - benchmark
---

# VideoScienceBench

A benchmark for evaluating video understanding and scientific reasoning in vision-language models. Each example pairs a textual description of an experiment (what is shown) with the correct scientific explanation (expected phenomenon).

## Dataset Summary

| Attribute | Value |
|-----------|-------|
| Examples  | 160 |
| Domains   | Physics, Chemistry |
| Format    | JSONL (prompt + expected phenomenon + vid) |

## Data Creation Pipeline

Each researcher selects two or more scientific concepts and consults relevant educational materials or videos to design a prompt. Prompts then undergo peer review and model review, followed by model-based quality checks, before being finalized for inclusion in the dataset.

## Dataset Structure

Each line is a JSON object with:

| Field | Type | Description |
|-------|------|-------------|
| `keywords` | `list[str]` | Relevant scientific concepts |
| `field` | `str` | Scientific discipline (e.g., Physics) |
| `prompt` | `str` | Textual description of what is shown in the video/experiment |
| `expected phenomenon` | `str` | The correct scientific explanation |
| `vid` | `str` | Video identifier |

## Example

```json
{
  "keywords": ["Buoyancy", "Gas Laws", "Pressure"],
  "field": "Physics",
  "prompt": "A sealed plastic bottle is filled with water containing a floating eyedropper with an air bubble inside. A person squeezes the sides of the bottle.",
  "expected phenomenon": "The eyedropper immediately sinks when the bottle is squeezed, then rises again when released, as increased pressure compresses the air bubble, reducing buoyancy.",
  "vid": "98"
}
```

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("lmgame/VideoScienceBench", data_files="data_filtered.jsonl")
# Access the data (single split)
data = dataset["train"]
```
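If you prefer not to depend on the `datasets` library, the JSONL file can also be read directly with the standard `json` module. The sketch below is illustrative (the helper names `load_examples` and `by_field` are not part of the dataset); the field names follow the table above:

```python
import json

def load_examples(path):
    """Parse a VideoScienceBench JSONL file: one JSON object per line."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                examples.append(json.loads(line))
    return examples

def by_field(examples, field):
    """Select examples from one scientific discipline, e.g. 'Physics'."""
    return [ex for ex in examples if ex["field"] == field]
```

For instance, `by_field(load_examples("data_filtered.jsonl"), "Physics")` returns only the physics examples.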

## License

MIT