---
license: mit
language:
- en
task_categories:
- question-answering
- text-generation
tags:
- scientific-reasoning
- video-understanding
- physics
- chemistry
- benchmark
---
# VideoScienceBench
A benchmark for evaluating video understanding and scientific reasoning in vision-language models. Each example pairs a textual description of an experiment (what is shown) with the correct scientific explanation (expected phenomenon), plus source links to real educational videos.
## Dataset Summary
| Attribute | Value |
|---|---|
| Examples | 160 |
| Domains | Physics, Chemistry |
| Format | CSV (prompt + expected answer + source URL) |
## Data Creation Pipeline

Each researcher selects two or more scientific concepts and consults relevant educational materials or videos to design a prompt. Prompts then undergo peer review and model-based quality checking before being finalized for inclusion in the dataset.
## Dataset Structure
| Column | Description |
|---|---|
| Example Title | Name of the scientific demonstration |
| Fields | Scientific discipline (e.g., Physics) |
| Keywords | Relevant scientific concepts |
| Prompts | Textual description of what is shown in the video/experiment |
| Source | URL to educational material or video |
| Expected phenomenon | The correct scientific explanation |
| Unique ID | Integer identifier |
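As a concrete illustration of the schema, a single CSV row can be parsed into a record keyed by the column names above. This is a minimal sketch using the Python standard library; the field values and URL below are hypothetical stand-ins, only the column names come from the table.

```python
import csv
import io

# Hypothetical CSV content using the column names from the table above.
sample_csv = io.StringIO(
    '"Example Title","Fields","Keywords","Prompts","Source",'
    '"Expected phenomenon","Unique ID"\n'
    '"Chain Fountain","Physics","momentum; tension",'
    '"A glass beaker is filled with a loosely coiled metal ball chain...",'
    '"https://example.com/video",'
    '"The chain rises up out of the beaker...","1"\n'
)

# DictReader maps each row to a dict keyed by the header line.
reader = csv.DictReader(sample_csv)
record = next(reader)

print(record["Example Title"])   # -> Chain Fountain
print(int(record["Unique ID"]))  # -> 1
```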
## Example
| Example Title | Prompts (excerpt) | Expected phenomenon (excerpt) |
|---|---|---|
| Chain Fountain | A glass beaker is filled with a loosely coiled metal ball chain... | The chain rises up out of the beaker in an elegant upward arc... |
| Prince Rupert's Drop | A teardrop-shaped piece of tempered glass is held at its bulbous head. Small pliers snip the thin tail... | The entire drop explosively shatters into powder... |
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("lmgame/VideoScienceBench")

# Access the data
data = dataset["train"]  # or the default split
```
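Each row in the loaded split is a dict keyed by the column names, so rows can be filtered with plain Python, for example by the `Fields` column. This sketch uses two hypothetical rows in place of the loaded split so it runs without downloading the dataset; only the keys follow the schema above.

```python
# Toy rows standing in for dataset["train"]; keys follow the schema above,
# but the titles and values are hypothetical.
rows = [
    {"Example Title": "Chain Fountain", "Fields": "Physics"},
    {"Example Title": "Color-Change Reaction", "Fields": "Chemistry"},
]

# Keep only the physics demonstrations.
physics = [r for r in rows if r["Fields"] == "Physics"]
print([r["Example Title"] for r in physics])  # -> ['Chain Fountain']
```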
## License
MIT