# VideoEduBench Dataset
This dataset contains expert-annotated multimodal interaction events for VideoEduBench, a diagnostic benchmark for evaluating multimodal large language models (MLLMs) on long-horizon classroom interaction understanding.
- **Paper:** VideoEduBench: Diagnostic Benchmarking of Video Agents for Long-Horizon Classroom Interaction Understanding (KDD 2026 Undergraduate Consortium)
- **Code repository:** https://github.com/tinaxie123/VideoDR-Benchmark
## Dataset Statistics
- 62 classroom lessons (~40 hours of HD video, 1080p / 30fps)
- 10,552 expert-annotated multimodal interaction events
- Cohen's κ = 0.83 inter-rater agreement on a 10% double-annotated subset
- Subject coverage: 30 Science / 26 Liberal Arts / 6 other lessons
- Grade levels: 55 elementary, 7 secondary
## Files

| File | Description |
|---|---|
| `1.xlsx` – `62.xlsx` | Per-lesson annotation files (one row per multimodal interaction event) |
| `all_62_cases_long.xlsx` | Aggregated long-format dataset across all 62 lessons |
## Annotation Schema (MCDCS)

Each event is annotated using the Multi-agent Classroom Dialogue Coding System (MCDCS):

| Field | Type | Description |
|---|---|---|
| `timestamp` | float | Time in seconds from lesson start |
| `agent` | string | One of: Teacher / Student / AI |
| `behavior` | string | Fine-grained behavior code (~24 categories) |
| `intent` | string | Pedagogical intent: Instructional Guidance / Evaluation / Dynamic Presentation / Content Generation |
| `modality` | string | visual / audio / multimodal |
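The schema fields can be queried directly with pandas. A minimal sketch on toy events (column names follow the table above; the values themselves are invented for illustration):

```python
import pandas as pd

# Toy events mimicking the MCDCS schema (values are invented examples)
events = pd.DataFrame({
    "timestamp": [12.4, 35.0, 71.2, 90.5],
    "agent": ["Teacher", "Student", "AI", "Teacher"],
    "behavior": ["question", "answer", "content_generation", "feedback"],
    "intent": ["Instructional Guidance", "Evaluation",
               "Content Generation", "Evaluation"],
    "modality": ["audio", "audio", "multimodal", "visual"],
})

# Select all teacher events annotated with an Evaluation intent
teacher_eval = events[(events["agent"] == "Teacher")
                      & (events["intent"] == "Evaluation")]
print(len(teacher_eval))  # number of matching events
```

The same pattern works on the real annotation files once loaded with `pd.read_excel`.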
## Dual-Path Evaluation Paradigm

The benchmark introduces a dual-path diagnostic evaluation:
- **Fine-grained Multimodal Perception (FMP):** Tests whether models can localize and identify specific behavioral codes (gestures, screen operations, audio feedback) over the full 40-minute context.
- **Pedagogical Logic Reasoning (PLR):** Probes deep instructional inference, e.g., distinguishing a teacher's deliberate wait-time strategy from a technical glitch.
## Evaluation Metrics

- **ISA (Interaction Sequence Accuracy):** Perception fidelity
- **PI-F1 (Pedagogical Intent F1):** Reasoning quality
- **ARS (Agentic Retrieval Success):** Long-horizon tool use, scored with a 2-second temporal tolerance
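The 2-second tolerance in ARS can be made concrete with a small sketch. The per-query matching rule below follows the tolerance stated above; aggregating hits into a single score by simple averaging is an assumption, not the paper's exact definition:

```python
def retrieval_success(pred_ts: float, gold_ts: float, tolerance: float = 2.0) -> bool:
    """A retrieved timestamp counts as a hit if it falls within
    +/- `tolerance` seconds of the annotated event time."""
    return abs(pred_ts - gold_ts) <= tolerance

# Invented predictions vs. ground-truth event times (seconds)
preds = [12.1, 305.0, 1800.4]
golds = [12.4, 300.0, 1799.0]

hits = sum(retrieval_success(p, g) for p, g in zip(preds, golds))
ars = hits / len(golds)
print(round(ars, 3))  # 2 of 3 queries fall within tolerance -> 0.667
```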
## Video Access

The 62 raw classroom video recordings are hosted externally due to their size and the original platform's licensing terms. Access instructions are provided in the GitHub repository.
## Quick Start

```python
import pandas as pd

# Load the aggregated dataset
df = pd.read_excel("all_62_cases_long.xlsx")

# Or load per-lesson files
lesson_1 = pd.read_excel("1.xlsx")
print(lesson_1.head())
```
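The per-lesson files can also be stacked into one long table yourself. A sketch using in-memory stand-ins for `1.xlsx` and `2.xlsx` (in practice each frame would come from `pd.read_excel(f"{i}.xlsx")`; the columns shown are a subset of the schema):

```python
import pandas as pd

# Stand-ins for two per-lesson files; values are invented examples
lesson_frames = [
    pd.DataFrame({"timestamp": [3.0, 9.5], "agent": ["Teacher", "Student"]}),
    pd.DataFrame({"timestamp": [1.2], "agent": ["AI"]}),
]

# Tag each frame with its lesson id, then stack into one long table
for i, frame in enumerate(lesson_frames, start=1):
    frame["lesson"] = i
combined = pd.concat(lesson_frames, ignore_index=True)
print(combined.groupby("agent").size())
```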
## Citation

```bibtex
@inproceedings{xie2026videoedubench,
  title     = {VideoEduBench: Diagnostic Benchmarking of Video Agents for Long-Horizon Classroom Interaction Understanding},
  author    = {Xie, Haotong},
  booktitle = {KDD 2026 Undergraduate Consortium},
  year      = {2026}
}
```
## License

- **Annotations (this repository):** CC BY-NC 4.0
- **Original videos:** Governed by the Socrates platform's licensing terms
- **Code:** MIT License (in the GitHub repository)
## Contact

Haotong Xie · Shanghai University of Finance and Economics

Email: haotongxieqaq@163.com

GitHub Profile · HuggingFace Profile