
VideoEduBench Dataset

This dataset contains expert-annotated multimodal interaction events for VideoEduBench, a diagnostic benchmark for evaluating multimodal large language models (MLLMs) on long-horizon classroom interaction understanding.

📄 Paper: VideoEduBench: Diagnostic Benchmarking of Video Agents for Long-Horizon Classroom Interaction Understanding (KDD 2026 Undergraduate Consortium)

🔗 Code repository: https://github.com/tinaxie123/VideoDR-Benchmark

📊 Dataset Statistics

  • 62 classroom lessons (~40 hours of HD video, 1080p / 30fps)
  • 10,552 expert-annotated multimodal interaction events
  • Cohen's κ = 0.83 inter-rater agreement on 10% double-annotated subset
  • Subject coverage: 30 Science / 26 Liberal Arts / 6 other lessons
  • Grade levels: 55 elementary, 7 secondary

πŸ—‚οΈ Files

| File | Description |
| --- | --- |
| `1.xlsx` to `62.xlsx` | Per-lesson annotation files (one row per multimodal interaction event) |
| `all_62_cases_long.xlsx` | Aggregated long-format dataset across all 62 lessons |

📋 Annotation Schema (MCDCS)

Each event is annotated using the Multi-agent Classroom Dialogue Coding System (MCDCS):

| Field | Type | Description |
| --- | --- | --- |
| `timestamp` | float | Time in seconds from lesson start |
| `agent` | string | One of: Teacher / Student / AI |
| `behavior` | string | Fine-grained behavior code (~24 categories) |
| `intent` | string | Pedagogical intent: Instructional Guidance / Evaluation / Dynamic Presentation / Content Generation |
| `modality` | string | visual / audio / multimodal |
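The schema above can be illustrated with a small hand-made sample. This is only a sketch: the specific behavior code names and field values below are hypothetical placeholders, not taken from the actual annotation files.

```python
# Hypothetical events following the MCDCS schema; the behavior codes here
# are illustrative placeholders, not the dataset's real code names.
events = [
    {"timestamp": 12.5, "agent": "Teacher", "behavior": "open_question",
     "intent": "Instructional Guidance", "modality": "audio"},
    {"timestamp": 15.8, "agent": "Student", "behavior": "verbal_response",
     "intent": "Evaluation", "modality": "audio"},
    {"timestamp": 31.2, "agent": "AI", "behavior": "slide_transition",
     "intent": "Dynamic Presentation", "modality": "visual"},
]

# Example query: all teacher events in chronological order
teacher_events = sorted(
    (e for e in events if e["agent"] == "Teacher"),
    key=lambda e: e["timestamp"],
)
print(len(teacher_events))
```

Because every event carries the same five fields, queries over agents, intents, or time windows reduce to simple row filters, whether over these dicts or over the Excel files loaded as DataFrames.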

🎯 Dual-Path Evaluation Paradigm

The benchmark introduces a dual-path diagnostic evaluation:

  1. Fine-grained Multimodal Perception (FMP): Tests whether models can localize and identify specific behavioral codes (gestures, screen operations, audio feedback) over the full 40-minute context.
  2. Pedagogical Logic Reasoning (PLR): Probes deep instructional inference, e.g., distinguishing a teacher's deliberate wait-time strategy from a technical glitch.

πŸ“ Evaluation Metrics

  • ISA (Interaction Sequence Accuracy): Perception fidelity
  • PI-F1 (Pedagogical Intent F1): Reasoning quality
  • ARS (Agentic Retrieval Success): Long-horizon tool-use, with 2-second temporal tolerance
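As a rough illustration of the ARS tolerance rule, a retrieved timestamp can be scored against a gold timestamp with a ±2-second window. This is a minimal sketch of the matching logic only; the benchmark's actual scoring pipeline lives in the code repository and may differ in detail.

```python
def within_tolerance(predicted: float, gold: float, tol: float = 2.0) -> bool:
    """True if a retrieved timestamp falls within the temporal tolerance (seconds)."""
    return abs(predicted - gold) <= tol


def retrieval_success_rate(predicted, gold, tol: float = 2.0) -> float:
    """Fraction of queries whose retrieved timestamp matches the gold one.

    `predicted` and `gold` are parallel lists of timestamps in seconds,
    one entry per retrieval query.
    """
    hits = sum(within_tolerance(p, g, tol) for p, g in zip(predicted, gold))
    return hits / len(gold)


# A retrieval at 118.4 s for a gold event at 120.0 s is a hit (|Δt| = 1.6 ≤ 2 s),
# while one at 95.0 s for the same gold event is a miss.
print(retrieval_success_rate([118.4, 95.0], [120.0, 120.0]))  # 0.5
```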

🎥 Video Access

The 62 raw classroom video recordings are hosted externally due to their size and the original platform's licensing terms. Access instructions are provided in the GitHub repository.

🚀 Quick Start

```python
import pandas as pd  # reading .xlsx files also requires openpyxl (pip install openpyxl)

# Load the aggregated dataset
df = pd.read_excel("all_62_cases_long.xlsx")

# Or load per-lesson files
lesson_1 = pd.read_excel("1.xlsx")
print(lesson_1.head())
```

📄 Citation

```bibtex
@inproceedings{xie2026videoedubench,
  title     = {VideoEduBench: Diagnostic Benchmarking of Video Agents for Long-Horizon Classroom Interaction Understanding},
  author    = {Xie, Haotong},
  booktitle = {KDD 2026 Undergraduate Consortium},
  year      = {2026}
}
```

📜 License

  • Annotations (this repository): CC BY-NC 4.0
  • Original videos: Governed by the Socrates platform's licensing terms
  • Code: MIT License (in the GitHub repository)

📧 Contact

Haotong Xie · Shanghai University of Finance and Economics
📧 haotongxieqaq@163.com
🔗 GitHub Profile · HuggingFace Profile
