---
task_categories:
  - video-text-to-text
tags:
  - multimodal
  - dialogue
  - vllm
  - proactive-interaction
  - video-understanding
  - robotics
  - qa
  - speech
  - anomaly-detection
  - benchmark
---

# ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models

## Abstract

With the growing research focus on multimodal dialogue systems, the capability for proactive interaction is gradually gaining recognition. As an alternative to conventional turn-by-turn dialogue, users increasingly expect multimodal systems to take more initiative, for example, by autonomously determining the timing of multi-turn responses in real time during video playback. To facilitate progress in this emerging area, we introduce ProactiveBench, the first comprehensive benchmark to evaluate a system's ability to engage in proactive interaction. Since model responses are generated at varying timestamps, we further propose PAUC, the first metric that accounts for the temporal dynamics of model responses. This enables a more accurate evaluation of systems operating in proactive settings. Through extensive benchmarking of various baseline systems on ProactiveBench and a user study of human preferences, we show that PAUC is in better agreement with human preferences than traditional evaluation metrics, which typically only consider the textual content of responses. These findings demonstrate that PAUC provides a more faithful assessment of user experience in proactive interaction scenarios. Project homepage: this https URL

## Introduction

ProactiveBench is the first comprehensive benchmark designed to evaluate a system's ability to engage in proactive interaction in multimodal dialogue settings. Unlike traditional turn-by-turn dialogue systems, a proactive model must determine when to respond during video playback, so both the timing and the textual content of responses are evaluated.

## Dataset Statistics

ProactiveBench contains 4 tasks:

  1. Proactive web-video QA [WEB]: centering on general web-video understanding.

  2. Proactive ego-centric video QA [EGO]: centering on first-person-view video comprehension, particularly relevant in robotics and daily assistant applications.

  3. Proactive TV-series video QA [TV]: emphasizing dialogue and social relationship understanding with speech input.

  4. Proactive video anomaly detection [VAD]: targeting surveillance-video monitoring and alerting.

  • 1377 videos from different sources
  • 1427 different questions, and 3510 ground-truth reply turns
  • Fully proactive questions and open-ended answers ✅
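These statistics can be reproduced with a short script over the per-task annotation files. A minimal sketch, assuming each task folder (e.g. `web/`, `ego/`, `tv/`, `vad/` — the folder names are illustrative, not confirmed by this card) holds an `anno.json` in the format shown under Data Format, and treating each test example as one question:

```python
import json
from pathlib import Path

def dataset_stats(anno_files):
    """Aggregate video, question, and reply-turn counts over a list of anno.json paths."""
    videos, questions, reply_turns = set(), 0, 0
    for path in anno_files:
        examples = json.loads(Path(path).read_text())
        for ex in examples:
            videos.add(ex["video"])           # unique video file names
            questions += 1                    # one question per test example
            reply_turns += len(ex["answer"])  # ground-truth reply turns
    return {"videos": len(videos), "questions": questions, "reply_turns": reply_turns}

# Illustrative usage (paths are assumptions):
# dataset_stats(["web/anno.json", "ego/anno.json", "tv/anno.json", "vad/anno.json"])
```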

## Data Format

Each test example in {dataset}/anno.json has the following format:

{
    "question_id": "OSfMU69X3C4.7.mp4", // unique identifier for this test example
    "video": "OSfMU69X3C4.7.mp4",       // video file name in `video` folder
    "conversation": [       // model input
        {"role": "user", "time": 0, "content": "What are the people doing in the office?"}
    ],
    "answer": [     // expected model output
        {       // the model is expected to reply with this content within the reply timespan
            "role": "assistant", "content": "People are working at workstations.",
            "reply_timespan": [0.0, 9.88]
        },
        { ... }
    ]
}
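A minimal sketch of parsing this annotation format, e.g. to extract the ground-truth reply turns for scoring. The sample is inlined here so the snippet is self-contained; real code would instead load the `anno.json` of the task folder being evaluated:

```python
import json

# Inlined sample mirroring the annotation format above (comments stripped,
# since JSON proper does not allow them).
sample_json = """
[{
    "question_id": "OSfMU69X3C4.7.mp4",
    "video": "OSfMU69X3C4.7.mp4",
    "conversation": [
        {"role": "user", "time": 0, "content": "What are the people doing in the office?"}
    ],
    "answer": [
        {"role": "assistant", "content": "People are working at workstations.",
         "reply_timespan": [0.0, 9.88]}
    ]
}]
"""

def iter_reply_turns(annotations):
    """Yield (question_id, start, end, content) for each ground-truth reply turn."""
    for example in annotations:
        for turn in example["answer"]:
            start, end = turn["reply_timespan"]
            yield example["question_id"], start, end, turn["content"]

for qid, start, end, content in iter_reply_turns(json.loads(sample_json)):
    print(f"{qid}: reply within [{start}, {end}]s -> {content}")
```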

## Citation

@misc{wang2025proactivebenchcomprehensivebenchmarkevaluating,
      title={ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models},
      author={Yueqian Wang and Xiaojun Meng and Yifan Wang and Huishuai Zhang and Dongyan Zhao},
      year={2025},
      eprint={2507.09313},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.09313},
}