Enhance dataset card: Add task category, tags, and abstract

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +23 -4
README.md CHANGED
@@ -1,3 +1,19 @@
+---
+task_categories:
+- video-text-to-text
+tags:
+- multimodal
+- dialogue
+- vllm
+- proactive-interaction
+- video-understanding
+- robotics
+- qa
+- speech
+- anomaly-detection
+- benchmark
+---
+
 ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models
 ---
 
@@ -10,6 +26,9 @@ ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models
 </div>
 </div>
 
+## Abstract
+With the growing research focus on multimodal dialogue systems, the capability for proactive interaction is gradually gaining recognition. As an alternative to conventional turn-by-turn dialogue, users increasingly expect multimodal systems to be more initiative, for example, by autonomously determining the timing of multi-turn responses in real time during video playback. To facilitate progress in this emerging area, we introduce ProactiveBench, the first comprehensive benchmark to evaluate a system's ability to engage in proactive interaction. Since model responses are generated at varying timestamps, we further propose PAUC, the first metric that accounts for the temporal dynamics of model responses. This enables a more accurate evaluation of systems operating in proactive settings. Through extensive benchmarking of various baseline systems on ProactiveBench and a user study of human preferences, we show that PAUC is in better agreement with human preferences than traditional evaluation metrics, which typically only consider the textual content of responses. These findings demonstrate that PAUC provides a more faithful assessment of user experience in proactive interaction scenarios. Project homepage: this https URL
+
 ## Introduction
 ProactiveBench is the first comprehensive benchmark designed to evaluate a system's ability to engage in proactive interaction in multimodal dialogue settings.
 Unlike traditional turn-by-turn dialogue systems, in proactive interaction the model needs to determine when to respond during playback, so both response timing and response textual content are important points for evaluation.
@@ -44,7 +63,7 @@ Each test example in `{dataset}/anno.json` has the following format:
 "role": "assistant", "content": "People are working at workstations.",
 "reply_timespan": [0.0, 9.88]
 },
-{ ... }
+{ ... }
 ]
 }
 ```
@@ -52,12 +71,12 @@ Each test example in `{dataset}/anno.json` has the following format:
 ## Citation
 ```bibtex
 @misc{wang2025proactivebenchcomprehensivebenchmarkevaluating,
-title={ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models},
+title={ProactiveBench: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models},
 author={Yueqian Wang and Xiaojun Meng and Yifan Wang and Huishuai Zhang and Dongyan Zhao},
 year={2025},
 eprint={2507.09313},
 archivePrefix={arXiv},
 primaryClass={cs.CV},
-url={https://arxiv.org/abs/2507.09313},
+url={https://arxiv.org/abs/2507.09313},
 }
-```
+```
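The annotation excerpt in the diff shows assistant turns carrying `role`, `content`, and `reply_timespan` fields. A minimal loader for that schema could look like the sketch below; only those three field names come from the card, while the `conversation` key and the surrounding file layout are assumptions to be adapted to the real `anno.json`.

```python
import json


def load_examples(anno_path):
    """Load test examples from an anno.json file.

    Assumes the top-level file is plain JSON; the exact layout beyond
    the per-turn fields shown in the dataset card is an assumption.
    """
    with open(anno_path) as f:
        return json.load(f)


def assistant_turns(example):
    """Yield ((start, end), content) for each timestamped assistant turn.

    The "conversation" key is hypothetical; "role", "content", and
    "reply_timespan" match the fields shown in the card's example.
    """
    for turn in example.get("conversation", []):
        if turn.get("role") == "assistant" and "reply_timespan" in turn:
            yield tuple(turn["reply_timespan"]), turn["content"]
```

Iterating this way keeps the response text paired with its timespan, which is exactly the pair of signals (timing plus content) that the card says proactive evaluation must consider.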