---
task_categories:
- video-text-to-text
dataset_info:
  features:
  - name: video_source
    dtype: string
  - name: video_id
    dtype: string
  - name: duration_sec
    dtype: float64
  - name: fps
    dtype: float64
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: correct_answer
    dtype: string
  - name: time_reference
    sequence: float64
  - name: question_type
    dtype: string
  - name: question_time
    dtype: float64
  splits:
  - name: train
    num_bytes: 291464
    num_examples: 900
  download_size: 98308
  dataset_size: 291464
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

<div align="center">

<h2>
    RIVER: A Real-Time Interaction Benchmark for Video LLMs
</h2>

<img src="assets/RIVER logo.png" width="80" alt="RIVER logo">

[Yansong Shi<sup>*</sup>](https://scholar.google.com/citations?user=R7J57vQAAAAJ), 
[Qingsong Zhao<sup>*</sup>](https://scholar.google.com/citations?user=ux-dlywAAAAJ), 
[Tianxiang Jiang<sup>*</sup>](https://github.com/Arsiuuu), 
[Xiangyu Zeng](https://scholar.google.com/citations?user=jS13DXkAAAAJ&hl), 
[Yi Wang](https://scholar.google.com/citations?user=Xm2M8UwAAAAJ), 
[Limin Wang<sup>†</sup>](https://scholar.google.com/citations?user=HEuN8PcAAAAJ)  
[[💻 GitHub]](https://github.com/OpenGVLab/RIVER), 
[[🤗 Dataset on HF]](https://huggingface.co/datasets/nanamma/RIVER), 
[[📄 ArXiv]](https://arxiv.org/abs/2603.03985)
</div>


## Introduction
This project introduces **RIVER Bench**, a benchmark designed to evaluate the real-time interactive capabilities of Video Large Language Models on streaming video, featuring novel tasks for memory, live perception, and proactive response.

![RIVER](assets/river.jpg)

Based on the frequency and timing of reference events, questions, and answers, we further categorize online interaction tasks into four distinct subclasses, as depicted in the figure above. For the Retro-Memory task, the clue is drawn from the past; for the Live-Perception task, it comes from the present. Both demand an immediate response. For the Pro-Response task, Video LLMs must wait until the corresponding clue appears and then respond as quickly as possible.
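Each QA record follows the feature schema declared in the metadata above (`video_source`, `time_reference`, `question_time`, etc.). A minimal sketch of what a record might look like and how to bucket questions by task type; all field values below are illustrative placeholders, not taken from the actual data, and the `question_type` label is assumed to match the task names used in this card:

```python
from collections import defaultdict

# Illustrative record mirroring the dataset_info schema; values are made up.
sample = {
    "video_source": "Ego4D",
    "video_id": "abc123",
    "duration_sec": 180.0,
    "fps": 30.0,
    "question_id": "q_0001",
    "question": "What object did the person pick up earlier?",
    "choices": ["A. cup", "B. phone", "C. book", "D. keys"],
    "correct_answer": "A",
    "time_reference": [12.5, 18.0],   # reference event span, in seconds
    "question_type": "Retro-Memory",  # assumed label, matching the task names
    "question_time": 45.0,            # when the question is issued in the stream
}

def group_by_type(records):
    """Bucket QA records by their question_type field."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["question_type"]].append(r["question_id"])
    return dict(buckets)

print(group_by_type([sample]))
```

For a Retro-Memory record like this one, `question_time` falls after the `time_reference` span, so the model must recall a past event from the stream.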

## Dataset Preparation
|Dataset       |URL|
|--------------|---|
|LongVideoBench|https://github.com/longvideobench/LongVideoBench|
|Vript-RR      |https://github.com/mutonix/Vript|
|LVBench       |https://github.com/zai-org/LVBench|
|Ego4D         |https://github.com/facebookresearch/Ego4d|
|QVHighlights  |https://github.com/jayleicn/moment_detr|

## Citation

If you find this project useful in your research, please consider citing:
```BibTeX
@misc{shi2026riverrealtimeinteractionbenchmark,
      title={RIVER: A Real-Time Interaction Benchmark for Video LLMs}, 
      author={Yansong Shi and Qingsong Zhao and Tianxiang Jiang and Xiangyu Zeng and Yi Wang and Limin Wang},
      year={2026},
      eprint={2603.03985},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.03985}, 
}
```