---
dataset_info:
  features:
    - name: video_source
      dtype: string
    - name: video_id
      dtype: string
    - name: duration_sec
      dtype: float64
    - name: fps
      dtype: float64
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: choices
      sequence: string
    - name: correct_answer
      dtype: string
    - name: time_reference
      sequence: float64
    - name: question_type
      dtype: string
    - name: question_time
      dtype: float64
  splits:
    - name: train
      num_bytes: 291464
      num_examples: 900
  download_size: 98308
  dataset_size: 291464
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# RIVER: A Real-Time Interaction Benchmark for Video LLMs

## Introduction

This project introduces RIVER Bench, a benchmark designed to evaluate the real-time interactive capabilities of Video Large Language Models (Video LLMs) on streaming video, featuring novel tasks for retrospective memory, live perception, and proactive response.

## RIVER

Based on the frequency and timing of reference events, questions, and answers, we further categorize online interaction tasks into four distinct subclasses, as depicted in the figure. For the Retro-Memory task, the clue is drawn from the past; for the Live-Perception task, it comes from the present. Both demand an immediate response. For the Pro-Response task, Video LLMs must wait until the corresponding clue appears and then respond as quickly as possible.
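The categorization above can be sketched as a simple rule comparing when the clue occurs (`time_reference`) with when the question is asked (`question_time`). This is an illustration of the described distinction, not the benchmark's official assignment logic:

```python
def classify_task(time_reference: list[float], question_time: float) -> str:
    """Illustrative rule for the subclasses described above: compare
    when the clue occurs with when the question is asked."""
    clue_start, clue_end = min(time_reference), max(time_reference)
    if clue_end < question_time:
        return "retro-memory"      # clue lies entirely in the past
    if clue_start <= question_time <= clue_end:
        return "live-perception"   # clue is unfolding right now
    return "pro-response"          # clue has yet to appear

print(classify_task([10.0, 12.0], 30.0))  # retro-memory
print(classify_task([28.0, 32.0], 30.0))  # live-perception
print(classify_task([40.0, 45.0], 30.0))  # pro-response
```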

## Dataset Preparation

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@misc{shi2026riverrealtimeinteractionbenchmark,
      title={RIVER: A Real-Time Interaction Benchmark for Video LLMs},
      author={Yansong Shi and Qingsong Zhao and Tianxiang Jiang and Xiangyu Zeng and Yi Wang and Limin Wang},
      year={2026},
      eprint={2603.03985},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.03985},
}
```