nanamma committed
Commit 1aecacc · verified · 1 parent: b8a078c

Update README.md

Files changed (1): README.md (+2, -2)
```diff
--- a/README.md
+++ b/README.md
@@ -41,7 +41,7 @@ configs:
 RIVER: A Real-Time Interaction Benchmark for Video LLMs
 </h2>
 
-<img src="assets/RIVER logo.png" width="80" alt="RIVER logo">
+<img src="https://github.com/OpenGVLab/RIVER/blob/master/assets/RIVER%20logo.png" width="80" alt="RIVER logo">
 
 [Yansong Shi<sup>*</sup>](https://scholar.google.com/citations?user=R7J57vQAAAAJ),
 [Qingsong Zhao<sup>*</sup>](https://scholar.google.com/citations?user=ux-dlywAAAAJ),
@@ -58,7 +58,7 @@ configs:
 ## Introduction
 This project introduces **RIVER Bench**, designed to evaluate the real-time interactive capabilities of Video Large Language Models through streaming video perception, featuring novel tasks for memory, live-perception, and proactive response.
 
-![RIVER](assets/river.jpg)
+[![RIVER](https://github.com/OpenGVLab/RIVER/blob/master/assets/river.jpg)](https://github.com/OpenGVLab/RIVER/blob/master/assets/river.jpg)
 
 Based on the frequency and timing of reference events, questions, and answers, we further categorize online interaction tasks into four distinct subclasses, as visually depicted in the figure. For the Retro-Memory, the clue is drawn from the past; for the live-Perception, it comes from the present—both demand an immediate response. For the Pro-Response task, Video LLMs need to wait until the corresponding clue appears and then respond as quickly as possible.
```
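The README's closing paragraph defines subclasses by when the reference clue occurs relative to the question: in the past (Retro-Memory), in the present (Live-Perception), or yet to come (Pro-Response). A minimal sketch of that timing rule — the function name and timestamp arguments are hypothetical, not part of the RIVER codebase, and the text names only three of the four subclasses:

```python
def classify_task(clue_time: float, question_time: float) -> str:
    """Assign a RIVER online-interaction subclass from clue timing.

    Hypothetical helper: compares when the reference clue appears in the
    stream against when the question is asked, per the README description.
    """
    if clue_time < question_time:
        # Clue is drawn from the past; an immediate response is expected.
        return "Retro-Memory"
    if clue_time == question_time:
        # Clue comes from the present; an immediate response is expected.
        return "Live-Perception"
    # Clue has not appeared yet; the model must wait for it, then
    # respond as quickly as possible.
    return "Pro-Response"
```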