path: StreamingBench/Proactive_Output.csv
---
# StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding

<div align="center">
<img src="./figs/icon.png" width="100%" alt="StreamingBench Banner">

<div style="margin: 30px 0">
<a href="https://streamingbench.github.io/" style="margin: 0 10px">🏠 Project Page</a> |
<a href="https://arxiv.org/abs/2411.03628" style="margin: 0 10px">📄 arXiv Paper</a> |
<a href="https://huggingface.co/datasets/mjuicem/StreamingBench" style="margin: 0 10px">📦 Dataset</a> |
<a href="https://streamingbench.github.io/#leaderboard" style="margin: 0 10px">🏅 Leaderboard</a>
</div>
</div>

**StreamingBench** evaluates **Multimodal Large Language Models (MLLMs)** in real-time, streaming video understanding tasks. 🚀

## 🎞️ Overview

As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before any query is made. This is far from the human ability to process and respond to video streams in real time, a setting that captures the dynamic nature of multimedia content. To bridge this gap, **StreamingBench** introduces the first comprehensive benchmark for streaming video understanding in MLLMs.

### Key Evaluation Aspects

- 🎯 **Real-time Visual Understanding**: Can the model process and respond to visual changes in real time?
- 🔊 **Omni-source Understanding**: Does the model integrate visual and audio inputs synchronously in real-time video streams?
- 💬 **Contextual Understanding**: Can the model comprehend the broader context within video streams?

### Dataset Statistics

- 📊 **900** diverse videos
- 📝 **4,500** human-annotated QA pairs
- ⏱️ Five questions per video, asked at different timestamps
#### 🎬 Video Categories

<div align="center">
<img src="./figs/StreamingBench_Video.png" width="80%" alt="Video Categories">
</div>

#### 📊 Task Taxonomy

<div align="center">
<img src="./figs/task_taxonomy.png" width="80%" alt="Task Taxonomy">
</div>

## 📝 Dataset Examples

https://github.com/user-attachments/assets/e6d1655d-ab3f-47a7-973a-8fd6c8962307

<div align="center">
<video width="100%" controls>
<source src="./figs/example.video" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
## 🔮 Evaluation Pipeline

### Requirements

- Python 3.x
- moviepy

### Data Preparation

1. **Download the dataset**: Retrieve all necessary files from the [StreamingBench Dataset](https://huggingface.co/datasets/mjuicem/StreamingBench).

2. **Decompress the files**: Extract the downloaded archives and organize them in the `./data` directory as follows:

```
StreamingBench/
├── data/
│   ├── real/       # Unzip Real Time Visual Understanding_*.zip into this folder
│   ├── omni/       # Unzip the other .zip files into this folder
│   ├── sqa/        # Unzip Sequential Question Answering_*.zip into this folder
│   └── proactive/  # Unzip Proactive Output_*.zip into this folder
```

3. **Preprocess the data**: Run the following command:

```bash
bash scripts/preprocess.sh
```
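
The unzip-and-organize step above can also be scripted. The sketch below is a minimal illustration (not part of the official tooling) that routes each downloaded archive into the folder layout shown, assuming the archive names match the patterns given in the tree:

```python
import zipfile
from pathlib import Path

# Map archive-name prefixes to the target folders from the layout above.
# Any archive matching none of the specific patterns goes to data/omni,
# mirroring the "other .zip files" rule.
ROUTES = {
    "Real Time Visual Understanding": "data/real",
    "Sequential Question Answering": "data/sqa",
    "Proactive Output": "data/proactive",
}

def target_dir(zip_name: str) -> str:
    """Return the extraction folder for a downloaded archive."""
    for prefix, folder in ROUTES.items():
        if zip_name.startswith(prefix):
            return folder
    return "data/omni"

def extract_all(download_dir: str = ".") -> None:
    """Extract every .zip in download_dir into its routed folder."""
    for zip_path in Path(download_dir).glob("*.zip"):
        dest = Path(target_dir(zip_path.name))
        dest.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(dest)
```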

### Model Preparation

Prepare your own model for evaluation by following the instructions in the [model guide](./docs/model_guide.md), which walks through setting up and configuring your model so it is ready to be tested against the dataset.
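
As a rough illustration of what such a wrapper typically looks like for a streaming benchmark (the actual interface is defined in the model guide; the class and method names below are hypothetical), the key constraint is that each answer may use only the frames received so far:

```python
class StreamingModelAdapter:
    """Hypothetical adapter wrapping an MLLM for streaming evaluation."""

    def __init__(self, model_name: str):
        self.model_name = model_name
        self.frame_buffer = []  # frames observed so far in the stream

    def ingest_frame(self, frame) -> None:
        """Append a newly arrived frame; future frames are never visible."""
        self.frame_buffer.append(frame)

    def answer(self, question: str) -> str:
        """Answer a query using only the frames received up to this point."""
        raise NotImplementedError("wrap your model's inference call here")
```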

### Evaluation

Now you can run the benchmark:

```sh
bash eval.sh
```

This runs the benchmark and saves the results to the specified output file.
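
Once the run finishes, the saved results can be summarized with a few lines of Python. This sketch assumes a simple row format (task name plus a correctness flag), which may differ from the actual output schema:

```python
from collections import defaultdict

def per_task_accuracy(rows):
    """rows: iterable of (task, is_correct) pairs -> {task: accuracy}."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for task, ok in rows:
        totals[task] += 1
        correct[task] += int(ok)
    return {task: correct[task] / totals[task] for task in totals}
```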

## 🔬 Experimental Results

### Performance of Various MLLMs on StreamingBench

- All context

<div align="center">
<img src="./figs/result_1.png" width="80%" alt="Main results with all context">
</div>

- 60 seconds of context preceding the query time

<div align="center">
<img src="./figs/result_2.png" width="80%" alt="Results with 60 seconds of context">
</div>

- Comparison of the main experiment vs. 60 seconds of video context

<div align="center">
<img src="./figs/heatmap.png" width="80%" alt="Comparison heatmap">
</div>

### Performance of Different MLLMs on the Proactive Output Task

*"≤ x s" means an answer is considered correct if the actual output time is within x seconds of the ground-truth time.*
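
The timing tolerance described above is straightforward to compute; a minimal sketch (function name and inputs are illustrative, not the benchmark's own code):

```python
def proactive_accuracy(pred_times, gt_times, tolerance_s):
    """Fraction of outputs within tolerance_s seconds of the ground-truth time."""
    hits = sum(abs(p - g) <= tolerance_s for p, g in zip(pred_times, gt_times))
    return hits / len(gt_times)
```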

<div align="center">
<img src="./figs/po.png" width="80%" alt="Proactive Output results">
</div>

## 📜 Citation

```bibtex
@article{lin2024streaming,
  title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding},
  author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2411.03628},
  year={2024}
}
```