Add initial dataset card for StreamingVLM datasets
#1 by nielsr (HF Staff) - opened

README.md ADDED
---
task_categories:
- video-text-to-text
language:
- en
tags:
- video-understanding
- streaming-video
- real-time
- long-video
- vision-language-model
---

# StreamingVLM Datasets

This repository contains the datasets used in the paper [StreamingVLM: Real-Time Understanding for Infinite Video Streams](https://huggingface.co/papers/2510.09608).

StreamingVLM is a model designed for real-time, stable understanding of infinite visual input. It addresses challenges such as escalating latency and memory usage when processing long video streams by maintaining a compact KV cache and aligning training with streaming inference. The project includes novel datasets for both training and evaluation, most notably `Inf-Stream-Eval`, a new benchmark with videos averaging over two hours that requires dense, per-second alignment between frames and text.

- **Paper**: [https://huggingface.co/papers/2510.09608](https://huggingface.co/papers/2510.09608)
- **Code**: [https://github.com/mit-han-lab/streaming-vlm](https://github.com/mit-han-lab/streaming-vlm)
- **Project/Demo Page**: [https://streamingvlm.hanlab.ai](https://streamingvlm.hanlab.ai)

## Included Datasets

The StreamingVLM project releases the following datasets:

* **Inf-Stream-Train**: Used for supervised fine-tuning (SFT) of the StreamingVLM model.
* **Live-WhisperX-526K**: An additional dataset used during SFT, referred to as `Livecc_sft` in the project's setup.
* **Inf-Stream-Eval**: A new benchmark for evaluating real-time video understanding, featuring long videos (averaging over two hours) that require dense, per-second alignment between frames and text.
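
To illustrate the kind of per-second frame-to-text alignment the benchmark calls for, here is a minimal, hypothetical Python sketch. The field names (`second`, `caption`) and data shapes are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical sketch: map sampled frame timestamps to per-second captions.
# The "second"/"caption" field names are assumptions, not the real schema.

def align_captions_to_frames(captions, frame_times):
    """Pair each frame timestamp (in seconds) with the caption for that second.

    captions: list of {"second": int, "caption": str}, one entry per second.
    frame_times: list of float timestamps at which frames were sampled.
    """
    by_second = {c["second"]: c["caption"] for c in captions}
    # A frame at t seconds falls in the caption window for second int(t).
    return [(t, by_second.get(int(t), "")) for t in frame_times]

captions = [
    {"second": 0, "caption": "The player dribbles up the court."},
    {"second": 1, "caption": "He passes to the wing."},
]
aligned = align_captions_to_frames(captions, [0.0, 0.5, 1.2])
```

Frames sampled within the same second share that second's caption, which is the dense alignment property the benchmark evaluates.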

## Sample Usage: Dataset Preparation for SFT

To prepare the datasets for supervised fine-tuning (SFT), as described in the [GitHub repository](https://github.com/mit-han-lab/streaming-vlm):

First, download `mit-han-lab/Inf-Stream-Train` to `/path/to/your/Inf-Stream-Train`.
Then, download `chenjoya/Live-WhisperX-526K` to `/path/to/your/Inf-Stream-Train/Livecc_sft`.
Preprocess the LiveCC dataset with the following command:

```bash
cd $DATASET_PATH/Livecc_sft
find . -type f -exec mv -t . {} +
```
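
The `find` command above moves every regular file out of its subdirectories into the current directory, flattening the nested download layout (`-exec mv -t . {} +` passes files in batches to a single `mv`). The same flattening step, sketched in Python purely for illustration:

```python
# Illustrative Python equivalent of `find . -type f -exec mv -t . {} +`:
# move every regular file under `root` directly into `root` itself.
import shutil
from pathlib import Path

def flatten_directory(root):
    root = Path(root)
    # Snapshot the tree first so moves don't disturb the iteration.
    for path in list(root.rglob("*")):
        if path.is_file() and path.parent != root:
            # Note: like `mv`, same-named files from different
            # subdirectories would overwrite each other.
            shutil.move(str(path), str(root / path.name))
```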

Next, download `mit-han-lab/Inf-Stream-Eval` to `/path/to/your/Inf-Stream-Eval`.

Finally, set the environment paths:

```bash
export DATASET_PATH=/path/to/your/Inf-Stream-Train
export EVAL_DATASET_PATH=/path/to/your/Inf-Stream-Eval
```
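
Downstream scripts can then resolve the dataset locations from these environment variables. A minimal sketch (the variable names match the exports above; the fallback defaults are assumptions, not part of the project's setup):

```python
# Resolve dataset locations from the environment variables exported above.
# Fallback defaults here are illustrative assumptions.
import os
from pathlib import Path

def resolve_dataset_paths():
    train_path = Path(os.environ.get("DATASET_PATH", "./Inf-Stream-Train"))
    eval_path = Path(os.environ.get("EVAL_DATASET_PATH", "./Inf-Stream-Eval"))
    return train_path, eval_path
```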

## Citation

If you find StreamingVLM useful in your project or research, please cite our paper:

```bibtex
@misc{xu2025streamingvlmrealtimeunderstandinginfinite,
      title={StreamingVLM: Real-Time Understanding for Infinite Video Streams},
      author={Ruyi Xu and Guangxuan Xiao and Yukang Chen and Liuning He and Kelly Peng and Yao Lu and Song Han},
      year={2025},
      eprint={2510.09608},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.09608},
}
```