---
task_categories:
- video-text-to-text
language:
- en
tags:
- video-understanding
- streaming-video
- real-time
- long-video
- vision-language-model
---

# StreamingVLM Datasets

This repository contains the datasets used in the paper [StreamingVLM: Real-Time Understanding for Infinite Video Streams](https://huggingface.co/papers/2510.09608).

StreamingVLM is a model designed for real-time, stable understanding of effectively infinite visual input. It addresses the escalating latency and memory usage of processing long video streams by maintaining a compact KV cache and aligning training with streaming inference. The project also introduces new datasets for both training and evaluation, most notably `Inf-Streams-Eval`, a benchmark whose videos average over two hours and require dense, per-second alignment between frames and text.

- **Paper**: [https://huggingface.co/papers/2510.09608](https://huggingface.co/papers/2510.09608)
- **Code**: [https://github.com/mit-han-lab/streaming-vlm](https://github.com/mit-han-lab/streaming-vlm)
- **Project/Demo Page**: [https://streamingvlm.hanlab.ai](https://streamingvlm.hanlab.ai)
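The compact KV cache mentioned above can be sketched as follows. This is a hedged illustration only, assuming a StreamingLLM-style layout with a few never-evicted "sink" entries plus a bounded window of recent entries; the actual StreamingVLM cache (described in the paper) is more involved, but the point is the same: memory stays constant no matter how long the stream runs.

```python
from collections import deque

class CompactKVCache:
    """Toy bounded KV cache: a few permanent "sink" entries plus a
    fixed-size sliding window of recent entries (illustrative only)."""

    def __init__(self, num_sinks: int = 4, window: int = 1024):
        self.num_sinks = num_sinks
        self.sinks = []                      # first few entries, never evicted
        self.recent = deque(maxlen=window)   # recent entries, oldest evicted first

    def append(self, kv):
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(kv)
        else:
            self.recent.append(kv)           # deque drops the oldest automatically

    def __len__(self):
        return len(self.sinks) + len(self.recent)
```

Appending an unbounded stream of entries leaves the cache size capped at `num_sinks + window`, which is what keeps per-step latency and memory flat.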
|
## Included Datasets

The datasets associated with the StreamingVLM project include:

* **Inf-Stream-Train**: Used for supervised fine-tuning (SFT) of the StreamingVLM model.
* **Live-WhisperX-526K**: An additional dataset used during SFT, referred to as `Livecc_sft` in the project's setup.
* **Inf-Stream-Eval**: A new benchmark for evaluating real-time video understanding, featuring long videos (averaging over two hours) that require dense, per-second alignment between frames and text.
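The "dense, per-second alignment" property can be illustrated with a small, hypothetical record schema. The actual Inf-Stream-Eval file format is defined in the project repository and may differ; the names `AlignedSecond` and `missing_seconds` below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AlignedSecond:
    """Hypothetical per-second record: each second of video carries
    its own frame and transcript span."""
    second: int       # offset (in seconds) from the start of the video
    frame_path: str   # frame sampled at this second
    text: str         # commentary/transcript aligned to this second

def missing_seconds(records: list[AlignedSecond], duration: int) -> list[int]:
    """Seconds in [0, duration) with no aligned record, i.e. gaps in coverage."""
    covered = {r.second for r in records}
    return [s for s in range(duration) if s not in covered]
```

Dense alignment means a well-formed annotation covers every second, so `missing_seconds` returning an empty list is the invariant the benchmark relies on.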
|
## Sample Usage: Dataset Preparation for SFT

To prepare the datasets needed for supervised fine-tuning (SFT), as described in the [GitHub repository](https://github.com/mit-han-lab/streaming-vlm):

First, download `mit-han-lab/Inf-Stream-Train` to `/path/to/your/Inf-Stream-Train`.
Then, download `chenjoya/Live-WhisperX-526K` to `/path/to/your/Inf-Stream-Train/Livecc_sft`.
Preprocess the LiveCC dataset by flattening its files into the top-level directory:
|
```bash
cd $DATASET_PATH/Livecc_sft
# Move every file found under this directory up into the current directory.
find . -type f -exec mv -t . {} +
```
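For environments without GNU `find`/`mv -t` (e.g. macOS or Windows), the same flatten step can be sketched in Python. This is a convenience equivalent, not part of the official setup; like the shell command, it assumes file names do not collide and it leaves the now-empty subdirectories in place.

```python
import os
import shutil

def flatten(root: str) -> None:
    """Move every file found under `root` (recursively) into `root` itself."""
    root = os.path.abspath(root)
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.path.abspath(dirpath) == root:
            continue  # files already at the top level stay where they are
        for name in filenames:
            # Note: silently overwrites on name collision, like `mv` would.
            shutil.move(os.path.join(dirpath, name), os.path.join(root, name))
```

Calling `flatten("/path/to/your/Inf-Stream-Train/Livecc_sft")` mirrors the `find … -exec mv` command above.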

Download `mit-han-lab/Inf-Stream-Eval` to `/path/to/your/Inf-Stream-Eval`.

Finally, set the environment paths:
|
```bash
export DATASET_PATH=/path/to/your/Inf-Stream-Train
export EVAL_DATASET_PATH=/path/to/your/Inf-Stream-Eval
```
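As a small, hypothetical sanity check (not part of the official setup), you can confirm both directories exist before launching training or evaluation. The helper name `check_dataset_paths` is an assumption for illustration.

```python
import os

def check_dataset_paths(train_dir: str, eval_dir: str) -> list[str]:
    """Return whichever of the two paths are not existing directories."""
    return [p for p in (train_dir, eval_dir) if not os.path.isdir(p)]

# Reads the variables exported above, falling back to the placeholder paths.
missing = check_dataset_paths(
    os.environ.get("DATASET_PATH", "/path/to/your/Inf-Stream-Train"),
    os.environ.get("EVAL_DATASET_PATH", "/path/to/your/Inf-Stream-Eval"),
)
if missing:
    print("Missing dataset directories:", ", ".join(missing))
else:
    print("Dataset paths look good.")
```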

## Citation

If you find StreamingVLM useful or relevant to your research, please cite our paper:
|
```bibtex
@misc{xu2025streamingvlmrealtimeunderstandinginfinite,
      title={StreamingVLM: Real-Time Understanding for Infinite Video Streams},
      author={Ruyi Xu and Guangxuan Xiao and Yukang Chen and Liuning He and Kelly Peng and Yao Lu and Song Han},
      year={2025},
      eprint={2510.09608},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.09608},
}
```