arXiv:2603.12262

Video Streaming Thinking: VideoLLMs Can Watch and Think Simultaneously

Published on Mar 12 · Submitted by Guan Yiran on Mar 16
Abstract

Video Streaming Thinking (VST) introduces a novel streaming video understanding paradigm that enables real-time reasoning during video playback through causal streaming adaptation and multi-turn interaction optimization.

AI-generated summary

Online Video Large Language Models (VideoLLMs) play a critical role in supporting responsive, real-time interaction. Existing methods focus on streaming perception but lack a synchronized logical reasoning stream, and directly applying test-time scaling methods incurs unacceptable response latency. To address this trade-off, we propose Video Streaming Thinking (VST), a novel paradigm for streaming video understanding. It supports a "thinking while watching" mechanism, which activates reasoning over incoming video clips during streaming. This design improves timely comprehension and coherent cognition while preserving real-time responsiveness by amortizing LLM reasoning latency over video playback. Furthermore, we introduce a comprehensive post-training pipeline that integrates VST-SFT, which structurally adapts the offline VideoLLM to causal streaming reasoning, and VST-RL, which provides end-to-end improvement through self-exploration in a multi-turn video interaction environment. Additionally, we devise an automated training-data synthesis pipeline that uses video knowledge graphs to generate high-quality streaming QA pairs, with an entity-relation grounded streaming Chain-of-Thought to enforce multi-evidence reasoning and sustained attention to the video stream. Extensive evaluations show that VST-7B performs strongly on online benchmarks, e.g., 79.5% on StreamingBench and 59.3% on OVO-Bench. Meanwhile, VST remains competitive on offline long-form and reasoning benchmarks. Compared with Video-R1, VST responds 15.7 times faster and achieves a +5.4% improvement on VideoHolmes, demonstrating higher efficiency and strong generalization across diverse video understanding tasks. Code, data, and models will be released at https://github.com/1ranGuan/VST.

Community

Paper author

Why do video LLMs lag when answering questions? A key reason is that they often wait until a question is asked before they start reasoning. By then, the model has to look back, retrieve relevant details, and connect events after the fact, which can slow down responses and weaken coherence.

To address this, we introduce Video Streaming Thinking (VST), a new approach that enables models to reason synchronously while video is streaming. Instead of passively watching, VST helps the model organize information and connect clues on the fly. When the prompt arrives, the model is already prepared to answer smoothly, without having to look back.
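The "thinking while watching" loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the released VST implementation: the names (`StreamingThinker`, `watch`, `answer`) and the note-keeping scheme are assumptions, standing in for the model's per-clip reasoning. The point it shows is the amortization: reasoning happens once per clip during playback, so answering a late-arriving question needs no retrospective pass over the video.

```python
# Hypothetical sketch of streaming-time ("thinking while watching") reasoning.
# All class and method names are illustrative assumptions, not the VST API.
from dataclasses import dataclass, field

@dataclass
class StreamingThinker:
    # Running clip-level reasoning state, built up during playback.
    notes: list = field(default_factory=list)

    def watch(self, clip: str) -> None:
        # Stand-in for the model reasoning over one incoming clip.
        # In VST this cost is paid during streaming, not at question time.
        self.notes.append(f"reasoned over {clip}")

    def answer(self, question: str) -> str:
        # The question arrives after (or during) streaming; the model answers
        # from its already-built reasoning state instead of re-reading the video.
        return f"{question} -> grounded in {len(self.notes)} clip-level thoughts"

thinker = StreamingThinker()
for clip in ["clip_0", "clip_1", "clip_2"]:
    thinker.watch(clip)  # reasoning amortized over playback
print(thinker.answer("What happened?"))
```

Contrast with the "wait for the question" baseline the post criticizes: there, all three clips would be reprocessed inside `answer`, putting the full reasoning latency on the response path.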

We observe consistent improvements across models ranging from 3B to 32B, highlighting the promise of streaming-time reasoning for video understanding. We’d love to hear your thoughts and discuss where this direction could go next.

