Title: Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models
URL Source: https://arxiv.org/html/2603.02872
Jialiang Zhang 1,2, Junlong Tong 1,3, Junyan Lin 1,4, Hao Wu 1,
Yirong Sun 1, Yunpu Ma 5, Xiaoyu Shen 1,6†

1 Institute of Digital Twin, Eastern Institute of Technology, Ningbo; 2 Ocean University of China;
3 Shanghai Jiao Tong University; 4 The Hong Kong Polytechnic University;
5 Munich Center for Machine Learning, LMU Munich;
6 Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative

zhangjia_liang@foxmail.com, xyshen@eitech.edu.cn
###### Abstract
Large Vision-Language Models (LVLMs) have made significant strides in video reasoning, yet most existing systems rely on a batch inference paradigm that processes the entire video before reasoning begins. This “wait-and-see” approach neglects the inherently streaming nature of real-world video, introducing substantial latency and exacerbating temporal drift. In this paper, we propose Think-as-You-See (TaYS), a framework that shifts LVLMs toward a streaming reasoning paradigm, enabling continuous, incremental inference synchronized with the visual stream. We introduce three key innovations: (1) a streaming attention mask to enforce temporal causality; (2) a decoupled positional encoding strategy to resolve cross-modal index conflicts; and (3) a parallel dual KV-cache mechanism that decouples visual encoding from reasoning generation, enabling concurrent frame ingestion and token decoding. Empirical evaluations on the VideoEspresso benchmark using the Qwen2.5-VL family demonstrate that TaYS improves reasoning accuracy by 2.9%, reduces Time-to-First-Token (TTFT) from 10.6s to near-zero, and cuts reasoning-event deviation by 55%. Our results suggest that aligning LVLM reasoning with the streaming nature of video is a vital step toward responsive, real-time multimodal intelligence. The code is available at [this repository.](https://github.com/EIT-NLP/StreamingLLM/tree/main/TaYS)
Figure 1: Conventional LVLM reasoning adheres to the batch thinking paradigm, deferring inference until the entire input is received. This approach often leads to high latency and uneven attention allocation across inputs. In contrast, our proposed streaming thinking paradigm enables LVLMs to reason concurrently with input reception, thereby reducing latency and ensuring consistency between attention and input order.
## 1 Introduction
Large Vision-Language Models (LVLMs) have recently achieved remarkable milestones in multimodal reasoning[[60](https://arxiv.org/html/2603.02872#bib.bib7 "A survey on multimodal large language models"), [30](https://arxiv.org/html/2603.02872#bib.bib12 "A survey of state of the art large vision language models: alignment, benchmark, evaluations and challenges")], as demonstrated by state-of-the-art systems such as GPT-4o[[28](https://arxiv.org/html/2603.02872#bib.bib63 "Gpt-4o system card")], Gemini[[12](https://arxiv.org/html/2603.02872#bib.bib65 "Gemini 2.5: pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities")] and Qwen-VL[[2](https://arxiv.org/html/2603.02872#bib.bib18 "Qwen3-vl technical report")]. Despite these advancements, a pervasive bottleneck remains: the vast majority of LVLM-based video reasoning systems are anchored to a _batch inference_ paradigm where the model requires the full video to be available offline before processing begins[[8](https://arxiv.org/html/2603.02872#bib.bib27 "VideoLLM-online: online video large language model for streaming video"), [42](https://arxiv.org/html/2603.02872#bib.bib26 "Towards universal soccer video understanding")]. Under this “wait-and-see” paradigm, both the information density and the computational complexity scale directly with video length, making accurate and coherent interpretation increasingly difficult[[31](https://arxiv.org/html/2603.02872#bib.bib56 "Video-LLaVA: learning united visual representation by alignment before projection"), [53](https://arxiv.org/html/2603.02872#bib.bib29 "LongVLM: efficient long video understanding via large language models"), [27](https://arxiv.org/html/2603.02872#bib.bib28 "PruneVid: visual token pruning for efficient video large language models"), [55](https://arxiv.org/html/2603.02872#bib.bib17 "From data to model: a survey of the compression lifecycle in mllms"), [15](https://arxiv.org/html/2603.02872#bib.bib14 "What do visual tokens really encode? uncovering sparsity and redundancy in multimodal large language models")].
Current research attempts to mitigate this issue using Chain-of-Thought (CoT) reasoning[[23](https://arxiv.org/html/2603.02872#bib.bib21 "Let’s think frame by frame with VIP: a video infilling and prediction dataset for evaluating video chain-of-thought"), [17](https://arxiv.org/html/2603.02872#bib.bib22 "Video-of-thought: step-by-step video reasoning from perception to cognition"), [62](https://arxiv.org/html/2603.02872#bib.bib24 "ViTCoT: video-text interleaved chain-of-thought for boosting video understanding in large language models"), [65](https://arxiv.org/html/2603.02872#bib.bib31 "Ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models"), [19](https://arxiv.org/html/2603.02872#bib.bib19 "Chain-of-frames: advancing video understanding in multimodal llms via frame-aware reasoning")] paired with auxiliary modules for explicit frame referencing[[1](https://arxiv.org/html/2603.02872#bib.bib25 "Temporal chain of thought: long-video understanding by thinking in frames"), [21](https://arxiv.org/html/2603.02872#bib.bib23 "VideoEspresso: a large-scale chain-of-thought dataset for fine-grained video reasoning via core frame selection"), [18](https://arxiv.org/html/2603.02872#bib.bib32 "FameMind: frame-interleaved video reasoning via reinforcement learning"), [63](https://arxiv.org/html/2603.02872#bib.bib74 "Multimodal chain-of-thought reasoning in language models")]. By grounding predictions in specific keyframes and reasoning traces, these methods enhance both interpretability and accuracy. However, they are still restricted to the same _batch_ inference paradigm. As the temporal window of the input video expands, the delay between a visual event and the model’s corresponding reasoning step grows proportionally[[26](https://arxiv.org/html/2603.02872#bib.bib55 "Revisiting multimodal positional encoding in vision-language models"), [36](https://arxiv.org/html/2603.02872#bib.bib35 "When thinking drifts: evidential grounding for robust video reasoning")]. This latency accumulation often leads to “temporal drift”, where the model loses track of early cues, resulting in significant hallucinations and a loss of contextual coherence[[52](https://arxiv.org/html/2603.02872#bib.bib33 "Videohallucer: evaluating intrinsic and extrinsic hallucinations in large video-language models"), [61](https://arxiv.org/html/2603.02872#bib.bib34 "Eventhallusion: diagnosing event hallucinations in video llms"), [10](https://arxiv.org/html/2603.02872#bib.bib73 "Towards reasoning era: a survey of long chain-of-thought for reasoning large language models")].
This batch-processing assumption is increasingly at odds with the demands of the real world. In domains such as robotics teleoperation, autonomous driving, and live surveillance, video is not a static file but an _evolving stream_[[50](https://arxiv.org/html/2603.02872#bib.bib4 "From static inference to dynamic interaction: navigating the landscape of streaming large language models")]. Human cognition naturally does not wait for a sequence to end before processing; rather, we update our mental models incrementally as new evidence unfolds[[20](https://arxiv.org/html/2603.02872#bib.bib9 "Constructing inferences during narrative text comprehension."), [45](https://arxiv.org/html/2603.02872#bib.bib3 "Human reasoning and cognitive science")]. Bridging this gap requires a paradigm shift: models must transition from post-hoc analysis to active, concurrent understanding[[48](https://arxiv.org/html/2603.02872#bib.bib68 "StreamingThinker: large language models can think while reading")].
Motivated by the inherently streaming nature of video, we propose Think-as-You-See (TaYS), a unified framework that equips LVLMs with streaming video CoT capabilities. In this framework, reasoning is not a terminal step but a continuous process that evolves in tandem with the visual stream. This approach ensures that inference trajectories are progressively refined, minimizing cognitive lag and keeping reasoning synchronized with the most relevant visual context.
A naive implementation that supports this framework is interleaved streaming, where the model alternately processes a video segment and generates a corresponding reasoning trace[[50](https://arxiv.org/html/2603.02872#bib.bib4 "From static inference to dynamic interaction: navigating the landscape of streaming large language models")]. This implementation, however, is fundamentally limited by its sequential nature: the “blocking” mechanism forces the model to pause visual ingestion until token generation is complete, creating a computational bottleneck that contradicts the fluid nature of live video[[49](https://arxiv.org/html/2603.02872#bib.bib79 "Llm as effective streaming processor: bridging streaming-batch mismatches with group position encoding"), [32](https://arxiv.org/html/2603.02872#bib.bib80 "Speak while watching: unleashing true real-time video understanding capability of multimodal large language models")]. To overcome this, TaYS harmonizes stream-aligned training with true parallel inference via three key innovations: _(1) a streaming attention mask_ to enforce temporal causality, _(2) a decoupled positional encoding strategy_ that independently indexes visual and reasoning tokens to avoid cross-modal index conflicts, and _(3) a parallel dual KV-cache mechanism_ that decouples visual encoding from reasoning generation, enabling concurrent frame ingestion and token decoding.
We instantiate TaYS on the Qwen2.5-VL family[[3](https://arxiv.org/html/2603.02872#bib.bib16 "Qwen2.5-vl technical report")] and evaluate its efficacy across tasks requiring complex event dynamics and causal reasoning. On the extended VideoEspresso[[21](https://arxiv.org/html/2603.02872#bib.bib23 "VideoEspresso: a large-scale chain-of-thought dataset for fine-grained video reasoning via core frame selection")] benchmark, TaYS improves reasoning accuracy by _+2.9%_ over batch CoT baselines and achieves a _43.7% win rate_ in human-aligned GPT-5 evaluations. Critically, TaYS reduces the Time-to-First-Token (TTFT) from _10.6s_ in batch mode to nearly _zero_, while improving temporal grounding by reducing reasoning-event deviation from _1.52s_ to _0.69s_. These results demonstrate that aligning LVLM reasoning with the streaming nature of video is not only biologically intuitive but also a practical necessity for the next generation of real-time AI applications.
#### Contributions.

Our contributions are as follows:

*   We introduce a principled streaming reasoning paradigm for LVLMs, enabling incremental, temporally grounded inference aligned with unfolding visual evidence.
*   We design a cohesive training and inference architecture that operationalizes streaming reasoning, combining causal masking, decoupled positional encoding, and a parallel dual-cache mechanism.
*   We conduct comprehensive empirical evaluations on streaming video reasoning tasks, demonstrating improved reasoning quality and significantly enhanced responsiveness compared to batch and interleaved baselines.
## 2 Related Work
#### Multimodal Chain-of-Thought Reasoning.
Multimodal reasoning enables LVLMs to integrate visual and textual information for complex decision making. Existing approaches generally fall into two paradigms. The first, _text-centric reasoning_, converts visual inputs into captions or symbolic descriptions, enabling subsequent linguistic inference[[25](https://arxiv.org/html/2603.02872#bib.bib30 "Promptcap: prompt-guided task-aware image captioning"), [65](https://arxiv.org/html/2603.02872#bib.bib31 "Ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models"), [51](https://arxiv.org/html/2603.02872#bib.bib20 "VideoCoT: a video chain-of-thought dataset with active annotation tool"), [19](https://arxiv.org/html/2603.02872#bib.bib19 "Chain-of-frames: advancing video understanding in multimodal llms via frame-aware reasoning"), [24](https://arxiv.org/html/2603.02872#bib.bib40 "StreamingCoT: a dataset for temporal dynamics and multimodal chain-of-thought reasoning in streaming videoqa"), [63](https://arxiv.org/html/2603.02872#bib.bib74 "Multimodal chain-of-thought reasoning in language models")]. While effective for interpretability, this pipeline assumes full input availability before reasoning, leading to high latency and weak temporal grounding[[36](https://arxiv.org/html/2603.02872#bib.bib35 "When thinking drifts: evidential grounding for robust video reasoning")].
The second paradigm, _interleaved multimodal reasoning_, alternates visual and textual tokens to promote more structured cross-modal interaction[[23](https://arxiv.org/html/2603.02872#bib.bib21 "Let’s think frame by frame with VIP: a video infilling and prediction dataset for evaluating video chain-of-thought"), [21](https://arxiv.org/html/2603.02872#bib.bib23 "VideoEspresso: a large-scale chain-of-thought dataset for fine-grained video reasoning via core frame selection"), [62](https://arxiv.org/html/2603.02872#bib.bib24 "ViTCoT: video-text interleaved chain-of-thought for boosting video understanding in large language models"), [1](https://arxiv.org/html/2603.02872#bib.bib25 "Temporal chain of thought: long-video understanding by thinking in frames"), [18](https://arxiv.org/html/2603.02872#bib.bib32 "FameMind: frame-interleaved video reasoning via reinforcement learning"), [17](https://arxiv.org/html/2603.02872#bib.bib22 "Video-of-thought: step-by-step video reasoning from perception to cognition"), [47](https://arxiv.org/html/2603.02872#bib.bib75 "Thinking with images for multimodal reasoning: foundations, methods, and future frontiers"), [11](https://arxiv.org/html/2603.02872#bib.bib76 "Comt: a novel benchmark for chain of multi-modal thought on large vision-language models")]. Although this improves transparency and causal interpretability, it typically relies on sequential processing and explicit intermediate generation, which increases inference latency and computational overhead.
Recent works also explore efficiency-oriented designs, such as adaptive reasoning depth[[35](https://arxiv.org/html/2603.02872#bib.bib36 "Prolonged reasoning is not all you need: certainty-based adaptive routing for efficient llm/mllm reasoning"), [16](https://arxiv.org/html/2603.02872#bib.bib10 "VisiPruner: decoding discontinuous cross-modal dynamics for efficient multimodal LLMs"), [14](https://arxiv.org/html/2603.02872#bib.bib5 "From llms to lrms: rethinking pruning for reasoning-centric models"), [54](https://arxiv.org/html/2603.02872#bib.bib13 "HiDrop: hierarchical vision token reduction in mllms via late injection, concave pyramid pruning, and early exit"), [34](https://arxiv.org/html/2603.02872#bib.bib8 "ViCA: efficient multimodal llms with vision-only cross-attention")] and compact CoT tokens[[56](https://arxiv.org/html/2603.02872#bib.bib37 "Can atomic step decomposition enhance the self-structured reasoning of multimodal large models?"), [38](https://arxiv.org/html/2603.02872#bib.bib38 "Skywork r1v: pioneering multimodal reasoning with chain-of-thought"), [44](https://arxiv.org/html/2603.02872#bib.bib39 "Efficient reasoning with hidden thinking"), [64](https://arxiv.org/html/2603.02872#bib.bib6 "On-policy supervised fine-tuning for efficient reasoning")]. However, these studies primarily optimize computation under offline settings and do not explicitly address temporally grounded, low-latency reasoning over streaming inputs.
#### Streaming and Memory-Based Video Understanding.
The demand for real-time multimodal systems has stimulated research on _streaming video understanding_, where models process frames incrementally instead of in batch mode[[7](https://arxiv.org/html/2603.02872#bib.bib41 "Videollm-online: online video large language model for streaming video"), [41](https://arxiv.org/html/2603.02872#bib.bib43 "Streaming long video understanding with large language models"), [48](https://arxiv.org/html/2603.02872#bib.bib68 "StreamingThinker: large language models can think while reading"), [32](https://arxiv.org/html/2603.02872#bib.bib80 "Speak while watching: unleashing true real-time video understanding capability of multimodal large language models")]. Representative efforts focus on streaming captioning, multi-round QA, and conversational agents[[9](https://arxiv.org/html/2603.02872#bib.bib42 "Livecc: learning video llm with streaming speech transcription at scale"), [39](https://arxiv.org/html/2603.02872#bib.bib44 "Dispider: enabling video llms with active real-time interaction via disentangled perception, decision, and reaction"), [57](https://arxiv.org/html/2603.02872#bib.bib45 "Streaming video understanding and multi-round interaction with memory-enhanced knowledge"), [13](https://arxiv.org/html/2603.02872#bib.bib46 "Streaming video question-answering with in-context video kv-cache retrieval"), [5](https://arxiv.org/html/2603.02872#bib.bib48 "Streaming videollms for real-time procedural video understanding"), [58](https://arxiv.org/html/2603.02872#bib.bib47 "StreamingVLM: real-time understanding for infinite video streams"), [33](https://arxiv.org/html/2603.02872#bib.bib70 "Streamchat: chatting with streaming video"), [59](https://arxiv.org/html/2603.02872#bib.bib71 "Streamagent: towards anticipatory agents for streaming video understanding")]. While these approaches improve temporal consistency and enable online interaction, they often emphasize description or response continuity rather than explicit, stepwise reasoning aligned with evolving visual evidence.
Another line of work leverages memory mechanisms or temporal compression to maintain long-context representations efficiently[[22](https://arxiv.org/html/2603.02872#bib.bib81 "MA-lmm: memory-augmented large multimodal model for long-term video understanding"), [4](https://arxiv.org/html/2603.02872#bib.bib82 "Memory consolidation enables long-context video understanding"), [43](https://arxiv.org/html/2603.02872#bib.bib83 "Longvu: spatiotemporal adaptive compression for long video-language understanding")]. By aggregating or consolidating historical features, these methods reduce computational cost but may sacrifice fine-grained temporal alignment and incremental interpretability[[29](https://arxiv.org/html/2603.02872#bib.bib84 "Video token merging for long video understanding"), [40](https://arxiv.org/html/2603.02872#bib.bib85 "Streaming long video understanding with large language models")]. In contrast, our formulation does not compress or abstract away temporal structure; instead, it explicitly synchronizes reasoning generation with frame-level updates through causal masking, decoupled positional encoding, and parallel cache management.
Overall, existing works either assume offline reasoning or prioritize temporal summarization over progressive inference. Our TaYS framework complements these directions by focusing on _true streaming reasoning_, where perception and reasoning evolve concurrently under strict temporal causality, enabling low-latency and temporally grounded video understanding.
## 3 Methodology
This section presents TaYS, a supervised fine-tuning framework that integrates streaming video CoT generation with streaming training and inference mechanisms. Its objective is to adapt batch-oriented Large Vision-Language Models to the streaming thinking paradigm.
### 3.1 Task Definition and Preliminaries
Streaming Video CoT demands that a model continuously process a video stream, performing temporal reasoning on queries regarding previously observed visual content at arbitrary time steps. In this section, we formalize this task and highlight its fundamental distinctions from the conventional offline paradigm.
#### Streaming Video CoT vs. Offline Video CoT.
Formally, let a video stream be represented as a sequence of visual frames $\mathcal{V}=\{F_{t}\mid 1\leq t\leq T\}$, and let $C_{<t}$ denote the accumulated multimodal context prior to time $t$ (e.g., historical textual or visual reasoning states).
Offline Video CoT. In the offline setting, the model assumes global access to all frames in $\mathcal{V}$ before generating any reasoning tokens. At the final time step $t=T$, the reasoning process is formulated as:
$$
\begin{aligned}
h_{i} &= \mathrm{Decoder}\big(y_{<i};\, \mathrm{Enc}(\mathcal{V})\big), \\
\hat{y}_{i} &\sim P_{\theta}(y_{i}\mid \mathcal{V},\, y_{<i}),
\end{aligned}
\qquad (1)
$$
where $\mathrm{Enc}(\mathcal{V})$ encodes the complete frame sequence $\{F_{1},\ldots,F_{T}\}$, and $y_{i}$ denotes the $i$-th reasoning token. Consequently, Offline Video CoT optimizes the joint probability over the entire sequence:
$$
\max_{\theta}\; P_{\theta}(Y\mid\mathcal{V}) = \prod_{i=1}^{N} P_{\theta}(y_{i}\mid\mathcal{V},\, y_{<i}), \qquad (2)
$$
which necessitates full video observation prior to the onset of generation.
Streaming Video CoT. Conversely, Streaming Video CoT performs incremental reasoning as frames arrive. At any time step $t$, only the partial frame sequence $\mathcal{V}_{\leq t}=\{F_{1},\ldots,F_{t}\}$ is observable. The model generates reasoning tokens conditioned on this partial visual context and the prior reasoning states:
$$
\begin{aligned}
h_{i}^{t} &= \mathrm{Decoder}\big(y_{<i}^{t};\, \mathrm{Enc}(\mathcal{V}_{\leq t}),\, C_{<t}\big), \\
\hat{y}_{i}^{t} &\sim P_{\theta}(y_{i}^{t}\mid \mathcal{V}_{\leq t},\, y_{<i}^{t},\, C_{<t}).
\end{aligned}
\qquad (3)
$$
In contrast to Eq. [1](https://arxiv.org/html/2603.02872#S3.E1), the model is prohibited from accessing unseen future frames $\{F_{t+1},\ldots,F_{T}\}$, enforcing a strict causal constraint on both visual and linguistic modalities. This paradigm optimizes the cumulative probability up to time $t$:
$$
\max_{\theta}\; P_{\theta}(Y_{\leq t}\mid\mathcal{V}_{\leq t}) = \prod_{i=1}^{N_{t}} P_{\theta}(y_{i}^{t}\mid \mathcal{V}_{\leq t},\, y_{<i}^{t},\, C_{<t}), \qquad (4)
$$
where $N_{t}$ denotes the number of reasoning tokens generated up to time $t$.
Architecturally, Streaming Video CoT updates its reasoning states concurrently with incoming frames, whereas Offline Video CoT encodes the entire video before reasoning commences. Notably, Offline Video CoT can be viewed as a degenerate case of Streaming Video CoT, wherein all reasoning is deferred until the video stream terminates.
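To make the streaming formulation concrete, the following is a minimal sketch of the per-frame inference loop implied by Eqs. (3)-(4). The helper names `encode_frame` and `generate_segment` are illustrative assumptions, not the authors' actual API.

```python
from typing import Iterable, List

def streaming_video_cot(frames: Iterable, model, max_segment_tokens: int = 128) -> List[str]:
    """Sketch of streaming CoT: reason over V_<=t as each frame arrives.

    `model` is assumed to expose `encode_frame` (visual encoding) and
    `generate_segment` (autoregressive decoding conditioned on the visual
    context observed so far plus prior reasoning states C_<t).
    """
    visual_context = []      # Enc(V_<=t), grows one frame at a time
    reasoning_states = []    # C_<t, the reasoning segments emitted so far

    for t, frame in enumerate(frames, start=1):
        # Only frames F_1..F_t are visible; future frames are never touched.
        visual_context.append(model.encode_frame(frame))

        # Generate the reasoning segment R_t conditioned on V_<=t and C_<t.
        segment = model.generate_segment(
            visual_context=visual_context,
            prior_reasoning=reasoning_states,
            max_new_tokens=max_segment_tokens,
        )
        reasoning_states.append(segment)

    return reasoning_states
```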
#### Design Principles.
To facilitate real-time reasoning, Streaming Video CoT leverages the causal structure of LLM decoders to balance efficiency and accuracy while minimizing redundant computation. During streaming, KV-Caches are incrementally stored and reused as contextual memory, enabling state updates without re-encoding historical frames. A causal attention mask restricts token access to future information, ensuring that each video token attends exclusively to past visual inputs and prior reasoning states. This architecture effectively disentangles temporal visual processing from linguistic reasoning, achieving efficient and temporally consistent inference across dynamic video streams.
Figure 2: Overview of the two-step process for generating Streaming Video CoT. Step 1: adjust the frame ID while maintaining frame-caption alignment. Step 2: generate a progressive frame-aware trajectory using the original annotations.
### 3.2 Streaming Video CoT Generation
To enable temporally grounded incremental reasoning, we construct a streaming-style Video CoT dataset that departs from conventional batch reasoning trajectories, which assume full-video access and overlook progressive reasoning behavior. Our construction is based on the training split of VideoEspresso, which contains temporally coherent videos annotated with keyframe-level descriptions capturing causal and logical transitions. These keyframes serve as semantic anchors for extracting frame-aligned reasoning trajectories under streaming constraints. The overall pipeline is illustrated in Figure[2](https://arxiv.org/html/2603.02872#S3.F2 "Figure 2 ‣ Design Principles. ‣ 3.1 Task Definition and Preliminaries ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), with additional details provided in Appendix[A](https://arxiv.org/html/2603.02872#S1a "A Details of Streaming CoT Pipeline ‣ 5 Conclusion ‣ Temporal Coherence of Reasoning. ‣ 4.4 Temporal Behavior of Streaming Reasoning ‣ 4 Experiments ‣ Parallel KV Cache. ‣ Attention Pathways. ‣ 3.4 Parallel Streaming Paradigm ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
#### Frame ID Alignment.
To ensure strict temporal alignment between visual inputs and reasoning units, we adopt timestamp-based resampling instead of uniform frame sampling. All videos are resampled to 2 FPS. For each target sampling timestamp $\tau^{\prime}_{t^{\prime}}=0.5(t^{\prime}-1)$ seconds, the selected frame $F_{t^{\prime}}$ is defined as:
$$
F_{t^{\prime}} =
\begin{cases}
F_{k}, & \text{if } \tau^{\prime}_{t^{\prime}} \in [\tau_{k}^{\text{start}},\, \tau_{k}^{\text{end}}] \text{ and } F_{k} \text{ is a keyframe},\\[4pt]
\arg\min_{F_{t}} \big|\tau_{t} - 0.5(t^{\prime}-1)\big|, & \text{otherwise},
\end{cases}
\qquad (5)
$$
where $\{\tau_{t}\}_{t=1}^{T}$ denote the original frame timestamps. This strategy preserves annotated moments while maintaining temporal regularity. After resampling, frame indices are re-normalized and clips are truncated to the model’s maximum input length, ensuring consistency among visual frames, timestamps, and textual annotations.
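A minimal sketch of this timestamp-based resampling rule (Eq. 5) is shown below; the data layout assumed for `timestamps` and `keyframe_spans` is for illustration only.

```python
def resample_frames(timestamps, keyframe_spans, fps=2.0, num_targets=None):
    """Timestamp-based resampling sketch: prefer annotated keyframes.

    `timestamps`     : list of original frame timestamps tau_t (seconds).
    `keyframe_spans` : list of (frame_index, t_start, t_end) for keyframes.
    Returns one selected frame index per target timestamp
    tau'_{t'} = (t' - 1) / fps, i.e. 0.5 * (t' - 1) at 2 FPS.
    """
    if num_targets is None:
        num_targets = int(timestamps[-1] * fps) + 1

    selected = []
    for t_prime in range(1, num_targets + 1):
        target = (t_prime - 1) / fps
        # Case 1: the target time falls inside an annotated keyframe span.
        hit = next((k for k, start, end in keyframe_spans if start <= target <= end), None)
        if hit is not None:
            selected.append(hit)
            continue
        # Case 2: otherwise take the temporally nearest frame.
        selected.append(min(range(len(timestamps)), key=lambda i: abs(timestamps[i] - target)))
    return selected
```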
#### Structured Trajectory Construction.
Each aligned keyframe $F_{t}$ is associated with a reasoning sentence $R_{t}$ and visual evidence $E_{t}$. To construct structured reasoning trajectories, we prompt GPT-4o[[28](https://arxiv.org/html/2603.02872#bib.bib63 "Gpt-4o system card")] to generate triplets $(Q_{t},R_{t},A_{t})$ representing the temporally grounded question, reasoning step, and answer derived from the annotated content. This enforces frame-level incremental reasoning and yields temporally segmented reasoning units across the video.
#### Quality Control.
To ensure semantic coherence and temporal consistency, we compute an alignment score between each question and its corresponding reasoning sentence:
$$
\mathrm{consistency}(Q_{t}, R_{t}) = \frac{v_{Q}\cdot v_{R}}{\|v_{Q}\|\,\|v_{R}\|}, \qquad (6)
$$
where $v_{Q}$ and $v_{R}$ are embedding vectors obtained from the BGE-M3 model[[6](https://arxiv.org/html/2603.02872#bib.bib51 "M3-embedding: multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation")]. Samples with low semantic alignment or temporal inconsistency are discarded. The remaining instances form high-quality streaming reasoning trajectories.
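The following sketch illustrates the consistency filter of Eq. (6); the `embed` function stands in for a sentence encoder such as BGE-M3, and the threshold value is illustrative rather than the paper's exact setting.

```python
import numpy as np

def consistency(v_q: np.ndarray, v_r: np.ndarray) -> float:
    """Cosine similarity between question and reasoning embeddings (Eq. 6)."""
    return float(np.dot(v_q, v_r) / (np.linalg.norm(v_q) * np.linalg.norm(v_r)))

def filter_trajectory(pairs, embed, threshold=0.5):
    """Keep only (Q_t, R_t) pairs whose semantic alignment exceeds a threshold.

    `embed` maps a string to a vector (e.g., a BGE-M3 encoder); `threshold`
    is an illustrative value, not the paper's exact setting.
    """
    kept = []
    for q, r in pairs:
        if consistency(embed(q), embed(r)) >= threshold:
            kept.append((q, r))
    return kept
```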
Finally, sentence-level boundary tokens `<EOT>` are inserted to delimit minimal reasoning units, encouraging the model to generate causally ordered and frame-consistent outputs conditioned only on preceding visual observations.
Figure 3: Overview of the streaming reasoning framework. (a) Parallel video and reasoning KV caches enable concurrent visual encoding and reasoning generation via dynamic merge and split operations. (b) The streaming attention mask enforces causal alignment between frames and reasoning steps. (c) During inference, parallel information flow reduces attention path length and alleviates sequential blocking compared with interleaved paradigms.
### 3.3 Naive Streaming Paradigm
A straightforward way to emulate streaming behavior is to interleave video and reasoning tokens during training. Concretely, each frame $F_{t}$ is immediately followed by its associated reasoning segment $R_{t}$, forming an alternating sequence $\{F_{1},R_{1},F_{2},R_{2},\dots,F_{T},R_{T}\}$. All visual and textual embeddings are concatenated into a single causal token stream and processed autoregressively.
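A minimal sketch of how such an interleaved training sequence could be assembled is shown below; token containers are simplified to Python lists purely for illustration.

```python
def build_interleaved_sequence(frame_tokens, reasoning_tokens):
    """Naive interleaving sketch: {F_1, R_1, F_2, R_2, ..., F_T, R_T}.

    `frame_tokens[t]` holds the visual tokens of frame F_t and
    `reasoning_tokens[t]` the text tokens of segment R_t; everything is
    flattened into a single causal stream processed autoregressively.
    """
    sequence = []
    for f_t, r_t in zip(frame_tokens, reasoning_tokens):
        sequence.extend(f_t)   # visual tokens must be encoded first ...
        sequence.extend(r_t)   # ... before the next frame can be appended
    return sequence
```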
This strict interleaving imposes a serialized dependency between perception and reasoning. Since all tokens share a single causal attention space, new visual tokens cannot be encoded until the preceding reasoning tokens are generated, and reasoning cannot proceed until visual tokens are appended. Such coupling creates a computational bottleneck and prevents concurrent updates across modalities.
Although this design superficially resembles a “thinking-while-watching” process, it tightly entangles perception and reasoning in a way that deviates from the pretraining distribution of LVLMs, where visual encoding and textual decoding are typically factorized. As illustrated in Figure[3](https://arxiv.org/html/2603.02872#S3.F3 "Figure 3 ‣ Quality Control. ‣ 3.2 Streaming Video CoT Generation ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models")(c), this paradigm therefore suffers from reduced efficiency and limited scalability in long streaming scenarios.
### 3.4 Parallel Streaming Paradigm
To overcome the intrinsic serialization bottleneck of naive interleaving strategies, we introduce a parallel streaming paradigm termed _Think-as-You-See (TaYS)_. Unlike conventional approaches that treat reasoning as a post-hoc process dependent on complete visual encoding, TaYS decouples perception from reasoning while strictly preserving temporal causality. This architecture enables concurrent execution of visual ingestion and cognitive inference, bridging the gap between streaming perception and real-time reasoning.
#### Streaming Attention Mask.
In streaming scenarios, maintaining strict temporal causality is paramount: a reasoning step at time $t$ must attend only to visual evidence accumulated up to $t$, remaining agnostic to future frames. Standard batch attention mechanisms, which globally expose all visual tokens, violate this causal constraint and are unsuitable for streaming inference.
To address this, we design a streaming-aware attention mask that enforces fine-grained visibility constraints. Consider a visual sequence of length $N_{v}$ and a reasoning sequence of length $N_{r}$. For a query token at position $i$ and a key token at position $j$, the masked attention matrix $\widetilde{M}(i,j)$ is formulated as:
$$
\widetilde{M}(i,j) =
\begin{cases}
-\infty, & i > N_{v},\; j < N_{v},\; j > i - N_{v},\\[4pt]
M_{\text{causal}}(i,j), & \text{otherwise},
\end{cases}
$$
where $M_{\text{causal}}$ represents the standard autoregressive mask. The condition $j>i-N_{v}$ effectively creates a sliding window over the visual tokens relative to the current reasoning step. This construction ensures that each reasoning token only integrates information from the current temporal window, preventing information leakage from future frames and ensuring the generated reasoning remains grounded in observed reality.
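The following PyTorch sketch builds an additive mask with this structure, assuming for simplicity one visual token per time step; it illustrates the constraint rather than reproducing the authors' implementation.

```python
import torch

def streaming_attention_mask(n_visual: int, n_reasoning: int) -> torch.Tensor:
    """Sketch of the streaming attention mask.

    Returns an additive mask of shape (N, N) with N = n_visual + n_reasoning,
    where 0 means "may attend" and -inf means "blocked".  Reasoning token
    number k (query position i = n_visual + k) keeps the usual causal mask
    but is additionally blocked from visual keys with index j > i - n_visual,
    i.e. it only sees the visual evidence available up to its own step.
    """
    n = n_visual + n_reasoning
    mask = torch.zeros(n, n)

    # Standard causal (autoregressive) mask: no attention to future positions.
    future = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
    mask[future] = float("-inf")

    # Streaming constraint: for reasoning queries, hide not-yet-seen visual keys.
    for i in range(n_visual, n):
        for j in range(n_visual):
            if j > i - n_visual:
                mask[i, j] = float("-inf")
    return mask

# Example: 4 visual tokens followed by 3 reasoning tokens.
print(streaming_attention_mask(4, 3))
```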
#### Streaming Positional Encoding.
While masking enforces logical visibility, positional encoding must resolve index conflicts arising from the concurrent growth of visual and reasoning streams. Modern Large Vision-Language Models (LVLMs) typically employ Rotary Position Embeddings (RoPE)[[46](https://arxiv.org/html/2603.02872#bib.bib52 "Roformer: enhanced transformer with rotary position embedding")], where relative positional information is encoded via rotation matrices. Under standard monolithic indexing, the attention interaction between reasoning token $r_{t}$ and visual token $v_{s}$ is computed as:
$$
(\mathcal{R}_{N_{v}+t}\,\mathbf{q}_{r_{t}})^{\top}(\mathcal{R}_{s}\,\mathbf{k}_{v_{s}}) = \mathbf{q}_{r_{t}}^{\top}\,\mathcal{R}_{(N_{v}+t)-s}^{\top}\,\mathbf{k}_{v_{s}}. \qquad (7)
$$
In this setup, the reasoning position is offset by the total visual length $N_{v}$. However, in a streaming context where $N_{v}$ expands continuously, this indexing introduces dynamic shifts in relative positions, potentially destabilizing the model’s temporal perception. To eliminate this interference, we propose a modality-decoupled positional indexing scheme:
$$
\mathrm{pos}(v_{s}) = s, \qquad \mathrm{pos}(r_{t}) = t.
$$
This assigns independent positional axes for vision and reasoning. The resulting attention mechanism becomes:
$$
(\mathcal{R}_{t}\,\mathbf{q}_{r_{t}})^{\top}(\mathcal{R}_{s}\,\mathbf{k}_{v_{s}}) = \mathbf{q}_{r_{t}}^{\top}\,\mathcal{R}_{t-s}^{\top}\,\mathbf{k}_{v_{s}}. \qquad (8)
$$
By isolating the positional spaces, this decoupling prevents index collision and ensures that the relative temporal distance $t-s$ remains semantically consistent, preserving stable alignment between reasoning updates and visual observations regardless of the growing sequence length.
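A minimal sketch of the decoupled indexing, contrasted with monolithic indexing, could look as follows (illustrative only):

```python
import torch

def decoupled_position_ids(n_visual: int, n_reasoning: int):
    """Sketch of modality-decoupled RoPE indexing (cf. Eq. 8).

    Under monolithic indexing the reasoning token r_t would receive index
    N_v + t, which shifts whenever new frames arrive.  Here vision and
    reasoning each get their own positional axis, so the relative distance
    t - s between a reasoning query and a visual key stays stable as the
    visual stream grows.
    """
    visual_pos = torch.arange(n_visual)                              # pos(v_s) = s
    reasoning_pos = torch.arange(n_reasoning)                        # pos(r_t) = t
    monolithic_reasoning_pos = n_visual + torch.arange(n_reasoning)  # baseline
    return visual_pos, reasoning_pos, monolithic_reasoning_pos

vis, rea, mono = decoupled_position_ids(n_visual=6, n_reasoning=3)
print(vis.tolist(), rea.tolist(), mono.tolist())
```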
#### Attention Pathways.
The architectural choices in different paradigms fundamentally reshape the information flow. Batch reasoning necessitates encoding the entire video prior to decoding, resulting in a long sequential attention path and high initial latency. Interleaved reasoning alternates between frame input and text generation but relies on a monolithic cache, creating a sequential dependency that forces the reasoning process to stall during visual encoding. In contrast, TaYS restructures the dataflow by separating modality-specific memory pathways while enabling dynamic fusion during decoding. This design substantially shortens the effective attention path, allowing the model to initiate reasoning immediately upon receiving the first frame without waiting for subsequent visual inputs (Figure[3](https://arxiv.org/html/2603.02872#S3.F3 "Figure 3 ‣ Quality Control. ‣ 3.2 Streaming Video CoT Generation ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models")(c)).
Table 1: Comparison of reasoning accuracy on the extended VideoEspresso benchmark. TaYS consistently achieves competitive or superior performance while maintaining low latency, demonstrating the effectiveness of the streaming reasoning paradigm. In the table, bold numbers denote the best results, and underlined numbers indicate the second-best results for each task category.
#### Parallel KV Cache.
The core enabler of TaYS’s concurrency is a dual-cache system that manages visual and textual states independently. We maintain two modality-specific caches: a read-heavy video cache $\mathcal{C}_{v}$ and a dynamic text cache $\mathcal{C}_{r}$.
At time step $t$, the incoming frame $F_{t}$ is processed by the visual encoder and incrementally appended to the video cache:
$$
\mathcal{C}_{v}^{(t)} = \mathcal{C}_{v}^{(t-1)} \cup \mathrm{Enc}(F_{t}).
$$
Crucially, this update is non-blocking and occurs asynchronously with respect to the reasoning process.
During the decoding phase, attention is computed over a logical concatenation of the current video cache $\mathcal{C}_{v}^{(t)}$ and the historical text cache $\mathcal{C}_{r}^{(t-1)}$. We implement this _merge_ operation via pointer-level composition rather than physical tensor concatenation, achieving zero-copy overhead. Once the reasoning segment $R_{t}$ is generated, only the text cache is updated:
$$
\mathcal{C}_{r}^{(t)} = \mathcal{C}_{r}^{(t-1)} \cup \mathrm{Dec}(R_{t}),
$$
while the video cache remains immutable during this step. The subsequent _split_ operation restores the modality-specific cache views, preparing the system for the next cycle.
This architecture establishes a recursive _merge–generate–split_ loop. While $\mathcal{C}_{r}$ is engaged in autoregressive token generation, newly arrived frames are independently absorbed into $\mathcal{C}_{v}$. Consequently, the reasoning process is never stalled by visual encoding. Compared to the monolithic cache design in batch or interleaved paradigms, TaYS’s decoupled cache architecture minimizes critical path latency and enables true parallel streaming, realizing a system where perception and reasoning evolve simultaneously.
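The following sketch outlines one possible shape of this merge-generate-split loop; `encode_frame_kv` and `decode_segment` are hypothetical helpers used only for illustration.

```python
class DualCacheStreamer:
    """Sketch of the parallel dual KV-cache loop (merge -> generate -> split).

    `model` is assumed to expose `encode_frame_kv` (prefill of visual KV
    states) and `decode_segment` (autoregressive decoding over a merged KV
    view); these helper names are illustrative, not the authors' actual API.
    """

    def __init__(self, model):
        self.model = model
        self.video_cache = []   # C_v: read-heavy, appended per frame
        self.text_cache = []    # C_r: updated after each reasoning segment

    def ingest_frame(self, frame):
        # Non-blocking in the full system: frames can be absorbed into C_v
        # while decoding over the previous merged view is still running.
        self.video_cache.append(self.model.encode_frame_kv(frame))

    def reason_step(self, max_new_tokens=128):
        # Merge: a logical (pointer-level) view over C_v(t) and C_r(t-1).
        merged_view = (self.video_cache, self.text_cache)
        segment_kv, text = self.model.decode_segment(merged_view, max_new_tokens)
        # Split: only the text cache is updated; the video cache stays immutable.
        self.text_cache.append(segment_kv)
        return text
```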
## 4 Experiments
### 4.1 Experimental Settings
#### Video Benchmark.
We evaluate TaYS on an extended benchmark protocol derived from VideoEspresso, covering temporal, logical, scene, behavioral, and state understanding. The benchmark includes tasks such as _Event Dynamics_, _Causal Analysis_, _Theme Analysis_, and realistic applications like _Cooking Process_ and _Traffic Analysis_, forming a comprehensive testbed for streaming video reasoning across diverse semantic contexts.
#### Models and Baselines.
We implement TaYS on Qwen2.5-VL-3B/7B-Instruct. Comparative baselines include: (1) Batch w/o Thinking: a supervised model fine-tuned on direct QA pairs; (2) Batch w/ Thinking: incorporates frame-referenced intermediate reasoning prompts (the detailed CoT inference prompt is provided in [Appendix B](https://arxiv.org/html/2603.02872#S2a)); (3) Batch SFT: distilled from CoT-annotated data; and (4) Interleaved SFT: a streaming variant alternating frame input and reasoning generation without parallel caching. This setup isolates the benefits of parallel streaming against conventional batch and sequential interleaving paradigms.
#### Metrics.
Evaluation considers both reasoning quality and latency. Objective performance requires the semantic similarity of predictions to exceed a threshold and outperform distractors. Subjective performance is ranked by GPT-5[[37](https://arxiv.org/html/2603.02872#bib.bib66 "Introducing gpt-5")] based on logical consistency, factual accuracy, and contextual appropriateness. Latency is measured by _TTFT_ (time to first token) and _overall delay_ (total time for reasoning and response).
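A simple way to measure these two latency quantities for any token-streaming generator is sketched below; it illustrates the metric definitions and is not the evaluation harness used in the paper.

```python
import time

def measure_latency(generate_stream):
    """Sketch of the two latency metrics: TTFT and overall delay.

    `generate_stream()` is assumed to be a generator that yields output
    tokens as soon as they are produced.  TTFT is the time until the first
    token; the overall delay is the time until generation finishes.
    """
    start = time.perf_counter()
    ttft = None
    for _ in generate_stream():
        if ttft is None:
            ttft = time.perf_counter() - start
    overall_delay = time.perf_counter() - start
    return ttft, overall_delay
```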
### 4.2 Results on Benchmark
#### Objective Evaluation Results.
Table [1](https://arxiv.org/html/2603.02872#S3.SS4.SSS0.Px3) summarizes objective results. Explicit CoT prompting enhances base LVLM reasoning, while fine-tuning on temporally aligned trajectories yields further gains by aligning reasoning with visual evidence. Streaming-based models significantly outperform all batch baselines. Notably, the _Interleaved_ model achieves slightly higher accuracy than _TaYS_, suggesting that both streaming paradigms effectively capture temporal dependencies. However, objective metrics alone may not fully reflect reasoning coherence, necessitating further subjective evaluation.
Figure 4: Case study comparing TaYS with the Interleaved paradigm. TaYS produces temporally aligned reasoning, whereas the Interleaved model generates less accurate, fragmented descriptions.
#### Subjective Evaluation Results.
GPT-5 ranked model outputs based on overall quality (the detailed subjective evaluation prompt is provided in [Appendix B](https://arxiv.org/html/2603.02872#S2a)). TaYS achieved the highest normalized win rate of 43.7%, surpassing Batch (31.4%) and Interleaved (21.7%). TaYS excels in tasks requiring multi-step temporal reasoning, winning 61.1% of _Cooking Process_ samples (vs. 11.1% for Interleaved) and 75.0% of _Preparation Steps_. As illustrated in Figure [4](https://arxiv.org/html/2603.02872#S4.F4), TaYS aligns reasoning tightly with visual evidence, avoiding the fragmented descriptions produced by the Interleaved model and demonstrating superior temporal grounding in dynamic scenarios.
Table 2: Latency and accuracy comparison across different FPS. TaYS achieves the lowest TTFT and delay, demonstrating superior real-time efficiency.
### 4.3 Real-Time Streaming Reasoning Efficiency
We evaluate TaYS in real-time streaming scenarios where frames arrive progressively. As shown in Table [2](https://arxiv.org/html/2603.02872#S4.T2) and Figure [5](https://arxiv.org/html/2603.02872#S4.F5)(a), the Batch paradigm suffers from a persistent bottleneck ($\sim$10.6 s TTFT). The Interleaved paradigm responds faster but suffers from cumulative delay growth at higher frame rates due to sequential encode–generate dependencies.
In contrast, TaYS achieves near-zero decoder-level TTFT ($\approx 10^{-6}$ s) under the incremental warm-start setting, reflecting minimal decoding latency. Crucially, TaYS maintains a stable end-to-end delay of $\sim$12 s across all frame rates by parallelizing cache management and reasoning. Accuracy scales robustly with frame rate (peaking at 36.0% for FPS = 3), whereas baselines fluctuate. Figure [5](https://arxiv.org/html/2603.02872#S4.F5)(b) confirms TaYS’s compact latency profile, demonstrating its efficiency and reliability for streaming understanding.
Figure 5: (a) Latency comparison across paradigms. (b) Latency breakdown of TaYS. Parallel KV Cache design enables the lowest TTFT and stable delay.
### 4.4 Temporal Behavior of Streaming Reasoning
#### Fine-Grained Temporal Alignment.
We assess whether reasoning is triggered at the correct moments by measuring the temporal distance $\Delta t$ between reasoning steps and annotated keyframes. Figure [6](https://arxiv.org/html/2603.02872#S4.F6) shows TaYS achieves a mean deviation of 0.69 s (vs. 1.52 s for Interleaved). Additionally, 86.0% of TaYS’s reasoning falls within one second of keyframes (vs. 62.4% for Interleaved). The distribution indicates TaYS effectively concentrates reasoning around event boundaries rather than scattering outputs across irrelevant temporal segments, thereby confirming precise temporal grounding and acute event sensitivity.
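A small sketch of this deviation analysis, assuming reasoning steps and keyframes are given as timestamp lists, is shown below.

```python
import numpy as np

def temporal_deviation(reasoning_times, keyframe_times, window=1.0):
    """Sketch of the temporal-grounding analysis.

    For every reasoning step, take the distance to the nearest annotated
    keyframe; report the mean deviation and the fraction of steps that fall
    within `window` seconds of a keyframe (1 s in the paper's analysis).
    """
    keyframes = np.asarray(keyframe_times, dtype=float)
    deltas = np.array([np.min(np.abs(keyframes - t)) for t in reasoning_times])
    return float(deltas.mean()), float((deltas <= window).mean())

mean_dt, frac_within_1s = temporal_deviation([1.2, 4.8, 9.1], [1.0, 5.5, 9.0])
print(mean_dt, frac_within_1s)
```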
Figure 6: Temporal distance $\Delta t$ distribution. TaYS aligns reasoning more closely with keyframes, achieving higher precision than the interleaved baseline.

|
| 266 |
+
|
| 267 |
+
Figure 7: Semantic similarity between consecutive reasoning steps. TaYS maintains a smoother distribution, whereas the interleaved model exhibits repetitive peaks (high similarity), indicating redundancy.
#### Temporal Coherence of Reasoning.
We examine semantic continuity between consecutive reasoning outputs (Figure[7](https://arxiv.org/html/2603.02872#S4.F7 "Figure 7 ‣ Fine-Grained Temporal Alignment. ‣ 4.4 Temporal Behavior of Streaming Reasoning ‣ 4 Experiments ‣ Parallel KV Cache. ‣ Attention Pathways. ‣ 3.4 Parallel Streaming Paradigm ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models")). TaYS exhibits a smooth similarity profile, indicating reasoning evolves with visual changes. The suppression of high-similarity spikes suggests effective avoidance of stagnant or looping descriptions, ensuring sustained distinctiveness. Conversely, the Interleaved model displays prominent peaks, reflecting redundant and less adaptive reasoning that struggles to assimilate new events. These results demonstrate TaYS maintains a coherent, progressive reasoning trajectory aligned with the video’s temporal structure.
## 5 Conclusion
Video data naturally arrives as a continuous stream, yet most LVLMs rely on offline batch reasoning, fundamentally misaligned with the sequential nature of real-world visual inputs. We introduce the streaming thinking paradigm, enabling models to reason progressively as frames arrive and to refine outputs dynamically. We instantiate this via Think-as-You-See (TaYS), integrating streaming Chain-of-Thought generation, stream-aligned training, and a parallel KV-cache architecture. Experiments show that TaYS reduces latency while enhancing reasoning quality by grounding inferences in immediate visual evidence. By decoupling perception from reasoning, our approach resolves the trade-off between responsiveness and depth, allowing models to “think on their feet” without awaiting complete encoding. Analyses highlight controllable and temporally grounded reasoning, paving the way for responsive, reliable real-time video understanding. This work shifts the focus from static analysis to dynamic interaction, laying a foundation for embodied intelligence and open-world agents.
## References
*   [1] A. Arnab, A. Iscen, M. Caron, A. Fathi, and C. Schmid (2025). Temporal chain of thought: long-video understanding by thinking in frames. arXiv preprint arXiv:2507.02001.
*   [2] S. Bai, Y. Cai, R. Chen, et al. (2025). Qwen3-VL technical report. arXiv preprint arXiv:2511.21631.
*   [3] S. Bai, K. Chen, X. Liu, et al. (2025). Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923.
*   [4] I. Balažević, Y. Shi, P. Papalampidi, R. Chaabouni, S. Koppula, and O. J. Hénaff (2024). Memory consolidation enables long-context video understanding. In Proceedings of the 41st International Conference on Machine Learning (ICML).
*   [5] D. Chatterjee, E. Remelli, Y. Song, B. Tekin, A. Mittal, B. Bhatnagar, N. C. Camgoz, S. Hampali, E. Sauser, S. Ma, A. Yao, and F. Sener (2025). Streaming VideoLLMs for real-time procedural video understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 22586–22598.
*   [6] J. Chen, S. Xiao, P. Zhang, K. Luo, D. Lian, and Z. Liu (2024). M3-Embedding: multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In Findings of the Association for Computational Linguistics: ACL 2024, pp. 2318–2335.
*   [7] J. Chen, Z. Lv, S. Wu, K. Q. Lin, C. Song, D. Gao, J. Liu, Z. Gao, D. Mao, and M. Z. Shou (2024). VideoLLM-online: online video large language model for streaming video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18407–18418.
*   [8] J. Chen, Z. Lv, S. Wu, K. Q. Lin, C. Song, D. Gao, J. Liu, Z. Gao, D. Mao, and M. Z. Shou (2024). VideoLLM-online: online video large language model for streaming video. arXiv:2406.11816.
*   [9] J. Chen, Z. Zeng, Y. Lin, W. Li, Z. Ma, and M. Z. Shou (2025). LiveCC: learning video LLM with streaming speech transcription at scale. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 29083–29095.
*   [10] Q. Chen, L. Qin, J. Liu, D. Peng, J. Guan, P. Wang, M. Hu, Y. Zhou, T. Gao, and W. Che (2025). Towards reasoning era: a survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567.
*   [11] Z. Cheng, Q. Chen, J. Zhang, H. Fei, X. Feng, W. Che, M. Li, and L. Qin (2025). CoMT: a novel benchmark for chain of multi-modal thought on large vision-language models. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, pp. 23678–23686.
*   [12] G. Comanici, E. Bieber, M. Schaekermann, I. Pasupat, N. Sachdeva, I. Dhillon, M. Blistein, O. Ram, D. Zhang, E. Rosen, et al. (2025). Gemini 2.5: pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. arXiv preprint arXiv:2507.06261.
*   [13] S. Di, Z. Yu, G. Zhang, H. Li, H. Cheng, B. Li, W. He, F. Shu, H. Jiang, et al. (2025). Streaming video question-answering with in-context video KV-cache retrieval. In ICLR.
|
| 294 |
+
* [14]L. Ding, A. Zhao, F. Ye, Z. Chen, and X. Shen (2026)From llms to lrms: rethinking pruning for reasoning-centric models. arXiv preprint arXiv:2601.18091. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 295 |
+
* [15]Y. Fan, J. Tong, A. Zhao, and X. Shen (2026)What do visual tokens really encode? uncovering sparsity and redundancy in multimodal large language models. arXiv preprint arXiv:2603.00510. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 296 |
+
* [16]Y. Fan, A. Zhao, J. Fu, J. Tong, H. Su, Y. Pan, W. Zhang, and X. Shen (2025-11)VisiPruner: decoding discontinuous cross-modal dynamics for efficient multimodal LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, C. Christodoulopoulos, T. Chakraborty, C. Rose, and V. Peng (Eds.), Suzhou, China, pp.18885–18902. External Links: [Link](https://aclanthology.org/2025.emnlp-main.955/), [Document](https://dx.doi.org/10.18653/v1/2025.emnlp-main.955), ISBN 979-8-89176-332-6 Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 297 |
+
* [17]H. Fei, S. Wu, W. Ji, H. Zhang, M. Zhang, M. Lee, and W. Hsu (2024-21–27 Jul)Video-of-thought: step-by-step video reasoning from perception to cognition. In Proceedings of the 41st International Conference on Machine Learning, R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, and F. Berkenkamp (Eds.), Proceedings of Machine Learning Research, Vol. 235, pp.13109–13125. External Links: [Link](https://proceedings.mlr.press/v235/fei24a.html)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 298 |
+
* [18]H. Ge, Y. Wang, K. Chang, H. Wu, and Y. Cai (2025)FameMind: frame-interleaved video reasoning via reinforcement learning. arXiv e-prints, pp.arXiv–2509. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 299 |
+
* [19]S. Ghazanfari, F. Croce, N. Flammarion, P. Krishnamurthy, F. Khorrami, and S. Garg (2025)Chain-of-frames: advancing video understanding in multimodal llms via frame-aware reasoning. arXiv preprint arXiv:2506.00318. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 300 |
+
* [20]A. C. Graesser, M. Singer, and T. Trabasso (1994)Constructing inferences during narrative text comprehension.. Psychological review 101 (3), pp.371. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p3.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 301 |
+
* [21]S. Han, W. Huang, H. Shi, L. Zhuo, X. Su, S. Zhang, X. Zhou, X. Qi, Y. Liao, and S. Liu (2025-06)VideoEspresso: a large-scale chain-of-thought dataset for fine-grained video reasoning via core frame selection. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pp.26181–26191. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§1](https://arxiv.org/html/2603.02872#S1.p6.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 302 |
+
* [22]B. He, H. Li, Y. K. Jang, M. Jia, X. Cao, A. Shah, A. Shrivastava, and S. Lim (2024)MA-lmm: memory-augmented large multimodal model for long-term video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p2.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 303 |
+
* [23]V. Himakunthala, A. Ouyang, D. Rose, R. He, A. Mei, Y. Lu, C. Sonar, M. Saxon, and W. Wang (2023-12)Let’s think frame by frame with VIP: a video infilling and prediction dataset for evaluating video chain-of-thought. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, H. Bouamor, J. Pino, and K. Bali (Eds.), Singapore, pp.204–219. External Links: [Link](https://aclanthology.org/2023.emnlp-main.15/), [Document](https://dx.doi.org/10.18653/v1/2023.emnlp-main.15)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 304 |
+
* [24]Y. Hu, Z. Yang, S. Wang, S. Qian, B. Wen, F. Yang, T. Gao, and C. Xu (2025)StreamingCoT: a dataset for temporal dynamics and multimodal chain-of-thought reasoning in streaming videoqa. In Proceedings of the 33rd ACM International Conference on Multimedia, pp.13464–13470. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 305 |
+
* [25]Y. Hu, H. Hua, Z. Yang, W. Shi, N. A. Smith, and J. Luo (2022)Promptcap: prompt-guided task-aware image captioning. arXiv preprint arXiv:2211.09699. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 306 |
+
* [26]J. Huang, X. Liu, S. Song, R. Hou, H. Chang, J. Lin, and S. Bai (2025)Revisiting multimodal positional encoding in vision-language models. External Links: 2510.23095, [Link](https://arxiv.org/abs/2510.23095)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 307 |
+
* [27]X. Huang, H. Zhou, and K. Han (2025-07)PruneVid: visual token pruning for efficient video large language models. In Findings of the Association for Computational Linguistics: ACL 2025, W. Che, J. Nabende, E. Shutova, and M. T. Pilehvar (Eds.), Vienna, Austria, pp.19959–19973. External Links: [Link](https://aclanthology.org/2025.findings-acl.1024/), [Document](https://dx.doi.org/10.18653/v1/2025.findings-acl.1024), ISBN 979-8-89176-256-5 Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 308 |
+
* [28]A. Hurst, A. Lerer, A. P. Goucher, A. Perelman, A. Ramesh, A. Clark, A. Ostrow, A. Welihinda, A. Hayes, A. Radford, et al. (2024)Gpt-4o system card. arXiv preprint arXiv:2410.21276. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§3.2](https://arxiv.org/html/2603.02872#S3.SS2.SSS0.Px2.p1.4 "Structured Trajectory Construction. ‣ 3.2 Streaming Video CoT Generation ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 309 |
+
* [29]S. Lee, J. Wang, Z. Zhang, D. Fan, and X. Li (2024)Video token merging for long video understanding. Advances in Neural Information Processing Systems 37, pp.13851–13871. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p2.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 310 |
+
* [30]Z. Li, X. Wu, H. Du, F. Liu, H. Nghiem, and G. Shi (2025)A survey of state of the art large vision language models: alignment, benchmark, evaluations and challenges. External Links: 2501.02189, [Link](https://arxiv.org/abs/2501.02189)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 311 |
+
* [31]B. Lin, Y. Ye, B. Zhu, J. Cui, M. Ning, P. Jin, and L. Yuan (2024-11)Video-LLaVA: learning united visual representation by alignment before projection. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Y. Al-Onaizan, M. Bansal, and Y. Chen (Eds.), Miami, Florida, USA, pp.5971–5984. External Links: [Link](https://aclanthology.org/2024.emnlp-main.342/), [Document](https://dx.doi.org/10.18653/v1/2024.emnlp-main.342)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 312 |
+
* [32]J. Lin, J. Tong, H. Wu, J. Zhang, J. Liu, X. Jin, and X. Shen (2026)Speak while watching: unleashing true real-time video understanding capability of multimodal large language models. arXiv preprint arXiv:2601.06843. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p5.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 313 |
+
* [33]J. Liu, Z. Yu, S. Lan, S. Wang, R. Fang, J. Kautz, H. Li, and J. M. Alvare (2024)Streamchat: chatting with streaming video. arXiv preprint arXiv:2412.08646. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 314 |
+
* [34]W. Liu, H. Wu, X. Qiu, Y. Fan, Y. Zhang, A. Zhao, Y. Ma, and X. Shen (2026)ViCA: efficient multimodal llms with vision-only cross-attention. arXiv preprint arXiv:2602.07574. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 315 |
+
* [35]J. Lu, H. Yu, S. Xu, S. Ran, G. Tang, S. Wang, B. Shan, T. Fu, H. Feng, J. Tang, et al. (2025)Prolonged reasoning is not all you need: certainty-based adaptive routing for efficient llm/mllm reasoning. arXiv preprint arXiv:2505.15154. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 316 |
+
* [36]M. Luo, Z. Xue, A. Dimakis, and K. Grauman (2025)When thinking drifts: evidential grounding for robust video reasoning. arXiv preprint arXiv:2510.06077. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 317 |
+
* [37]OpenAI (2025)Introducing gpt-5. Note: [https://openai.com/index/introducing-gpt-5/](https://openai.com/index/introducing-gpt-5/)Accessed: 2025-11-10 Cited by: [§4.1](https://arxiv.org/html/2603.02872#S4.SS1.SSS0.Px3.p1.1 "Metrics. ‣ 4.1 Experimental Settings ‣ 4 Experiments ‣ Parallel KV Cache. ‣ Attention Pathways. ‣ 3.4 Parallel Streaming Paradigm ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 318 |
+
* [38]Y. Peng, P. Wang, X. Wang, Y. Wei, J. Pei, W. Qiu, A. Jian, Y. Hao, J. Pan, T. Xie, et al. (2025)Skywork r1v: pioneering multimodal reasoning with chain-of-thought. arXiv preprint arXiv:2504.05599. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 319 |
+
* [39]R. Qian, S. Ding, X. Dong, P. Zhang, Y. Zang, Y. Cao, D. Lin, and J. Wang (2025)Dispider: enabling video llms with active real-time interaction via disentangled perception, decision, and reaction. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp.24045–24055. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 320 |
+
* [40]R. Qian, X. Dong, P. Zhang, Y. Zang, S. Ding, D. Lin, and J. Wang (2024)Streaming long video understanding with large language models. In Advances in Neural Information Processing Systems, A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang (Eds.), Vol. 37, pp.119336–119360. External Links: [Document](https://dx.doi.org/10.52202/079017-3792), [Link](https://proceedings.neurips.cc/paper_files/paper/2024/file/d7ce06e9293c3d8e6cb3f80b4157f875-Paper-Conference.pdf)Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p2.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 321 |
+
* [41]R. Qian, X. Dong, P. Zhang, Y. Zang, S. Ding, D. Lin, and J. Wang (2024)Streaming long video understanding with large language models. Advances in Neural Information Processing Systems 37, pp.119336–119360. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 322 |
+
* [42]J. Rao, H. Wu, H. Jiang, Y. Zhang, Y. Wang, and W. Xie (2025)Towards universal soccer video understanding. External Links: 2412.01820, [Link](https://arxiv.org/abs/2412.01820)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 323 |
+
* [43]X. Shen, Y. Xiong, C. Zhao, L. Wu, J. Chen, C. Zhu, Z. Liu, F. Xiao, B. Varadarajan, F. Bordes, et al. (2024)Longvu: spatiotemporal adaptive compression for long video-language understanding. arXiv preprint arXiv:2410.17434. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p2.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 324 |
+
* [44]X. Shen, Y. Wang, X. Shi, Y. Wang, P. Zhao, and J. Gu (2025)Efficient reasoning with hidden thinking. arXiv preprint arXiv:2501.19201. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 325 |
+
* [45]K. Stenning and M. Van Lambalgen (2012)Human reasoning and cognitive science. MIT Press. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p3.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 326 |
+
* [46]J. Su, M. Ahmed, Y. Lu, S. Pan, W. Bo, and Y. Liu (2024)Roformer: enhanced transformer with rotary position embedding. Neurocomputing 568, pp.127063. Cited by: [§3.4](https://arxiv.org/html/2603.02872#S3.SS4.SSS0.Px2.p1.2 "Streaming Positional Encoding. ‣ 3.4 Parallel Streaming Paradigm ‣ 3 Methodology ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 327 |
+
* [47]Z. Su, P. Xia, H. Guo, Z. Liu, Y. Ma, X. Qu, J. Liu, Y. Li, K. Zeng, Z. Yang, et al. (2025)Thinking with images for multimodal reasoning: foundations, methods, and future frontiers. arXiv preprint arXiv:2506.23918. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 328 |
+
* [48]J. Tong, Y. Fan, A. Zhao, Y. Ma, and X. Shen (2025)StreamingThinker: large language models can think while reading. arXiv preprint arXiv:2510.17238. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p3.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 329 |
+
* [49]J. Tong, J. Fu, Z. Lin, Y. Fan, A. Zhao, H. Su, and X. Shen (2025)Llm as effective streaming processor: bridging streaming-batch mismatches with group position encoding. In Findings of the Association for Computational Linguistics: ACL 2025, pp.23497–23517. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p5.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 330 |
+
* [50]J. Tong, Z. Wang, Y. Ren, P. Yin, H. Wu, W. Zhang, and X. Shen (2026)From static inference to dynamic interaction: navigating the landscape of streaming large language models. External Links: 2603.04592, [Link](https://arxiv.org/abs/2603.04592)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p3.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§1](https://arxiv.org/html/2603.02872#S1.p5.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 331 |
+
* [51]Y. Wang, Y. Zeng, J. Zheng, X. Xing, J. Xu, and X. Xu (2024-08)VideoCoT: a video chain-of-thought dataset with active annotation tool. In Proceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR), J. Gu, T. (. Fu, D. Hudson, A. Celikyilmaz, and W. Wang (Eds.), Bangkok, Thailand, pp.92–101. External Links: [Link](https://aclanthology.org/2024.alvr-1.8/), [Document](https://dx.doi.org/10.18653/v1/2024.alvr-1.8)Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 332 |
+
* [52]Y. Wang, Y. Wang, D. Zhao, C. Xie, and Z. Zheng (2024)Videohallucer: evaluating intrinsic and extrinsic hallucinations in large video-language models. arXiv preprint arXiv:2406.16338. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 333 |
+
* [53]Y. Weng, M. Han, H. He, X. Chang, and B. Zhuang (2024)LongVLM: efficient long video understanding via large language models. External Links: 2404.03384, [Link](https://arxiv.org/abs/2404.03384)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 334 |
+
* [54]H. Wu, Y. Fan, J. Dai, J. Tong, Y. Ma, and X. Shen (2026)HiDrop: hierarchical vision token reduction in mllms via late injection, concave pyramid pruning, and early exit. arXiv preprint arXiv:2602.23699. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 335 |
+
* [55]H. Wu, J. Tong, X. Wang, Y. Tan, C. Zeng, A. Antsiferova, and X. Shen (2026-02)From data to model: a survey of the compression lifecycle in mllms. techrxiv preprinttechrxiv.177220375.55495124/v1. External Links: [Link](http://dx.doi.org/10.36227/techrxiv.177220375.55495124/v1), [Document](https://dx.doi.org/10.36227/techrxiv.177220375.55495124/v1)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 336 |
+
* [56]K. Xiang, Z. Liu, Z. Jiang, Y. Nie, K. Cai, Y. Yin, R. Huang, H. Fan, H. Li, W. Huang, et al. (2025)Can atomic step decomposition enhance the self-structured reasoning of multimodal large models?. arXiv preprint arXiv:2503.06252. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 337 |
+
* [57]H. Xiong, Z. Yang, J. Yu, Y. Zhuge, L. Zhang, J. Zhu, and H. Lu (2025)Streaming video understanding and multi-round interaction with memory-enhanced knowledge. External Links: 2501.13468, [Link](https://arxiv.org/abs/2501.13468)Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 338 |
+
* [58]R. Xu, G. Xiao, Y. Chen, L. He, K. Peng, Y. Lu, and S. Han (2025)StreamingVLM: real-time understanding for infinite video streams. arXiv preprint arXiv:2510.09608. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 339 |
+
* [59]H. Yang, F. Tang, L. Zhao, X. An, M. Hu, H. Li, X. Zhuang, Y. Lu, X. Zhang, A. Swikir, et al. (2025)Streamagent: towards anticipatory agents for streaming video understanding. arXiv preprint arXiv:2508.01875. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px2.p1.1 "Streaming and Memory-Based Video Understanding. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 340 |
+
* [60]S. Yin, C. Fu, S. Zhao, K. Li, X. Sun, T. Xu, and E. Chen (2024-11)A survey on multimodal large language models. National Science Review 11 (12). External Links: ISSN 2053-714X, [Link](http://dx.doi.org/10.1093/nsr/nwae403), [Document](https://dx.doi.org/10.1093/nsr/nwae403)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p1.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 341 |
+
* [61]J. Zhang, Y. Jiao, S. Chen, N. Zhao, Z. Tan, H. Li, and J. Chen (2024)Eventhallusion: diagnosing event hallucinations in video llms. arXiv preprint arXiv:2409.16597. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 342 |
+
* [62]Y. Zhang, X. Liu, R. Tao, Q. Chen, H. Fei, W. Che, and L. Qin (2025)ViTCoT: video-text interleaved chain-of-thought for boosting video understanding in large language models. In Proceedings of the 33rd ACM International Conference on Multimedia, MM ’25, New York, NY, USA, pp.5267–5276. External Links: ISBN 9798400720352, [Link](https://doi.org/10.1145/3746027.3755837), [Document](https://dx.doi.org/10.1145/3746027.3755837)Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p2.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 343 |
+
* [63]Z. Zhang, A. Zhang, M. Li, H. Zhao, G. Karypis, and A. Smola (2023)Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 344 |
+
* [64]A. Zhao, Z. Chen, J. Tong, Y. Fan, F. Ye, S. Li, Y. Ma, W. Li, and X. Shen (2026)On-policy supervised fine-tuning for efficient reasoning. arXiv preprint arXiv:2602.13407. Cited by: [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p3.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 345 |
+
* [65]G. Zheng, B. Yang, J. Tang, H. Zhou, and S. Yang (2023)Ddcot: duty-distinct chain-of-thought prompting for multimodal reasoning in language models. Advances in Neural Information Processing Systems 36, pp.5168–5191. Cited by: [§1](https://arxiv.org/html/2603.02872#S1.p2.1 "1 Introduction ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models"), [§2](https://arxiv.org/html/2603.02872#S2.SS0.SSS0.Px1.p1.1 "Multimodal Chain-of-Thought Reasoning. ‣ 2 Related Work ‣ Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models").
|
| 346 |
+

Think-as-You-See: Streaming Chain-of-Thought Reasoning for Large Vision-Language Models

Supplementary Material

A Details of Streaming CoT Pipeline
-----------------------------------

### A.1 CLIP-Guided Frame ID Alignment

#### Step 1: Semantic anchoring before resampling.

Given a video $\mathcal{V}=\{F_{t}\}_{t=1}^{T}$ with timestamps $\{\tau_{t}\}_{t=1}^{T}$ and annotated keyframe captions $\mathcal{C}=\{c_{k}\}_{k=1}^{K}$, we first compute CLIP embeddings for all frames and captions:

$$\boldsymbol{f}_{t}=\mathrm{Enc}_{\text{CLIP}}^{\text{img}}(F_{t}),\qquad\boldsymbol{g}_{k}=\mathrm{Enc}_{\text{CLIP}}^{\text{text}}(c_{k}).$$

We utilize cosine similarity throughout the alignment process:

$$\mathrm{sim}(\boldsymbol{a},\boldsymbol{b})=\frac{\boldsymbol{a}^{\top}\boldsymbol{b}}{\|\boldsymbol{a}\|\,\|\boldsymbol{b}\|}.$$

For each keyframe caption $c_{k}$, we identify its most similar frame index:

$$t_{k}^{\star}=\arg\max_{t\in\{1,\dots,T\}}\;\mathrm{sim}(\boldsymbol{f}_{t},\boldsymbol{g}_{k}),$$

recording the anchor timestamp $\widehat{\tau}_{k}=\tau_{t_{k}^{\star}}$. These anchors serve as semantic locks preserved during subsequent resampling.
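
For concreteness, the anchoring step amounts to an argmax over a cosine-similarity matrix. The sketch below is a minimal NumPy illustration, not the released pipeline code: `frame_embs`, `caption_embs`, and `timestamps` are assumed to be precomputed with any off-the-shelf CLIP encoder, one row per frame or caption.

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Row-wise L2 normalization (see the practical notes in A.3)."""
    return x / np.clip(np.linalg.norm(x, axis=-1, keepdims=True), 1e-12, None)

def anchor_captions(frame_embs: np.ndarray,    # (T, d) CLIP image embeddings
                    caption_embs: np.ndarray,  # (K, d) CLIP text embeddings
                    timestamps: np.ndarray):   # (T,)  frame timestamps in seconds
    """For each keyframe caption, return its best-matching frame index and anchor timestamp."""
    f = l2_normalize(frame_embs)
    g = l2_normalize(caption_embs)
    sim = g @ f.T                  # (K, T) cosine similarities
    t_star = sim.argmax(axis=1)    # t_k* = argmax_t sim(f_t, g_k)
    tau_hat = timestamps[t_star]   # anchor timestamps tau^_k
    return t_star, tau_hat
```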

#### Step 2: Timestamp-based resampling at 2 FPS with anchor preservation.

Let the target sampling interval be $\Delta=0.5$ s (2 FPS) and the target grid be $\{\tau^{\prime}_{t^{\prime}}\}_{t^{\prime}=1}^{T^{\prime}}$ with $\tau^{\prime}_{t^{\prime}}=(t^{\prime}-1)\Delta$. For each target timestamp $\tau^{\prime}_{t^{\prime}}$, we select the frame $F_{t^{\prime}}$ as:

$$F_{t^{\prime}}=\begin{cases}F_{t_{k}^{\star}},&\text{if }\tau^{\prime}_{t^{\prime}}\in[\widehat{\tau}_{k}-\epsilon,\;\widehat{\tau}_{k}+\epsilon]\text{ for some }k,\\ \arg\min_{F_{t}}\;|\tau_{t}-\tau^{\prime}_{t^{\prime}}|,&\text{otherwise},\end{cases}$$

where $\epsilon=0.1$ s is a tolerance window ensuring every semantic anchor $\widehat{\tau}_{k}$ snaps to the nearest sampling point. Post-selection, frame indices are renormalized, and clips are truncated to the maximum input duration (30 s).
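
A hedged NumPy sketch of this anchor-preserving resampler is shown below, reusing the `timestamps` and `tau_hat` arrays from the previous snippet; `delta`, `eps`, and the 30 s cap follow the defaults listed in A.3, and index renormalization is left to downstream code.

```python
import numpy as np

def resample_with_anchors(timestamps: np.ndarray,  # (T,) original frame times (s)
                          tau_hat: np.ndarray,     # (K,) anchor timestamps from Step 1
                          delta: float = 0.5,      # target interval: 2 FPS
                          eps: float = 0.1,        # anchor tolerance window (s)
                          max_dur: float = 30.0):  # maximum input duration (s)
    """Pick one source-frame index per target timestamp, snapping to semantic anchors."""
    end = min(float(timestamps[-1]), max_dur)
    target_grid = np.arange(0.0, end + 1e-9, delta)   # tau'_{t'} = (t' - 1) * delta
    picked = []
    for tau_p in target_grid:
        hits = np.flatnonzero(np.abs(tau_hat - tau_p) <= eps)
        if hits.size > 0:
            # An anchor falls inside the tolerance window: snap to its exact frame.
            idx = int(np.argmin(np.abs(timestamps - tau_hat[hits[0]])))
        else:
            # Otherwise take the temporally nearest frame.
            idx = int(np.argmin(np.abs(timestamps - tau_p)))
        picked.append(idx)
    return picked
```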

### A.2 Quality Assurance and Temporal Filtering

To ensure generated frame-level trajectories are temporally grounded and semantically reliable, we apply a three-stage filtering process (Algorithm [1](https://arxiv.org/html/2603.02872#alg1)). First, we identify question-relevant keyframes via embedding similarity. Second, we prune temporally adjacent captions with redundant semantics to preserve distinct perceptual events. Finally, we format the supervision sequence by assigning </EOT> to selected keyframes and <SKIP> to others. This yields a temporally sparse but well-aligned target stream, guiding the model to reason only at meaningful moments.

Algorithm 1 Quality Assurance and Temporal Filtering

1: Question $Q_{t}$, keyframe captions $\{c_{k}\}$
2: Thresholds $\tau_{q}=0.7$, $\tau_{\mathrm{adj}}=0.9$
3: Step 1: Question–caption relevance screening
4: for each caption $c_{k}$ do
5:   $s_{k}\leftarrow\mathrm{sim}(e(Q_{t}),e(c_{k}))$
6: end for
7: $\mathcal{K}_{t}\leftarrow\{k\mid s_{k}\geq\tau_{q}\}$
8: Step 2: Anti-redundancy temporal de-duplication
9: Sort $\mathcal{K}_{t}$ by time
10: $\mathcal{K}^{\star}_{t}\leftarrow[\,]$
11: for each $k$ in $\mathcal{K}_{t}$ do
12:   if $\mathcal{K}^{\star}_{t}$ is empty then
13:     Append $k$ to $\mathcal{K}^{\star}_{t}$
14:   else
15:     Let $j$ be the last element in $\mathcal{K}^{\star}_{t}$
16:     $s_{j,k}\leftarrow\mathrm{sim}(e(c_{j}),e(c_{k}))$
17:     if $s_{j,k}<\tau_{\mathrm{adj}}$ then
18:       Append $k$ to $\mathcal{K}^{\star}_{t}$
19:     end if
20:   end if
21: end for
22: Step 3: Formatting supervision targets
23: for each sampled frame index $t^{\prime}$ do
24:   if $t^{\prime}\in\mathcal{K}^{\star}_{t}$ then
25:     Emit $[R_{t^{\prime}}]$ </EOT>
26:   else
27:     Emit <SKIP>
28:   end if
29: end for
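
Read as Python, the same filtering logic is compact. The snippet below is an illustrative sketch under the stated defaults (τ_q = 0.7, τ_adj = 0.9): `embed` stands in for any sentence-embedding model used for semantic similarity, and the placeholder string "[R_t]" marks where the frame-level reasoning text would appear in the real supervision targets.

```python
from typing import Callable, List, Sequence, Set
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_supervision(question: str,
                      captions: Sequence[str],        # keyframe captions in temporal order
                      caption_frames: Sequence[int],  # sampled-frame index of each caption
                      num_frames: int,
                      embed: Callable[[str], np.ndarray],
                      tau_q: float = 0.7,
                      tau_adj: float = 0.9) -> List[str]:
    q = embed(question)
    # Step 1: keep only captions relevant to the question.
    keep = [k for k, c in enumerate(captions) if cosine(q, embed(c)) >= tau_q]
    # Step 2: drop temporally adjacent captions that are near-duplicates.
    dedup: List[int] = []
    for k in keep:
        if not dedup or cosine(embed(captions[dedup[-1]]), embed(captions[k])) < tau_adj:
            dedup.append(k)
    # Step 3: one supervision target per sampled frame.
    selected: Set[int] = {caption_frames[k] for k in dedup}
    return ["[R_t] </EOT>" if t in selected else "<SKIP>" for t in range(num_frames)]
```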

### A.3 Practical Notes

* **Embedding normalization.** All embeddings are $\ell_{2}$-normalized prior to similarity computation to stabilize thresholds.
* **Batching.** Frame and caption embeddings are computed in batches to mitigate I/O latency for long videos.
* **Hyperparameters.** Default values are $\Delta=0.5$ s, $\epsilon=0.1$ s, $\tau_{q}=0.7$, and $\tau_{\mathrm{adj}}=0.9$, balancing temporal precision with retention of key semantic content.

### A.4 Details of Dataset

The dataset spans 12 video reasoning tasks covering fine-grained event interpretation and high-level semantic understanding. As shown in Figure [8](https://arxiv.org/html/2603.02872#S1.F8) and Table [3](https://arxiv.org/html/2603.02872#S1.T3), the task distribution is long-tailed: _Causal Analysis_ and _Event Dynamic Analysis_ dominate, while _Ingredient Analysis_ and _Behavior Analysis_ are less frequent. This reflects the natural prevalence of reasoning behaviors in real-world video content while ensuring broad coverage for multi-step reasoning evaluation.

Temporal structure also varies significantly. Figure [9](https://arxiv.org/html/2603.02872#S1.F9) illustrates the distribution of keyframe counts, revealing a wide spectrum of temporal sparsity. Some videos contain sparse salient moments, while others feature dense, extended event sequences. This variability is critical for evaluating streaming reasoning, requiring models to adapt to varying event frequencies and accurately identify meaningful visual changes.



Figure 8: Task distribution in the dataset.



Figure 9: Distribution of keyframe counts per sample.

Table 3: Distribution of task categories in training and test sets.

B Prompt Details
----------------

We present the complete prompts used in our pipeline, including QA construction (Figure [10](https://arxiv.org/html/2603.02872#S2.F10)), CoT inference (Figure [11](https://arxiv.org/html/2603.02872#S2.F11)), and subjective evaluation (Figure [12](https://arxiv.org/html/2603.02872#S2.F12)).



Figure 10: Prompt template for QA construction.



Figure 11: Prompt template for CoT inference.



Figure 12: Prompt template for subjective evaluation.

C Training Details
------------------

We train TaYS using a streaming-aware decoder-only objective, where visual and reasoning tokens are interleaved with causal masking. Optimization employs AdamW with cosine decay, mixed-precision (bfloat16), gradient accumulation, activation checkpointing, and DeepSpeed ZeRO-3 for memory efficiency. The vision encoder remains frozen, while the multimodal projector and LLM backbone are fine-tuned. We regulate video token length via pixel-based constraints and train for two epochs with an effective sequence length of 8192 tokens.
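
The optimization recipe above maps naturally onto a standard Hugging Face-style fine-tuning configuration. The sketch below is illustrative only: values not stated in the text (batch size, learning rate, accumulation steps) are placeholders to be read off Table 4, the 8192-token sequence cap is enforced by the data pipeline rather than by these arguments, and the `visual.` prefix for the frozen vision tower is an assumption about the backbone's module naming.

```python
import torch.nn as nn
from transformers import TrainingArguments

def freeze_vision_encoder(model: nn.Module) -> None:
    """Freeze the vision tower; the multimodal projector and LLM backbone stay trainable.
    Submodule naming follows common Qwen2.5-VL-style conventions and may differ."""
    for name, param in model.named_parameters():
        if name.startswith("visual."):
            param.requires_grad = False

training_args = TrainingArguments(
    output_dir="tays-sft",
    num_train_epochs=2,                # two epochs
    per_device_train_batch_size=1,     # placeholder; see Table 4
    gradient_accumulation_steps=8,     # placeholder; see Table 4
    learning_rate=1e-5,                # placeholder; see Table 4
    optim="adamw_torch",               # AdamW
    lr_scheduler_type="cosine",        # cosine decay
    bf16=True,                         # bfloat16 mixed precision
    gradient_checkpointing=True,       # activation checkpointing
    deepspeed="ds_zero3_config.json",  # DeepSpeed ZeRO-3
)
```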

Table 4: Training hyperparameters for TaYS.

D Evaluation Details
--------------------

#### Construction of Test Set.

Following the VideoEspresso protocol, we construct the test set with three distractor options per question. Distractors are designed to match the correct answer in contextual relevance and linguistic form while containing explicit factual inaccuracies, ensuring a discriminative evaluation. We apply the same answer-rewriting procedure as in training to maintain consistency.

Algorithm 2 Two-Stage Objective Evaluation

1: Prediction $\tilde{y}$, reference answer $y^{\star}$, options $\mathcal{O}=\{o_{1},o_{2},o_{3},o_{4}\}$, correct option $o^{\star}\in\mathcal{O}$, similarity function $\mathrm{sim}$, threshold $\tau$
2: $s_{\mathrm{ref}}\leftarrow\mathrm{sim}(\tilde{y},y^{\star})$
3: if $s_{\mathrm{ref}}<\tau$ then
4:   return Incorrect
5: end if
6: for each $o_{j}\in\mathcal{O}$ do
7:   $s_{j}\leftarrow\mathrm{sim}(\tilde{y},o_{j})$
8: end for
9: $s_{\mathrm{opt}}\leftarrow\mathrm{sim}(\tilde{y},o^{\star})$
10: $s_{\max}^{\mathrm{neg}}\leftarrow\max\{s_{j}:o_{j}\in\mathcal{O},\,o_{j}\neq o^{\star}\}$
11: if $s_{\mathrm{opt}}\geq\tau$ and $s_{\mathrm{opt}}>s_{\max}^{\mathrm{neg}}$ then
12:   return Correct
13: else
14:   return Incorrect
15: end if

#### Objective Evaluation Protocol.

For each sample, we evaluate a free-form prediction $\tilde{y}$ against a reference answer $y^{\star}$ and multiple-choice options $\mathcal{O}=\{o_{1},o_{2},o_{3},o_{4}\}$, where $o^{\star}$ is the correct option. We use a semantic similarity function $\mathrm{sim}(\cdot,\cdot)$ with a threshold $\tau=0.8$.

Stage 1: Reference similarity. We first compute $s_{\mathrm{ref}}=\mathrm{sim}(\tilde{y},y^{\star})$. If $s_{\mathrm{ref}}<\tau$, the prediction is deemed incorrect.

Stage 2: Option discrimination. We compute similarities $s_{j}=\mathrm{sim}(\tilde{y},o_{j})$ for all options. Let $s_{\mathrm{opt}}=\mathrm{sim}(\tilde{y},o^{\star})$ and $s_{\max}^{\mathrm{neg}}=\max_{o_{j}\neq o^{\star}}s_{j}$. A prediction is correct only if:

$$s_{\mathrm{ref}}\geq\tau,\qquad s_{\mathrm{opt}}\geq\tau,\qquad\text{and}\qquad s_{\mathrm{opt}}>s_{\max}^{\mathrm{neg}}.$$
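
For reference, a direct Python transcription of this two-stage check might look as follows. It is a sketch in which `sim` is any sentence-level semantic similarity function (e.g., cosine similarity of text embeddings) and τ = 0.8 follows the protocol above.

```python
from typing import Callable, Sequence

def two_stage_correct(pred: str,
                      reference: str,
                      options: Sequence[str],
                      correct_option: str,
                      sim: Callable[[str, str], float],
                      tau: float = 0.8) -> bool:
    """Return True iff the free-form prediction passes both evaluation stages."""
    # Stage 1: the prediction must be semantically close to the reference answer.
    if sim(pred, reference) < tau:
        return False
    # Stage 2: it must also match the correct option better than any distractor.
    s_opt = sim(pred, correct_option)
    s_max_neg = max(sim(pred, o) for o in options if o != correct_option)
    return s_opt >= tau and s_opt > s_max_neg
```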

#### Latency Evaluation Protocol.

We quantify real-time performance using two metrics: (1) Time to First Token (TTFT), measuring the interval between the arrival of the first frame and the emission of the first token; (2) Overall Delay, measuring the total time to complete reasoning and produce the final answer. All inferences run on identical hardware with token-level timing resolution to ensure fair comparison.
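
Both metrics can be logged with simple wall-clock instrumentation around the token stream. The sketch below is schematic rather than the authors' harness: `token_stream` stands in for whatever incremental decoding interface the serving stack exposes, and timestamps are taken per token, matching the token-level timing resolution mentioned above.

```python
import time
from typing import Iterable, Optional, Tuple

def measure_latency(token_stream: Iterable[str]) -> Tuple[Optional[float], float]:
    """Return (TTFT, overall delay) in seconds for one streamed response.

    TTFT is measured from the arrival of the first frame (taken here as the
    moment the stream is entered) to the first emitted token; overall delay
    runs until the final answer token has been produced."""
    t_start = time.perf_counter()   # first frame arrives / request starts
    ttft: Optional[float] = None
    t_last = t_start
    for _token in token_stream:     # tokens arrive incrementally
        now = time.perf_counter()
        if ttft is None:
            ttft = now - t_start
        t_last = now
    return ttft, t_last - t_start
```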
|