---
base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
language:
  - en
license: apache-2.0
pipeline_tag: video-text-to-text
library_name: transformers
---

# Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding

This model is a checkpoint for Video-MTR, presented in the paper *Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding*.

## Abstract

Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which suffer from complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipelines, which generate predictions in a single turn, Video-MTR performs reasoning over multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To ensure the quality of the intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness and turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and enabling end-to-end training. Extensive experiments on benchmarks such as VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long video understanding.
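## Usage

Since the checkpoint is built on Qwen2.5-VL-7B-Instruct, it should load through the standard Qwen2.5-VL interface in 🤗 Transformers. Below is a minimal single-turn inference sketch; the repo id, video path, and question are placeholders, and the multi-turn segment-selection loop described in the abstract is orchestrated by the project's own inference code rather than by a single `generate` call.

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

# Placeholder repo id; replace with the actual checkpoint location.
model_id = "Video-MTR"

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# A single user turn containing a video and a question about it.
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "path/to/long_video.mp4"},
        {"type": "text", "text": "What happens after the speaker opens the box?"},
    ],
}]

# Render the chat template, extract the video frames, and build model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=256)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```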
