nielsr (HF Staff) committed
Commit c35d5ea · verified · 1 parent: 0b7bc9e

Improve model card: Update pipeline tag, add library name, and enrich content for Video-MTR


This PR significantly improves the model card for the `Video-MTR` model by:

* **Updating the `pipeline_tag`**: Changed from `visual-question-answering` to `video-text-to-text`. This new tag more accurately reflects the model's capabilities in long video understanding and multi-turn reasoning for question comprehension, enhancing its discoverability on the Hub (https://huggingface.co/models?pipeline_tag=video-text-to-text).
* **Adding `library_name: transformers`**: Evidence from `config.json` (e.g., `architectures: ["Qwen2_5_VLForConditionalGeneration"]`, `transformers_version: "4.49.0"`) confirms compatibility with the Hugging Face `transformers` library, enabling automated "how to use" code snippets for users (see the sketch after this list).
* **Expanding the model card content**:
  * Added the full paper title as the main heading.
  * Included the complete paper abstract to provide detailed insights into the model's methodology and contributions.
  * The paper link in the content remains `https://arxiv.org/abs/2508.20478`, as per instructions.
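
Since the model card itself ships no usage snippet, the auto-generated "how to use" code that `library_name: transformers` enables would look roughly like the sketch below. The repo id `your-org/Video-MTR` is a placeholder, and the prompt format is assumed from standard Qwen2.5-VL conventions rather than taken from this model's documentation:

```python
# Hedged sketch: loading the checkpoint with transformers >= 4.49.0, based on
# the Qwen2_5_VLForConditionalGeneration architecture listed in config.json.
# "your-org/Video-MTR" is a placeholder Hub repo id, not the confirmed path.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "your-org/Video-MTR",  # placeholder: replace with the actual repo id
    torch_dtype="auto",    # inherit the checkpoint's dtype where possible
    device_map="auto",     # requires `accelerate` for automatic placement
)
processor = AutoProcessor.from_pretrained("your-org/Video-MTR")

# Video inputs follow the usual Qwen2.5-VL chat-template conventions; the
# multi-turn segment-selection prompting that Video-MTR adds is defined by
# the paper and is not reproduced here.
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/video.mp4"},
        {"type": "text", "text": "What happens after the chef plates the dish?"},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```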

Please note that the provided GitHub repository and project page URLs were found to be for a different project ("UniMuMo") and have therefore been omitted from this model card to maintain accuracy. No sample usage was included as no relevant code snippets were found.

Files changed (1): README.md (+14, -5)
README.md CHANGED:

```diff
@@ -1,11 +1,20 @@
 ---
-license: apache-2.0
-language:
-- en
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
-pipeline_tag: visual-question-answering
+language:
+- en
+license: apache-2.0
+pipeline_tag: video-text-to-text
+library_name: transformers
 ---
+
+# Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding
+
+This model is a checkpoint for **Video-MTR**, presented in the paper [Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding](https://arxiv.org/abs/2508.20478).
+
+## Abstract
+Long-form video understanding, characterized by long-range temporal dependencies and multiple events, remains a challenge. Existing methods often rely on static reasoning or external visual-language models (VLMs), which face issues like complexity and sub-optimal performance due to the lack of end-to-end training. In this paper, we propose Video-MTR, a reinforced multi-turn reasoning framework designed to enable iterative key video segment selection and question comprehension. Unlike traditional video reasoning pipelines, which generate predictions in a single turn, Video-MTR performs reasoning in multiple turns, selecting video segments progressively based on the evolving understanding of previously processed segments and the current question. This iterative process allows for a more refined and contextually aware analysis of the video. To ensure the quality of the intermediate reasoning process, we introduce a novel gated bi-level reward system, combining trajectory-level rewards based on answer correctness and turn-level rewards emphasizing frame-query relevance. This system optimizes both video segment selection and question comprehension, eliminating the need for external VLMs and allowing end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU, and EgoSchema demonstrate that Video-MTR outperforms existing methods in both accuracy and efficiency, advancing the state-of-the-art in long video understanding.
+
 ## References
 
-* [Model Paper](https://arxiv.org/abs/2508.20478)
+* [Model Paper](https://arxiv.org/abs/2508.20478)
```
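
As a reading aid for the abstract in the diff above, here is a toy sketch of what a "gated bi-level reward" could compute. The function name, the averaging of turn-level scores, and the gating rule are illustrative assumptions, not the paper's implementation:

```python
# Toy illustration of a gated bi-level reward (assumed form, not the paper's code).
def gated_bilevel_reward(answer_correct: bool, turn_relevances: list[float]) -> float:
    """Combine a trajectory-level reward (answer correctness) with
    turn-level rewards (frame-query relevance), gated on correctness."""
    trajectory_reward = 1.0 if answer_correct else 0.0
    # Mean relevance of the segments selected across reasoning turns.
    turn_reward = sum(turn_relevances) / len(turn_relevances) if turn_relevances else 0.0
    # Assumed gate: turn-level credit only flows when the final answer is
    # correct, so irrelevant segment picks cannot be rewarded on their own.
    return trajectory_reward + (turn_reward if answer_correct else 0.0)

# Example: a correct answer with moderately relevant segment selections.
print(gated_bilevel_reward(True, [0.8, 0.6, 0.9]))  # ≈ 1.77
```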