nielsr (HF Staff) committed · Commit 0b5bd7d · verified · 1 Parent(s): a070661

Improve model card metadata and documentation


Hi! I'm Niels from the community science team at Hugging Face. I've updated your model card to include:
- Metadata for `pipeline_tag` (`video-text-to-text`) and `library_name` (`transformers`).
- Links to the official GitHub repository and project homepage.
- A brief description of the model's capabilities based on the research paper.

This will help improve the discoverability of your model on the Hub.

Files changed (1): README.md (+18 -2)
README.md CHANGED
@@ -1,12 +1,28 @@
  ---
- license: apache-2.0
  base_model:
  - Qwen/Qwen2.5-VL-7B-Instruct
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: video-text-to-text
  ---

+ # PyVision-Video-7B-SFT
+
  [PyVision-RL: Forging Open Agentic Vision Models via RL](https://arxiv.org/abs/2602.20739)

- This is PyVision-Video-7B-SFT, post trained from Qwen2.5-VL-7B.
+ This is **PyVision-Video-7B-SFT**, post-trained from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
+
+ - **Project Page:** [https://agent-x.space/pyvision-rl/](https://agent-x.space/pyvision-rl/)
+ - **Repository:** [https://github.com/agents-x-project/PyVision-RL](https://github.com/agents-x-project/PyVision-RL)
+ - **Paper:** [arXiv:2602.20739](https://arxiv.org/abs/2602.20739)
+
+ ## Model Description
+ PyVision-Video is part of the PyVision-RL framework, which aims to stabilize Reinforcement Learning (RL) training for open-weight multimodal models to sustain agentic interaction.
+
+ For video reasoning, PyVision-Video employs an **on-demand context construction** strategy. It selectively samples task-relevant frames during the reasoning process, which significantly reduces visual token usage while maintaining strong performance on complex video understanding tasks. This model serves as the Supervised Fine-Tuning (SFT) checkpoint before RL training.
+
+ ## Citation
+ If you find this work useful, please cite the following paper:

  ```bibtex
  @article{pyvisionrl2026,
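The metadata keys this commit adds (`license`, `library_name`, `pipeline_tag`, `base_model`) live in the README's YAML front matter, which the Hub reads to index the model. As a quick sanity check of the shape shown in the diff, the flat key/value-plus-list block can be parsed with a few lines of stdlib Python. This is a minimal sketch: `parse_front_matter` is a hypothetical helper written for illustration, not part of any Hub or `huggingface_hub` tooling, and it assumes exactly the flat structure above.

```python
# The front matter block as it appears after this commit.
FRONT_MATTER = """\
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
"""

def parse_front_matter(text):
    """Hypothetical minimal parser for a flat YAML mapping whose only
    nesting is a single-level list of strings (as in this model card)."""
    meta, current_key = {}, None
    for line in text.splitlines():
        if line.startswith("- ") and current_key:
            # List item belonging to the most recent key (e.g. base_model).
            meta.setdefault(current_key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            current_key = key.strip()
            if value.strip():
                meta[current_key] = value.strip()
    return meta

meta = parse_front_matter(FRONT_MATTER)
print(meta["pipeline_tag"])  # video-text-to-text
```

A full model card should be parsed with a real YAML library (e.g. PyYAML) rather than a hand-rolled parser like this; the sketch only verifies that the keys added in this commit land where the Hub expects them.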