Improve model card: add metadata and project links
#1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,17 +1,36 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: video-text-to-text
+tags:
+- multimodal
+- agent
+- reinforcement-learning
 ---
 
-
+# PyVision-Video-7B-RL
 
-
+[**PyVision-RL: Forging Open Agentic Vision Models via RL**](https://huggingface.co/papers/2602.20739)
+
+PyVision-Video-7B-RL is an open-weight agentic multimodal model post-trained from [Qwen2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using reinforcement learning.
+
+- **Project Page:** [agent-x.space/pyvision-rl](https://agent-x.space/pyvision-rl/)
+- **GitHub Repository:** [agents-x-project/PyVision-RL](https://github.com/agents-x-project/PyVision-RL)
+- **Paper:** [arXiv:2602.20739](https://arxiv.org/abs/2602.20739)
+
+## Overview
+
+Reinforcement learning for agentic multimodal models often suffers from interaction collapse, where models learn to cut back on tool usage and multi-turn reasoning. **PyVision-RL** is a framework designed to stabilize training and sustain interaction by combining an oversampling-filtering-ranking rollout strategy with an accumulative tool reward.
+
+**PyVision-Video** specifically addresses the challenge of video reasoning through **on-demand context construction**: it selectively samples task-relevant frames during the reasoning process, significantly reducing visual token usage while maintaining high performance on complex multimodal agentic tasks.
+
+## Citation
 
 ```bibtex
-@article{
+@article{zhao2026pyvisionrl,
 title={PyVision-RL: Forging Open Agentic Vision Models via RL},
 author={Zhao, Shitian and Lin, Shaoheng and Li, Ming and Zhang, Haoquan and Peng, Wenshuo and Zhang, Kaipeng and Wei, Chen},
-journal={
-year={2026}
+journal={arXiv preprint arXiv:2602.20739},
+year={2026},
 }
-```
+```
-
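The accumulative tool reward mentioned in the card's overview can be illustrated with a few lines of code. This is a hedged sketch of the general idea only, not the paper's formula: the `bonus` and `cap` parameters are assumptions, and PyVision-RL's actual reward shaping may differ.

```python
# Illustrative sketch of an accumulative tool reward (assumed shape, NOT the
# paper's exact formula): each tool call earns a small bonus on top of the
# task reward, so the policy is paid for sustained multi-turn tool use
# rather than collapsing to zero-interaction answers.

def accumulative_tool_reward(task_reward: float, tool_calls: int,
                             bonus: float = 0.25, cap: float = 1.0) -> float:
    """Task reward plus a per-call bonus, capped so tool spam
    cannot dominate task correctness."""
    return task_reward + min(bonus * tool_calls, cap)

print(accumulative_tool_reward(1.0, 2))   # 1.5
print(accumulative_tool_reward(1.0, 10))  # 2.0 (bonus capped)
print(accumulative_tool_reward(0.0, 3))   # 0.75 -- tool use alone is not enough
```

The cap is the key design choice in this sketch: without it, a collapsing policy could be replaced by the opposite failure mode, where the agent spams tool calls for reward rather than solving the task.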
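The on-demand context construction described for PyVision-Video (sample only task-relevant frames) can also be sketched minimally. The relevance scores below are a stand-in for whatever signal the agent uses to rank frames, and `sample_relevant_frames` is a hypothetical helper, not PyVision-Video's actual API.

```python
# Minimal sketch of on-demand frame selection (hypothetical helper, not the
# PyVision-Video API): rather than encoding every frame of a clip, keep only
# the `budget` highest-relevance frames, returned in temporal order so the
# visual context stays chronologically coherent.

def sample_relevant_frames(relevance_scores, budget):
    # rank frame indices by descending relevance ...
    ranked = sorted(range(len(relevance_scores)),
                    key=lambda i: relevance_scores[i], reverse=True)
    # ... then restore temporal order among the selected frames
    return sorted(ranked[:budget])

# toy 8-frame clip: the scorer favors frames 2, 5 and 6
scores = [0.1, 0.2, 0.9, 0.1, 0.3, 0.8, 0.7, 0.2]
print(sample_relevant_frames(scores, budget=3))  # [2, 5, 6]
```

Since each retained frame costs a roughly fixed number of visual tokens in a VLM, shrinking an 8-frame context to 3 frames cuts visual token usage nearly proportionally, which is the efficiency claim the card makes.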