stzhao and nielsr (HF Staff) committed on
Commit 84a6975 · Parent(s): 5f9727a

Improve model card and add metadata (#1)

- Improve model card and add metadata (1a1049ca6b0445e4731c2697a6fc1954b2717577)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +26 -1
README.md CHANGED
@@ -1,9 +1,34 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: image-text-to-text
+base_model: Qwen/Qwen2.5-VL-7B-Instruct
+tags:
+- multimodal
+- agent
+- reinforcement-learning
+- qwen
 ---
+
+# PyVision-Image-7B-RL
+
 [PyVision-RL: Forging Open Agentic Vision Models via RL](https://arxiv.org/abs/2602.20739)
 
-This is PyVision-Image-7B-RL, post trained from Qwen2.5-VL-7B.
+This is **PyVision-Image-7B-RL**, a multimodal agentic vision model post-trained from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using the PyVision-RL reinforcement learning framework.
+
+- **Project Page:** [https://agent-x.space/pyvision-rl/](https://agent-x.space/pyvision-rl/)
+- **Repository:** [https://github.com/agents-x-project/PyVision-RL](https://github.com/agents-x-project/PyVision-RL)
+- **Paper:** [https://arxiv.org/abs/2602.20739](https://arxiv.org/abs/2602.20739)
+
+## Description
+
+Reinforcement learning for agentic multimodal models often suffers from "interaction collapse," where models learn to reduce tool usage and multi-turn reasoning. PyVision-RL is a framework designed to stabilize training and sustain interaction using an oversampling-filtering-ranking rollout strategy combined with an accumulative tool reward.
+
+PyVision-Image-7B-RL is specifically optimized for image understanding tasks and sustained multi-turn tool interaction, demonstrating strong performance and efficiency for scalable multimodal agents.
+
+## Citation
+
+If you find this work useful, please cite the following paper:
 
 ```bibtex
 @article{pyvisionrl2026,
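
The updated card stops short of a usage snippet. Since the new metadata declares `pipeline_tag: image-text-to-text` with `base_model: Qwen/Qwen2.5-VL-7B-Instruct`, inference would presumably follow the standard Qwen2.5-VL pattern in `transformers`. A minimal sketch under those assumptions — the repo id `stzhao/PyVision-Image-7B-RL` and the `build_messages`/`generate_answer` helpers are illustrative, not from the commit:

```python
# Hedged inference sketch for PyVision-Image-7B-RL.
# Assumptions (not stated in the commit): the repo id below, and that the
# checkpoint keeps the Qwen2.5-VL architecture, as the card's
# `base_model: Qwen/Qwen2.5-VL-7B-Instruct` metadata suggests.

MODEL_ID = "stzhao/PyVision-Image-7B-RL"  # assumed repo id


def build_messages(image_url: str, question: str) -> list:
    """Build one Qwen2.5-VL-style chat turn pairing an image with a question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


def generate_answer(image_url: str, question: str, max_new_tokens: int = 256) -> str:
    """Load the model and answer a question about an image (downloads the weights)."""
    import requests
    import torch
    from PIL import Image
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    messages = build_messages(image_url, question)
    # Render the chat template to text, then hand the fetched image to the processor.
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image = Image.open(requests.get(image_url, stream=True).raw)
    inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(new_tokens, skip_special_tokens=True)[0]
```

Since the card emphasizes multi-turn tool interaction, a single `generate` call only exercises the base VQA behavior; the agentic loop presumably lives in the linked PyVision-RL repository.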