# VTool-R1

Model weights for the paper "VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use".

[![Paper](https://img.shields.io/badge/paper-5f16a8?style=for-the-badge&logo=arxiv&logoColor=white&color=FF5F05)](https://arxiv.org/pdf/2505.19255) [![HOMEPAGE](https://img.shields.io/badge/HOMEPAGE-3858bf?style=for-the-badge&logo=homepage&logoColor=white&color=13294B)](https://vtool-r1.github.io/) [![Weights](https://img.shields.io/badge/Model%20Weights-63cad3?style=for-the-badge&logo=huggingface&logoColor=white&color=FF5F05)](https://huggingface.co/VTOOL)

## Chart Models

- [Chart 3B](https://huggingface.co/VTOOL/VTOOL-R1-3B-V3-F)
- [Chart 7B](https://huggingface.co/VTOOL/VTOOL-R1-7B-F)
- [Chart 32B](https://huggingface.co/VTOOL/VTOOL-R1-32B-F)

## Table Models

We are training improved versions of our Table models; they will be available very soon.

- [Table 3B (Soon)]()
- [Table 7B (Soon)]()
- [Table 32B (Soon)]()

## Citation

If you find our project helpful, please cite:
```bibtex
@misc{wu2025vtoolr1vlmslearnthink,
      title={VTool-R1: VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use},
      author={Mingyuan Wu and Jingcheng Yang and Jize Jiang and Meitang Li and Kaizhuo Yan and Hanchao Yu and Minjia Zhang and Chengxiang Zhai and Klara Nahrstedt},
      year={2025},
      eprint={2505.19255},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2505.19255},
}
```