|
|
--- |
|
|
base_model: |
|
|
- Qwen/Qwen2.5-VL-7B-Instruct |
|
|
language: |
|
|
- en |
|
|
license: apache-2.0 |
|
|
pipeline_tag: image-text-to-text |
|
|
tags: |
|
|
- transformers |
|
|
- multimodal |
|
|
library_name: transformers |
|
|
--- |
|
|
|
|
|
## Model Overview |
|
|
|
|
|
We present **Visual Game Learning (ViGaL)**, a novel post-training paradigm in which multimodal large language models (MLLMs) acquire out-of-domain generalization of multimodal reasoning by playing arcade-style games.
|
|
|
|
|
**ViGaL-7B** demonstrates that training a 7B-parameter MLLM via reinforcement learning on simple arcade-style games such as Snake significantly improves its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline benchmarks such as MMMU, **without seeing any worked solutions, equations, or diagrams during RL**, suggesting that the model acquires transferable reasoning skills.
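
## Usage

ViGaL-7B is fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct, so it should load with the standard Qwen2.5-VL classes in `transformers`. The snippet below is a minimal inference sketch under that assumption; the image URL and prompt are placeholders, and `qwen_vl_utils` is the helper package published alongside the Qwen2.5-VL model family.

```python
# Minimal inference sketch, assuming ViGaL-7B keeps the Qwen2.5-VL
# architecture and chat template of its base model.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # helper shipped with Qwen2.5-VL

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "yunfeixie/ViGaL-7B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("yunfeixie/ViGaL-7B")

# A single-image math question; the URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/geometry_problem.png"},
            {"type": "text", "text": "Solve the problem in the image step by step."},
        ],
    }
]

# Build the chat-formatted prompt and pack the vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```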
|
|
|
|
|
## Resources |
|
|
|
|
|
For details of our approach and performance comparisons, please see our [paper](https://arxiv.org/abs/2506.08011).
|
|
|
|
|
For training and evaluation details, please see our [code repository](https://github.com/yunfeixie233/ViGaL).
|
|
|
|
|
| [**Project Page**](https://yunfeixie233.github.io/ViGaL/) | [**Paper**](https://arxiv.org/abs/2506.08011) | [**GitHub**](https://github.com/yunfeixie233/ViGaL) | [**🤗 Training Data**](https://huggingface.co/yunfeixie/vigal_data) | [**🤗 Model**](https://huggingface.co/yunfeixie/ViGaL-7B) |
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you find this model useful, please cite our work:
|
|
```bibtex |
|
|
@article{xie2025play, |
|
|
title = {Play to Generalize: Learning to Reason Through Game Play}, |
|
|
author = {Xie, Yunfei and Ma, Yinsong and Lan, Shiyi and Yuille, Alan and Xiao, Junfei and Wei, Chen}, |
|
|
journal = {arXiv preprint arXiv:2506.08011}, |
|
|
year = {2025}, |
|
|
} |
|
|
``` |