---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- transformers
- multimodal
library_name: transformers
---

## Model Overview

We present **Visual Game Learning (ViGaL)**, a novel post-training paradigm in which multimodal large language models (MLLMs) acquire out-of-domain generalization of multimodal reasoning by playing arcade-style games.

**ViGaL-7B** demonstrates that training a 7B-parameter MLLM via reinforcement learning on simple arcade-style games such as Snake significantly enhances its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline benchmarks such as MMMU, **without seeing any worked solutions, equations, or diagrams during RL**, suggesting that the model acquires transferable reasoning skills.

## Resources

For details of our approach and performance comparisons, please see our [paper](https://arxiv.org/abs/2506.08011).

For details of training and evaluation, please see our [code repo](https://github.com/yunfeixie233/ViGaL).

| [**๐Ÿš€ Project Page**](https://yunfeixie233.github.io/ViGaL/) | [**๐Ÿ“– Paper**](https://arxiv.org/abs/2506.08011) | [**๐Ÿ”— GitHub**](https://github.com/yunfeixie233/ViGaL) | [**๐Ÿค— Training Data**](https://huggingface.co/yunfeixie/vigal_data) | [**๐Ÿค— Model**](https://huggingface.co/yunfeixie/ViGaL-7B) |
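
## Quick Start

Since ViGaL-7B is fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct, the sketch below assumes it keeps the standard Qwen2.5-VL inference interface in `transformers` (together with the `qwen-vl-utils` helper package). The image URL and prompt are placeholders, not examples from our training or evaluation data.

```python
# pip install transformers accelerate qwen-vl-utils
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # packs image/video inputs from chat messages

# Assumption: ViGaL-7B loads with the same classes as its Qwen2.5-VL base model.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "yunfeixie/ViGaL-7B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("yunfeixie/ViGaL-7B")

# Placeholder image and question; swap in your own multimodal reasoning query.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/geometry_problem.png"},
            {"type": "text", "text": "Solve the problem shown in the image step by step."},
        ],
    }
]

# Build the chat prompt and collect the vision inputs referenced in `messages`.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```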


## Citation

If you find this model useful, please cite our work:
```bibtex
@article{xie2025play,
  title     = {Play to Generalize: Learning to Reason Through Game Play},
  author    = {Xie, Yunfei and Ma, Yinsong and Lan, Shiyi and Yuille, Alan and Xiao, Junfei and Wei, Chen},
  journal   = {arXiv preprint arXiv:2506.08011},
  year      = {2025},
}
```