# ViGaL: Visual Game Learning

## Model Overview
We present **Visual Game Learning (ViGaL)**, a novel post-training paradigm in which multimodal large language models (MLLMs) develop out-of-domain generalization of multimodal reasoning by playing arcade-like games.

**ViGaL-7B** demonstrates that training a 7B-parameter MLLM via reinforcement learning on simple arcade-like games such as Snake significantly enhances its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline benchmarks such as MMMU, **without seeing any worked solutions, equations, or diagrams during RL**, suggesting that the model acquires transferable reasoning skills.

## Dataset Usage

### Preparing the Training Data

After unzipping the dataset, please check the `rotation` subfolder.

#### Converting Image Paths

If you plan to train, you'll first need to process the JSON Lines metadata file in the `rotation` subfolder. The training framework currently supports only absolute image paths, while the metadata file stores relative paths, so an absolute path prefix must be prepended to each image path.

We provide a simple utility script `add_root_prefix.py` to convert relative paths to absolute paths. Run this script to update the metadata file before training:

```bash
python add_root_prefix.py --input rotation/metadata.jsonl --output rotation/metadata_absolute.jsonl --root /path/to/your/dataset
```
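
For reference, here is a minimal sketch of what such a conversion step might look like, independent of the provided script. It assumes each JSONL record stores its relative image path under an `image` key; that field name is an assumption, so please consult `add_root_prefix.py` for the actual behavior.

```python
import argparse
import json
import os

def add_root_prefix(input_path: str, output_path: str, root: str) -> None:
    """Prepend `root` to the relative image path in each JSONL record."""
    with open(input_path, "r", encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            # Assumption: the relative image path is stored under the "image" key.
            record["image"] = os.path.join(root, record["image"])
            fout.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    parser.add_argument("--root", required=True)
    args = parser.parse_args()
    add_root_prefix(args.input, args.output, args.root)
```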

### Running Training

To run training, please follow the instructions in this README. You can also refer to [https://github.com/ModalMinds/MM-EUREKA/tree/qwen](https://github.com/ModalMinds/MM-EUREKA/tree/qwen) for additional information, since our training is built on the same codebase.

## Resources

For details of our approach and performance comparisons, please see our [paper](https://arxiv.org/abs/2506.08011).

For details of training and evaluation, please see our [code repo](https://github.com/yunfeixie233/ViGaL).

| [**🚀 Project Page**](https://yunfeixie233.github.io/ViGaL/) | [**📖 Paper**](https://arxiv.org/abs/2506.08011) | [**🔗 GitHub**](https://github.com/yunfeixie233/ViGaL) | [**🤗 Training Data**](https://huggingface.co/yunfeixie/vigal_data) | [**🤗 Model**](https://huggingface.co/yunfeixie/ViGaL-7B) |


## Citation

If you find this model useful, please cite our work:

```bibtex
@article{xie2025play,
  title     = {Play to Generalize: Learning to Reason Through Game Play},
  author    = {Xie, Yunfei and Ma, Yinsong and Lan, Shiyi and Yuille, Alan and Xiao, Junfei and Wei, Chen},
  journal   = {arXiv preprint arXiv:2506.08011},
  year      = {2025},
}
```