# ViGaL: Visual Game Learning
## Model Overview
We present **Visual Game Learning (ViGaL)**, a novel post-training paradigm where multimodal large language models (MLLMs) develop out-of-domain generalization of multimodal reasoning through playing arcade-like games.
**ViGaL-7B** demonstrates that training a 7B-parameter MLLM with reinforcement learning on simple arcade-style games such as Snake significantly improves its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline benchmarks such as MMMU, **without seeing any worked solutions, equations, or diagrams during RL**. This suggests the model acquires transferable reasoning skills from game play.
## Dataset Usage
### Preparing the Training Data
After unzipping the dataset, please check the `rotation` subfolder.
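For reference, a minimal sketch of the unpacking step, assuming the downloaded archive is named `vigal_data.zip` (substitute the actual file name):

```bash
# Hypothetical archive name -- replace with the file you actually downloaded.
unzip vigal_data.zip -d /path/to/your/dataset

# The rotation subfolder contains the JSON Lines metadata and game images.
ls /path/to/your/dataset/rotation
```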
#### Converting Image Paths
If you plan to train on this data, you first need to process the JSON Lines metadata file in the `rotation` subfolder. The training framework currently supports only absolute image paths, while the metadata file stores relative paths, so the absolute dataset root must be prepended to each path.
We provide a simple utility script `add_root_prefix.py` to convert relative paths to absolute paths. Run this script to update the metadata file before training:
```bash
python add_root_prefix.py --input rotation/metadata.jsonl --output rotation/metadata_absolute.jsonl --root /path/to/your/dataset
```
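If you want to inspect or adapt the conversion, the following is a minimal sketch of the same idea. It assumes each JSONL record stores its relative image path under an `image` key; that key name is an assumption, so check your metadata's actual field names first. The `add_root_prefix.py` script shipped with the dataset remains the authoritative version.

```python
import argparse
import json
import os

def main():
    parser = argparse.ArgumentParser(
        description="Prepend an absolute root to relative image paths in a JSONL file."
    )
    parser.add_argument("--input", required=True, help="Input metadata .jsonl file")
    parser.add_argument("--output", required=True, help="Output .jsonl file with absolute paths")
    parser.add_argument("--root", required=True, help="Absolute path to the dataset root")
    args = parser.parse_args()

    root = os.path.abspath(args.root)
    with open(args.input, "r", encoding="utf-8") as fin, \
         open(args.output, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            # Assumed key: each record keeps its relative path under "image".
            # Inspect your metadata and adjust if the key differs.
            if "image" in record:
                record["image"] = os.path.join(root, record["image"])
            fout.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    main()
```

After conversion, spot-check a few lines of `metadata_absolute.jsonl` to confirm the paths resolve on disk.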
### Running Training
To run training, follow the instructions in the training codebase's README. You can also refer to [https://github.com/ModalMinds/MM-EUREKA/tree/qwen](https://github.com/ModalMinds/MM-EUREKA/tree/qwen) for additional information; ViGaL uses the same codebase.
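A sketch of getting the training code in place (the checkout commands are standard; the launch script names inside the repo are documented there, not assumed here):

```bash
# Check out the qwen branch of MM-EUREKA, which ViGaL's training is based on.
git clone -b qwen https://github.com/ModalMinds/MM-EUREKA.git
cd MM-EUREKA

# Point the repo's training launch script at the converted metadata file,
# e.g. rotation/metadata_absolute.jsonl, following the repo's README.
```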
## Resources
For details of our approach and performance comparisons, see our [paper](https://arxiv.org/abs/2506.08011).
For training and evaluation details, see our [code repo](https://github.com/yunfeixie233/ViGaL).
| [**πŸš€ Project Page**](https://yunfeixie233.github.io/ViGaL/) | [**πŸ“– Paper**](https://arxiv.org/abs/2506.08011) | [**πŸ”— GitHub**](https://github.com/yunfeixie233/ViGaL) | [**πŸ€— Training Data**](https://huggingface.co/yunfeixie/vigal_data) | [**πŸ€— Model**](https://huggingface.co/yunfeixie/ViGaL-7B) |
## Citation
If you find this model useful, please cite our work:
```bibtex
@article{xie2025play,
  title   = {Play to Generalize: Learning to Reason Through Game Play},
  author  = {Xie, Yunfei and Ma, Yinsong and Lan, Shiyi and Yuille, Alan and Xiao, Junfei and Wei, Chen},
  journal = {arXiv preprint arXiv:2506.08011},
  year    = {2025},
}
```