yunfeixie committed
Commit 3cb7511 · verified · 1 Parent(s): f3f05b0

Create README.md

Files changed (1)
  1. README.md +47 -0

README.md ADDED

# ViGaL: Visual Game Learning

## Model Overview
We present **Visual Game Learning (ViGaL)**, a novel post-training paradigm in which multimodal large language models (MLLMs) develop out-of-domain generalization of multimodal reasoning by playing arcade-like games.

**ViGaL-7B** demonstrates that training a 7B-parameter MLLM with reinforcement learning on simple arcade-like games such as Snake significantly improves its downstream performance on multimodal math benchmarks such as MathVista and on multi-discipline benchmarks such as MMMU, **without the model seeing any worked solutions, equations, or diagrams during RL**. This suggests that the games teach transferable reasoning skills.

## Dataset Usage

### Preparing the Training Data

After unzipping the dataset, please check the `rotation` subfolder.

#### Converting Image Paths

Before training, you need to process the JSONL metadata file in the `rotation` subfolder. The training framework currently supports only absolute image paths, while the metadata file stores relative paths, so an absolute path prefix must be prepended to each image path.

We provide a simple utility script, `add_root_prefix.py`, that converts the relative paths to absolute paths. Run it to update the metadata file before training:

```bash
python add_root_prefix.py --input rotation/metadata.jsonl --output rotation/metadata_absolute.jsonl --root /path/to/your/dataset
```
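
For reference, here is a minimal sketch of what such a conversion step might look like. This is not the actual `add_root_prefix.py`; it assumes each JSONL record stores its relative image path under an `image` key, so adjust the field name if your metadata differs:

```python
import argparse
import json
import os


def add_root_prefix(input_path: str, output_path: str, root: str) -> None:
    """Rewrite relative image paths in a JSONL metadata file as absolute paths."""
    with open(input_path, "r", encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            # Assumption: the relative image path lives under the "image" key.
            if "image" in record and not os.path.isabs(record["image"]):
                record["image"] = os.path.join(root, record["image"])
            fout.write(json.dumps(record, ensure_ascii=False) + "\n")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Prepend an absolute root to relative image paths in a JSONL metadata file."
    )
    parser.add_argument("--input", required=True, help="Input JSONL metadata file")
    parser.add_argument("--output", required=True, help="Output JSONL file with absolute paths")
    parser.add_argument("--root", required=True, help="Absolute path to the unzipped dataset root")
    args = parser.parse_args()
    add_root_prefix(args.input, args.output, args.root)
```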
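Before launching training, it can also be worth a quick sanity check that the converted metadata points at files that actually exist (again assuming an `image` field; adjust the key if your metadata differs):

```python
import json
import os

# Count entries whose image path is missing, relative, or does not exist on disk.
bad = 0
with open("rotation/metadata_absolute.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        path = json.loads(line).get("image")  # assumed field name
        if path is None or not os.path.isabs(path) or not os.path.exists(path):
            bad += 1

print("All image paths look good." if bad == 0 else f"{bad} problematic entries found.")
```
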
### Running Training

To run the training, please follow the instructions in this README. You can also refer to [https://github.com/ModalMinds/MM-EUREKA/tree/qwen](https://github.com/ModalMinds/MM-EUREKA/tree/qwen) for additional information, since we use the same codebase.

## Resources

For details of our approach and performance comparison, please see our [paper](https://arxiv.org/abs/2506.08011).

For details of training and evaluation, please see our [code repo](https://github.com/yunfeixie233/ViGaL).

| [**🚀 Project Page**](https://yunfeixie233.github.io/ViGaL/) | [**📖 Paper**](https://arxiv.org/abs/2506.08011) | [**🔗 GitHub**](https://github.com/yunfeixie233/ViGaL) | [**🤗 Training Data**](https://huggingface.co/yunfeixie/vigal_data) |

## Citation

If you find this model useful, please cite our work:

```bibtex
@article{xie2025play,
  title   = {Play to Generalize: Learning to Reason Through Game Play},
  author  = {Xie, Yunfei and Ma, Yinsong and Lan, Shiyi and Yuille, Alan and Xiao, Junfei and Wei, Chen},
  journal = {arXiv preprint arXiv:2506.08011},
  year    = {2025},
}
```