---
base_model:
- OpenGVLab/InternVL3-8B
datasets:
- Code2Logic/GameQA-140K
- Code2Logic/GameQA-5K
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
---
***This model (GameQA-InternVL3-8B) results from training InternVL3-8B with GRPO solely on our [GameQA-5K](https://huggingface.co/datasets/Code2Logic/GameQA-5K) (sampled from the full [GameQA-140K](https://huggingface.co/datasets/Gabriel166/GameQA-140K) dataset).***
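For quick testing, here is a minimal inference sketch. It assumes the model keeps the standard InternVL3 `trust_remote_code` chat interface; the `load_image` helper is a simplified single-tile version of InternVL's reference preprocessing (which uses dynamic tiling), and the image path and question are placeholders:
```python
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "Code2Logic/GameQA-InternVL3-8B"

# ImageNet normalization constants used by InternVL's preprocessing.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def load_image(path, input_size=448):
    # Single 448x448 tile; enough for a smoke test, not full-quality tiling.
    transform = transforms.Compose([
        transforms.Resize((input_size, input_size)),
        transforms.ToTensor(),
        transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])
    image = Image.open(path).convert("RGB")
    return transform(image).unsqueeze(0).to(torch.bfloat16).cuda()

model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

pixel_values = load_image("game_screenshot.png")  # placeholder path
question = "<image>\nWhat is the next optimal move in this puzzle?"
response = model.chat(tokenizer, pixel_values, question,
                      dict(max_new_tokens=1024, do_sample=False))
print(response)
```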
# Evaluation Results on General Vision Benchmarks
<div align=center><img src="https://raw.githubusercontent.com/tongjingqi/Code2Logic/refs/heads/main/assets/evaluation_results_on_general_vision_benchmarks.png"></div>
***(Inference and evaluation configurations were identical for the original open-source models and our trained models.)***
# Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning
**Paper Abstract:**
Visual-language Chain-of-Thought (CoT) data resources are relatively scarce compared to text-only counterparts, limiting the improvement of reasoning capabilities in Vision Language Models (VLMs). Moreover, high-quality vision-language reasoning data is expensive and labor-intensive to annotate. To address this issue, we leverage a promising resource: game code, which naturally contains logical structures and state transition processes. Therefore, we propose Code2Logic, a novel game-code-driven approach for multimodal reasoning data synthesis. Our approach leverages Large Language Models (LLMs) to adapt game code, enabling automatic acquisition of reasoning processes and results through code execution. Using the Code2Logic approach, we developed the GameQA dataset to train and evaluate VLMs. GameQA is cost-effective and scalable, offers controllable difficulty gradation, and is diverse with 30 games and 158 tasks. Surprisingly, despite training solely on game data, VLMs demonstrated out-of-domain generalization, with Qwen2.5-VL-7B improving performance by 2.33% across 7 diverse vision-language benchmarks. Our code, dataset, and models are available at https://github.com/tongjingqi/Code2Logic.
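As a toy illustration of the Code2Logic idea (not the authors' actual pipeline; the game, wording, and field names here are invented), executable game logic lets the question, the step-by-step reasoning trace, and a verified answer all be produced programmatically:
```python
import random

def synthesize_qa(seed=0):
    """Toy Code2Logic-style sample: execute game rules in code to derive
    a question, a reasoning trace, and an answer that is correct by construction."""
    rng = random.Random(seed)
    # A 3x3 sliding puzzle state; 0 marks the blank tile.
    board = list(range(9))
    rng.shuffle(board)
    blank = board.index(0)
    row, col = divmod(blank, 3)
    # Legal moves follow directly from the game rules.
    moves = []
    if row > 0:
        moves.append("up")
    if row < 2:
        moves.append("down")
    if col > 0:
        moves.append("left")
    if col < 2:
        moves.append("right")
    reasoning = (
        f"The blank is at row {row}, column {col}, so it has "
        f"{len(moves)} in-bounds neighbors; the legal moves are: {', '.join(moves)}."
    )
    return {
        "question": f"Given the puzzle state {board}, how many legal moves are there?",
        "reasoning": reasoning,
        "answer": len(moves),
    }

print(synthesize_qa())
```
In the real pipeline the game state would also be rendered to an image, yielding a multimodal sample whose reasoning and answer are guaranteed consistent by code execution.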
This model was presented in the paper [Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning](https://huggingface.co/papers/2505.13886).
Code: [https://github.com/tongjingqi/Code2Logic](https://github.com/tongjingqi/Code2Logic)
This is the first work, to the best of our knowledge, that leverages ***game code*** to synthesize multimodal reasoning data for ***training*** VLMs. Furthermore, when trained with a GRPO strategy solely on **GameQA** (synthesized via our proposed **Code2Logic** approach), multiple cutting-edge open-source models exhibit significantly enhanced out-of-domain generalization.
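For background, GRPO dispenses with a learned value baseline: several responses are sampled per prompt, and each response's reward is normalized against its own group. A minimal sketch of that group-relative normalization (illustrative only; reward design and the clipped policy-gradient loss are omitted):
```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards, one row per prompt.
    Each response is scored relative to the other samples for the same prompt."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each (1.0 = correct answer).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0, 1.0]])
print(group_relative_advantages(rewards))
```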
[[🤗 GameQA-140K Dataset](https://huggingface.co/datasets/Gabriel166/GameQA-140K)] [[🤗 GameQA-5K Dataset](https://huggingface.co/datasets/Code2Logic/GameQA-5K)] [[🤗 GameQA-InternVL3-8B](https://huggingface.co/Code2Logic/GameQA-InternVL3-8B)] [[🤗 GameQA-Qwen2.5-VL-7B](https://huggingface.co/Code2Logic/GameQA-Qwen2.5-VL-7B)] [[🤗 GameQA-LLaVA-OV-7B](https://huggingface.co/Code2Logic/GameQA-llava-onevision-qwen2-7b-ov-hf)]
<div align=center><img src="https://raw.githubusercontent.com/tongjingqi/Code2Logic/refs/heads/main/assets/categorized_30_games_images.png"></div>
## News
* We've open-sourced the ***three*** models trained with GRPO on GameQA on [Hugging Face](https://huggingface.co/Code2Logic).