---
configs:
- config_name: algopuzzle
  data_files:
  - split: train
    path: algopuzzle_train.parquet
- config_name: mmk12
  data_files:
  - split: train
    path: mmk12_train.parquet
- config_name: thinklite_vl_hard
  data_files:
  - split: train
    path: thinklite_vl_hard_train.parquet
- config_name: tqa_train
  data_files:
  - split: train
    path: tqa_train.parquet
- config_name: virl39k
  data_files:
  - split: train
    path: virl39k_train.parquet
- config_name: wemath_pro
  data_files:
  - split: train
    path: wemath_pro.parquet
- config_name: wemath_standard
  data_files:
  - split: train
    path: wemath_standard.parquet
- config_name: validation
  data_files:
  - split: val
    path: val.parquet
task_categories:
- image-text-to-text
tags:
- sft
- reinforcement-learning
license: cc-by-nc-4.0
---

# OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe

<div align="center">

[![Data](https://img.shields.io/badge/Data-0040A1?style=for-the-badge&logo=huggingface&logoColor=ffffff&labelColor)](https://huggingface.co/collections/lmms-lab/openmmreasoner)
[![Paper](https://img.shields.io/badge/Paper-000000?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2511.16334)
[![Project Page](https://img.shields.io/badge/Website-000000?style=for-the-badge&logo=google-chrome&logoColor=white)](https://evolvinglmms-lab.github.io/OpenMMReasoner/)
[![Github](https://img.shields.io/badge/Code-000000?style=for-the-badge&logo=github&logoColor=white)](https://github.com/EvolvingLMMs-Lab/OpenMMReasoner)
</div>

## Overview
Recent advancements in large reasoning models have fueled growing interest in extending such capabilities to multimodal domains. However, despite notable progress in visual reasoning, the lack of transparent and reproducible data curation and training strategies remains a major barrier to scalable research. In this work, we introduce OpenMMReasoner, a fully transparent two-stage recipe for multimodal reasoning spanning supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we construct an 874K-sample cold-start dataset with rigorous step-by-step validation, providing a strong foundation for reasoning capabilities. The subsequent RL stage leverages a 74K-sample dataset across diverse domains to further sharpen and stabilize these abilities, resulting in a more robust and efficient learning process. Extensive evaluations demonstrate that our training recipe not only surpasses strong baselines but also highlights the critical role of data quality and training design in shaping multimodal reasoning performance. Notably, our method achieves an 11.6% improvement over the Qwen2.5-VL-7B-Instruct baseline across nine multimodal reasoning benchmarks, establishing a solid empirical foundation for future large-scale multimodal reasoning research.

This repository contains the RL data used to train **[OpenMMReasoner-RL](https://huggingface.co/OpenMMReasoner/OpenMMReasoner-RL)**. We use **[verl](https://github.com/volcengine/verl)** as the training framework.

To use this dataset, first snapshot-download the entire repository to your local machine. You can then load the data with the example script provided in our GitHub repository, pointing it at your local data folder and the desired parquet file.
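After downloading, it can help to verify that every parquet file declared in this card's `configs` section actually reached your local folder. The sketch below uses only the standard library; the file list is copied from the YAML header above, and the helper name is illustrative:

```python
import os

# Parquet files declared in this repository's configs (see the YAML header).
EXPECTED_PARQUETS = [
    "algopuzzle_train.parquet",
    "mmk12_train.parquet",
    "thinklite_vl_hard_train.parquet",
    "tqa_train.parquet",
    "virl39k_train.parquet",
    "wemath_pro.parquet",
    "wemath_standard.parquet",
    "val.parquet",
]

def missing_parquets(data_folder: str) -> list[str]:
    """Return the expected parquet files that are absent from data_folder."""
    present = set(os.listdir(data_folder)) if os.path.isdir(data_folder) else set()
    return sorted(f for f in EXPECTED_PARQUETS if f not in present)
```

An empty return value means the snapshot is complete and the launch command below can reference every file.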

An example launch command would be:

```bash
DATA_FOLDER=/path/to/your/data

ray job submit --address="http://127.0.0.1:8265" \
    --runtime-env=verl/trainer/runtime_env.yaml \
    -- \
    bash -c "cd /path/to/your/verl/ && \
    python3 -m verl.trainer.main_ppo \
        algorithm.adv_estimator=${adv_estimator} \
        actor_rollout_ref.actor.policy_loss.loss_mode=${loss_mode} \
        data.train_files=[$DATA_FOLDER/algopuzzle_train.parquet,$DATA_FOLDER/mmk12_train.parquet,$DATA_FOLDER/puzzlevqa_train.parquet,$DATA_FOLDER/thinklite_vl_hard_train.parquet,$DATA_FOLDER/tqa_train.parquet,$DATA_FOLDER/virl39k_train.parquet,$DATA_FOLDER/wemath_standard.parquet,$DATA_FOLDER/wemath_pro.parquet] \
        data.val_files=${DATA_FOLDER}/val.parquet \
    ... rest of the command args ..."
```
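The `data.train_files` value in the command above is a bracketed, comma-separated list of absolute paths, which is tedious to maintain by hand. A minimal sketch for generating it (the helper name is illustrative, not part of verl):

```python
import os

def train_files_arg(data_folder: str, filenames: list[str]) -> str:
    """Build the bracketed path list that verl's data.train_files expects."""
    paths = [os.path.join(data_folder, name) for name in filenames]
    return "[" + ",".join(paths) + "]"
```

For example, `train_files_arg("/data", ["mmk12_train.parquet"])` returns `[/data/mmk12_train.parquet]`, which can be passed directly on the command line.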