arxiv:2603.21289

When Models Judge Themselves: Unsupervised Self-Evolution for Multimodal Reasoning

Published on Mar 22

Abstract

A self-evolution training framework for multimodal reasoning uses unsupervised learning with self-consistency signals and group-relative policy optimization to improve performance without labeled data.

AI-generated summary

Recent progress in multimodal large language models has led to strong performance on reasoning tasks, but these improvements largely rely on high-quality annotated data or teacher-model distillation, both of which are costly and difficult to scale. To address this, we propose an unsupervised self-evolution training framework for multimodal reasoning that achieves stable performance improvements without using human-annotated answers or external reward models. For each input, we sample multiple reasoning trajectories and jointly model their within-group structure. We use the Actor's self-consistency signal as a training prior, and introduce a bounded Judge-based modulation to continuously reweight trajectories of different quality. We further model the modulated scores as a group-level distribution and convert absolute scores into relative advantages within each group, enabling more robust policy updates. Trained with Group Relative Policy Optimization (GRPO) on unlabeled data, our method consistently improves reasoning performance and generalization on five mathematical reasoning benchmarks, offering a scalable path toward self-evolving multimodal models. The code is available at https://github.com/OPPO-Mente-Lab/LLM-Self-Judge.
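The conversion of absolute per-trajectory scores into within-group relative advantages can be sketched as follows. This is a minimal illustration of GRPO-style group normalization, not the paper's exact implementation; the function name and the zero-variance handling are assumptions.

```python
import numpy as np

def group_relative_advantages(scores):
    """Normalize a group of absolute trajectory scores into relative
    advantages using the group mean and standard deviation, so policy
    updates depend on within-group ranking rather than raw score scale."""
    scores = np.asarray(scores, dtype=float)
    std = scores.std()
    if std < 1e-8:
        # All trajectories scored equally: no preference signal in this group.
        return np.zeros_like(scores)
    return (scores - scores.mean()) / std

# Four sampled trajectories for one input, already modulated by the Judge.
adv = group_relative_advantages([0.9, 0.4, 0.4, 0.1])
```

Advantages in each group are zero-centered, so the best trajectory is reinforced and the worst is penalized regardless of the group's absolute score level.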

Community

Paper author

Recent multimodal models achieve strong reasoning performance but rely heavily on costly annotated data or teacher-based distillation, limiting scalability.
(Figure 2 from the paper.)

We propose an unsupervised self-evolution framework that samples multiple reasoning trajectories, leverages self-consistency as a prior, and applies bounded judge-based modulation with group-level distributional modeling.
By converting absolute scores into within-group relative advantages and optimizing with GRPO, our method enables stable and scalable improvements in multimodal reasoning without external supervision.
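The self-consistency prior and bounded judge modulation described above can be sketched roughly as below. The agreement-frequency prior is standard self-consistency; the additive clipped modulation is an assumed form for illustration, since the paper's exact formula is not given here.

```python
from collections import Counter

def self_consistency_prior(answers):
    """Score each sampled trajectory by the fraction of the group that
    reached the same final answer (majority-vote agreement frequency)."""
    counts = Counter(answers)
    n = len(answers)
    return [counts[a] / n for a in answers]

def judge_modulate(prior, judge_score, bound=0.2):
    """Bounded Judge-based modulation (assumed additive form): shift the
    self-consistency prior by a judge signal clipped to [-bound, bound]."""
    delta = max(-bound, min(bound, judge_score - 0.5))
    return prior + delta

# Four trajectories for one input; three agree on the answer "42".
priors = self_consistency_prior(["42", "42", "17", "42"])
```

Clipping the judge's contribution keeps a noisy self-judgment from overturning the consistency prior, which is what makes unsupervised training stable here.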

