All Roads Lead to Rome: Incentivizing Divergent Thinking in Vision-Language Models
Abstract
Reinforcement Learning enhances Vision-Language Model reasoning but suffers from diversity collapse; a new Multi-Group Policy Optimization method is proposed to encourage diverse thinking patterns.
Recent studies have demonstrated that Reinforcement Learning (RL), notably Group Relative Policy Optimization (GRPO), can intrinsically elicit and enhance the reasoning capabilities of Vision-Language Models (VLMs). Despite this promise, however, the mechanisms that drive the effectiveness of RL-trained models, as well as their limitations, remain underexplored. In this paper, we highlight a fundamental behavioral distinction between RL-trained and base models: the former engage in deeper yet narrower reasoning, while base models, though less refined along any individual path, exhibit broader and more diverse thinking patterns. Through further analysis of training dynamics, we show that GRPO is prone to diversity collapse, causing models to prematurely converge on a limited subset of reasoning strategies while discarding most potential alternatives, which leads to local optima and poor scalability. To address this, we propose Multi-Group Policy Optimization (MUPO), a simple yet effective approach designed to incentivize divergent thinking across multiple solutions, and demonstrate its effectiveness on established benchmarks. Project page: https://xytian1008.github.io/MUPO/
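To make the diversity-collapse argument concrete, below is a minimal sketch of GRPO's group-relative advantage computation for one prompt. All names here are hypothetical and the code is illustrative only; it shows the standard mechanism (normalizing each sampled response's reward against its group's mean and standard deviation), not MUPO's multi-group variant.

```python
# Illustrative sketch of GRPO-style group-relative advantages.
# Assumption: each prompt gets G sampled responses with scalar rewards.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each reward against the group mean and std (GRPO-style)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 sampled responses to one prompt with binary correctness rewards.
# Responses sharing one dominant strategy get reinforced together, which is
# one way the single-group baseline can drift toward a narrow set of
# reasoning patterns.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(advs)
```

Under this single-group normalization, every above-average response is pushed up regardless of which reasoning strategy produced it, so once one strategy dominates the sampled group, alternatives receive little positive signal.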
Community
We identify diversity collapse in GRPO-trained VLMs and propose MUPO, a simple drop-in replacement that incentivizes divergent reasoning across multiple solution strategies, achieving new state-of-the-art results (CVPR2026).
Project page👉: https://xytian1008.github.io/MUPO/
Github repo👉: https://github.com/xytian1008/MUPO