DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models
Abstract
While recent Multimodal Large Language Models (MLLMs) have made significant strides in multimodal reasoning, their reasoning processes remain predominantly text-centric, leading to suboptimal performance on complex, long-horizon, vision-centric tasks. In this paper, we establish a novel Generative Multimodal Reasoning paradigm and introduce DiffThinker, a diffusion-based reasoning framework. Conceptually, DiffThinker reformulates multimodal reasoning as a native generative image-to-image task, achieving superior logical consistency and spatial precision on vision-centric tasks. We perform a systematic comparison between DiffThinker and MLLMs, providing the first in-depth investigation into the intrinsic characteristics of this paradigm and revealing four core properties: efficiency, controllability, native parallelism, and collaboration. Extensive experiments across four domains (sequential planning, combinatorial optimization, constraint satisfaction, and spatial configuration) demonstrate that DiffThinker significantly outperforms leading closed-source models, including GPT-5 (+314.2%) and Gemini-3-Flash (+111.6%), as well as the fine-tuned Qwen3-VL-32B baseline (+39.0%), highlighting generative multimodal reasoning as a promising approach for vision-centric reasoning.
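To make the image-to-image reasoning paradigm concrete, here is a minimal sketch of how a vision-centric problem could be prototyped as a single generative pass with an off-the-shelf diffusion pipeline from the diffusers library. The checkpoint, prompt, and file names are illustrative assumptions for a toy maze-solving task; this is not the authors' DiffThinker model, training recipe, or data.

```python
# Hypothetical sketch: casting a vision-centric reasoning task as
# image-to-image generation, in the spirit of the paper's paradigm.
# The checkpoint, prompt, and file names are illustrative assumptions,
# NOT the authors' DiffThinker implementation.
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# Load a generic image-to-image diffusion pipeline (placeholder checkpoint).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any img2img-capable checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The reasoning problem is rendered as an image, e.g. an unsolved maze.
problem_image = Image.open("maze_unsolved.png").convert("RGB")

# A "reasoning step" is a conditioned generative pass: the model is asked
# to produce the solved configuration directly in pixel space.
solution_image = pipe(
    prompt="the same maze with the shortest path drawn from start to goal",
    image=problem_image,
    strength=0.6,           # how far the output may depart from the input
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]

solution_image.save("maze_solved.png")
```

In this framing, the solution is expressed in pixel space rather than as text tokens, which is what the abstract means by a "native generative image-to-image task".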
Community
TLDR: A new paradigm for multi-modal reasoning with image-to-image generation. Diffusion could think too!
Librarian Bot found the following similar papers, recommended by the Semantic Scholar API:
- MMGR: Multi-Modal Generative Reasoning (2025)
- GGBench: A Geometric Generative Reasoning Benchmark for Unified Multimodal Models (2025)
- AdaTok: Adaptive Token Compression with Object-Aware Representations for Efficient Multimodal LLMs (2025)
- VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs (2025)
- MM-CoT: A Benchmark for Probing Visual Chain-of-Thought Reasoning in Multimodal Models (2025)
- Yanyun-3: Enabling Cross-Platform Strategy Game Operation with Vision-Language Models (2025)
- FysicsWorld: A Unified Full-Modality Benchmark for Any-to-Any Understanding, Generation, and Reasoning (2025)
Models citing this paper 1
Datasets citing this paper 1
Spaces citing this paper 0