---
base_model:
- Qwen/Qwen-Image-Edit-2509
language:
- en
license: apache-2.0
library_name: diffusers
pipeline_tag: image-to-image
---

# DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models

<a href="https://diffthinker-project.github.io/"><img src="https://img.shields.io/badge/%F0%9F%8C%90%20Project-Page-2563eb" alt="Project Page"></a>
<a href="https://github.com/lcqysl/DiffThinker"><img src="https://img.shields.io/badge/GitHub-Code-blue?logo=github" alt="GitHub"></a>
<a href="https://huggingface.co/papers/2512.24165"><img src="https://img.shields.io/badge/arXiv-Paper-b31b1b" alt="Paper"></a>

DiffThinker introduces a novel Generative Multimodal Reasoning paradigm, establishing a diffusion-based reasoning framework. It reformulates multimodal reasoning as a native generative image-to-image task, achieving superior logical consistency and spatial precision in vision-centric tasks compared to traditional text-centric Multimodal Large Language Models (MLLMs).

### Features
DiffThinker exhibits four core properties in its approach to vision-centric reasoning:
-   **Efficiency**: Streamlined reasoning process.
-   **Controllability**: Precise spatial and logical generation.
-   **Native Parallelism**: Advantageous for complex reasoning steps.
-   **Collaboration**: Works effectively across multiple domains (sequential planning, combinatorial optimization, constraint satisfaction, and spatial configuration).

### Quick Start
To get started with DiffThinker, clone the official repository and install the necessary dependencies:
```bash
git clone https://github.com/lcqysl/DiffThinker.git
cd DiffThinker/DiffSynth-Studio
pip install -e .
pip install gymnasium

# (Optional) Install vLLM for OCR tasks.
# We recommend installing it in a SEPARATE environment to avoid conflicts.
# pip install vllm
```
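After installation, a quick sanity check can confirm that the required packages resolve. This is a minimal sketch; the importable package name `diffsynth` is assumed from the editable install above, so adjust it if your environment differs:

```python
# Minimal post-install sanity check.
# Assumes the editable DiffSynth-Studio install exposes the "diffsynth"
# package (an assumption; adjust the name if your environment differs).
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in this environment."""
    return importlib.util.find_spec(package) is not None


if __name__ == "__main__":
    for pkg in ("diffsynth", "gymnasium"):
        status = "ok" if is_installed(pkg) else "MISSING"
        print(f"{pkg}: {status}")
```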

### Inference & Evaluation
The test datasets used in our experiments are provided within each task's directory. We recommend using the same data to ensure the reproducibility of our results and to facilitate comparison with other models. If you wish to generate your own test data, please refer to the `gen.txt` file in each task directory.

```bash
cd Maze

# 1. Inference and Parsing
bash eval/gen_and_parse.sh

# 2. Evaluation
bash eval/eval_path.sh

# 3. Individual Inference
python ../DiffSynth-Studio/add/infer/infer.py
python ../DiffSynth-Studio/add/infer/infer_with_middle.py
```
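The evaluation step checks whether a generated path actually solves the maze. Purely as an illustration of that kind of check (the grid and path encoding below are assumptions, not the repository's actual parsing format), a 4-connected path-validity test might look like:

```python
# Hypothetical maze-path validity check, sketching the kind of test an
# evaluation script like eval/eval_path.sh performs. The encoding here
# is assumed, not the repository's actual format: 0 = free cell, 1 = wall.

def is_valid_path(grid, path):
    """Check that `path` (a list of (row, col) cells) stays in bounds,
    avoids walls, and moves exactly one step at a time (4-connected)."""
    rows, cols = len(grid), len(grid[0])
    # Every visited cell must be in bounds and free.
    for r, c in path:
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] == 1:
            return False
    # Consecutive cells must be orthogonally adjacent.
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if abs(r1 - r2) + abs(c1 - c2) != 1:
            return False
    return True


maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
solution = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
print(is_valid_path(maze, solution))  # True: hugs the wall around the column
```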

### Citation
```bibtex
@article{he2025diffthinker,
  title={DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models},
  author={He, Zefeng and Qu, Xiaoye and Li, Yafu and Zhu, Tong and Huang, Siyuan and Cheng, Yu},
  journal={arXiv preprint arXiv:2512.24165},
  year={2025}
}
```