Add model card for GoT-R1-7B

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +34 -0
README.md ADDED
@@ -0,0 +1,34 @@
+ ---
+ license: mit
+ pipeline_tag: text-to-image
+ ---
+
+ # GoT-R1-7B
+
+ GoT-R1-7B is a multimodal large language model (MLLM) designed for high-quality text-to-image generation with advanced semantic-spatial reasoning, as introduced in the paper [GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning](https://huggingface.co/papers/2505.17022).
+
+ - **Paper:** [GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning](https://huggingface.co/papers/2505.17022)
+ - **Repository:** [https://github.com/gogoduan/GoT-R1](https://github.com/gogoduan/GoT-R1)
+
+ ## Overview
+
+ Visual generation models often struggle with complex prompts that specify multiple objects with precise spatial relationships and attributes. GoT-R1 addresses this by applying reinforcement learning to enhance semantic-spatial reasoning. Building upon the Generation Chain-of-Thought (GoT) approach, GoT-R1 enables models to autonomously discover effective reasoning strategies. The model uses a unified MLLM architecture (based on Janus-Pro) that autoregressively generates a textual reasoning chain followed by image tokens.
+
+ ## Usage
+
+ To use this model, please follow the installation instructions in the [official GitHub repository](https://github.com/gogoduan/GoT-R1). Inference can be performed using the provided script:
+
+ ```bash
+ python infer.py --ckpt_path <path-to-GoT-R1-7B-weights>
+ ```
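+
+ The checkpoint can also be fetched with the `huggingface-cli` tool from the `huggingface_hub` package before running inference. Below is a minimal sketch, assuming the weights are hosted on the Hugging Face Hub; `<org>/GoT-R1-7B` is a placeholder repository id, not one confirmed by the repository docs:
+
+ ```bash
+ # Hypothetical repo id -- substitute the actual GoT-R1-7B repository.
+ huggingface-cli download <org>/GoT-R1-7B --local-dir ./GoT-R1-7B
+ # Point the inference script at the downloaded weights.
+ python infer.py --ckpt_path ./GoT-R1-7B
+ ```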
+
+ ## Citation
+
+ ```bibtex
+ @article{duan2025got,
+   title={GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning},
+   author={Duan, Chengqi and Fang, Rongyao and Wang, Yuqing and Wang, Kun and Huang, Linjiang and Zeng, Xingyu and Li, Hongsheng and Liu, Xihui},
+   journal={arXiv preprint arXiv:2505.17022},
+   year={2025}
+ }
+ ```