---
base_model:
- lmms-lab/llava-onevision-qwen2-7b-ov
library_name: transformers
pipeline_tag: image-text-to-text
license: cc-by-nc-4.0
---

This is the **Output Reward Model (ORM)** used in the paper [T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT](https://arxiv.org/pdf/2505.00703).

T2I-R1 is a reasoning-enhanced text-to-image generation model trained with Reinforcement Learning (RL) using a bi-level Chain-of-Thought (CoT) reasoning process. This ORM scores generated images and supplies the reward signal for RL training, which optimizes two levels of CoT:
1.  **Semantic-level CoT**: high-level planning of the prompt before generation.
2.  **Token-level CoT**: low-level pixel processing during patch-by-patch generation.

The paper introduces BiCoT-GRPO, which uses an ensemble of generation rewards to jointly optimize both levels of CoT within the same training step. Applied to the baseline model Janus-Pro, T2I-R1 achieves a 13% improvement on T2I-CompBench and a 19% improvement on the WISE benchmark, even surpassing the state-of-the-art model FLUX.1.
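As a rough, unofficial sketch of the reward mechanics described above, the snippet below shows how scores from an ensemble of reward models could be averaged per image and then converted into group-relative advantages in the GRPO style (normalizing within a group of samples for the same prompt instead of learning a value function). All function names here are hypothetical illustrations, not the paper's actual code.

```python
import numpy as np

def ensemble_reward(image, prompt, reward_fns):
    """Average the scores from several reward models (e.g., this ORM plus
    other generation rewards) for a single generated image.
    `reward_fns` is a hypothetical list of callables returning a scalar."""
    return float(np.mean([fn(image, prompt) for fn in reward_fns]))

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: normalize the rewards of a group of
    images sampled for the same prompt, as GRPO does in place of a
    learned critic."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)
```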

This model is fine-tuned from [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).
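Since the ORM is a fine-tuned LLaVA-OneVision checkpoint, it should load with the standard `transformers` classes for that architecture. The snippet below is a minimal, untested sketch: the repository id and the reward-query prompt are assumptions, so consult the GitHub repository for the official inference code.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "CaraJ/ORM-T2I-R1"  # assumption: replace with this repo's actual id
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical reward query: ask the ORM whether the generated image
# matches the text prompt; the official prompt template may differ.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text",
             "text": "Does this image match the prompt 'a red cube on top of a blue sphere'? Answer Yes or No."},
        ],
    }
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True)
image = Image.open("generated_image.png")
inputs = processor(images=image, text=text, return_tensors="pt").to(model.device, torch.float16)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=10)

# Decode only the newly generated tokens after the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```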

For more details, please refer to the [official paper](https://arxiv.org/pdf/2505.00703) and the [GitHub repository](https://github.com/CaraJ7/T2I-R1).