---
base_model:
- CodeGoat24/UnifiedReward-qwen-7b
datasets:
- CodeGoat24/HPD
- CodeGoat24/OIP
- CodeGoat24/EvalMuse
- CodeGoat24/ShareGPTVideo-DPO
- CodeGoat24/LLaVA-Critic-113k
- CodeGoat24/VideoDPO
- CodeGoat24/Text-2-Video-Human-Preferences
- CodeGoat24/OpenAI-4o_t2i_human_preference
- CodeGoat24/ImageGen_Reward_Cold_Start
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---
## Model Summary
`UnifiedReward-Think-qwen-7b` is a unified multimodal chain-of-thought (CoT) reward model, capable of multi-dimensional, step-by-step long-chain reasoning for both visual understanding and visual generation reward tasks. This model serves as the pairwise preference reward model for the framework presented in the paper [Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning](https://huggingface.co/papers/2508.20751).
For further details on Pref-GRPO and this reward model, please refer to the following resources:
- 📰 Paper: [Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning](https://huggingface.co/papers/2508.20751)
- 🪐 Project Page: [https://codegoat24.github.io/UnifiedReward/Pref-GRPO](https://codegoat24.github.io/UnifiedReward/Pref-GRPO)
- 💻 GitHub Repository (Pref-GRPO framework): [https://github.com/CodeGoat24/Pref-GRPO](https://github.com/CodeGoat24/Pref-GRPO)
- 🤗 Model Collections: [https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a](https://huggingface.co/collections/CodeGoat24/unifiedreward-models-67c3008148c3a380d15ac63a)
- 🤗 Dataset Collections: [https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede](https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede)
- 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)
### Quick Start
All inference code for this reward model is provided in our [GitHub subdirectory](https://github.com/CodeGoat24/UnifiedReward/tree/main/UnifiedReward-Think).
Here we take image understanding assessment as an example:
```python
import warnings

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

warnings.filterwarnings("ignore")
model_path = "CodeGoat24/UnifiedReward-Think-qwen-7b"

# Load the reward model and its processor
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)

# Reference image plus two candidate answers to compare
url = "https://github.com/LLaVA-VL/blog/blob/main/2024-10-03-llava-critic/static/images/critic_img_seven.png?raw=True"
image = Image.open(requests.get(url, stream=True).raw)

Query = 'What does this image present?'
R1 = 'The image is a black and white sketch of a line that appears to be in the shape of a cross. The line is a simple and straightforward representation of the cross shape, with two straight lines intersecting at a point.'
R2 = 'This is a handwritten number seven.'
# Evaluation prompt: score both answers on several dimensions, reason inside
# <think> tags, and give the final verdict inside <answer> tags
prompt_text = (
    "Given a question and a reference image, please analyze in detail the two provided answers (Answer 1 and Answer 2). "
    "Evaluate them based on the following three core dimensions:\n"
    "1. Semantic accuracy: How well the answer reflects the visual content of the image\n"
    "2. Correctness: Whether the answer is logically and factually correct\n"
    "3. Clarity: Whether the answer is clearly and fluently expressed\n"
    "You may also consider additional dimensions if you find them relevant (e.g., reasoning ability, attention to detail, multimodal grounding, etc.). "
    "For each dimension, provide a score from 1 to 10 for both answers, and briefly explain your reasoning. "
    "Then, compute the total score for each answer by explicitly adding the scores for all dimensions and showing the full calculation. "
    "Enclose your full reasoning within <think> and </think> tags. "
    "Then, in the <answer> tag, output exactly one of the following: 'Answer 1 is better' or 'Answer 2 is better'. No other text is allowed in the <answer> section.\n\n"
    "Example format:\n"
    "<think>\n"
    "1. Semantic accuracy: Answer 1 (9/10) - ...; Answer 2 (7/10) - ...\n"
    "2. Correctness: Answer 1 (8/10) - ...; Answer 2 (7/10) - ...\n"
    "3. Clarity: Answer 1 (9/10) - ...; Answer 2 (8/10) - ...\n"
    "[Additional dimensions if any]: Answer 1 (6/10) - ...; Answer 2 (7/10) - ...\n"
    "Total score:\nAnswer 1: 9+8+9+6=32\nAnswer 2: 7+7+8+7=29\n"
    "</think>\n"
    "<answer>Answer 1 is better</answer>\n\n"
    "**Note: In the example above, scores and the final answer are placeholders meant only to demonstrate the format. Your actual evaluation should be based on the quality of the two given answers.**\n\n"
    f"Your task is provided as follows:\nQuestion: [{Query}]\nAnswer 1: [{R1}]\nAnswer 2: [{R2}]"
)
# Build the chat-formatted input containing the image and the evaluation prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": prompt_text},
        ],
    }
]
chat_input = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[chat_input],
    images=image_inputs,
    videos=video_inputs,
    return_tensors="pt",
    padding=True
).to(model.device)
# Generate the CoT evaluation (reasoning in <think>, verdict in <answer>)
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=4096)

# Strip the prompt tokens from the generated sequences before decoding
generated_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output = processor.batch_decode(generated_trimmed, skip_special_tokens=True)[0]
print(output)
```
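The model's reply places its step-by-step reasoning inside `<think>...</think>` tags and the final verdict inside `<answer>...</answer>` tags, as requested by the prompt above. As a minimal post-processing sketch (assuming the model follows this format, which is prompted for but not strictly guaranteed), the verdict can be extracted with a regular expression:

```python
import re

# Pull the final verdict out of the <answer> tag; fall back to the raw
# output if the model deviated from the requested format.
match = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
verdict = match.group(1).strip() if match else output.strip()
print(verdict)  # expected to be "Answer 1 is better" or "Answer 2 is better"
```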
## Citation
```bibtex
@article{Pref-GRPO_UniGenBench,
  title={Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning},
  author={Wang, Yibin and Li, Zhimin and Zang, Yuhang and Zhou, Yujie and Bu, Jiazi and Wang, Chunyu and Lu, Qinglin and Jin, Cheng and Wang, Jiaqi},
journal={arXiv preprint arXiv:2508.20751},
year={2025}
}
```