nielsr HF Staff committed
Commit 45f222a · verified · 1 Parent(s): 99a09d7

Improve model card: Add pipeline tag, links, description, and usage


This PR significantly enhances the model card for EditScore by:
- Adding the `pipeline_tag: image-text-to-text` to better categorize the model's function as an image editing reward model.
- Including a link to the official Hugging Face paper: [EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling](https://huggingface.co/papers/2509.23909).
- Providing links to the project page ([https://vectorspacelab.github.io/EditScore](https://vectorspacelab.github.io/EditScore)) and the GitHub repository ([https://github.com/VectorSpaceLab/EditScore](https://github.com/VectorSpaceLab/EditScore)).
- Adding a comprehensive model description based on the paper's abstract, highlights, and the project's introduction.
- Incorporating a "Quick Start" section with environment setup and a practical Python code snippet for inference, directly from the GitHub README.
- Including relevant images from the GitHub README for better visual representation.

These additions will make the model more discoverable and easier to use for the community.

Files changed (1)
  1. README.md +134 -3
README.md CHANGED
---
license: apache-2.0
pipeline_tag: image-text-to-text
---

<p align="center">
  <img src="https://github.com/VectorSpaceLab/EditScore/raw/main/assets/logo.png" width="65%">
</p>

This repository contains **EditScore**, a series of state-of-the-art open-source reward models (7B–72B) designed to evaluate and enhance instruction-guided image editing.

The model was presented in the paper [EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling](https://huggingface.co/papers/2509.23909).

- 📚 [Paper](https://huggingface.co/papers/2509.23909)
- 🌐 [Project Page](https://vectorspacelab.github.io/EditScore)
- 💻 [Code Repository](https://github.com/VectorSpaceLab/EditScore)

## ✨ Highlights
- **State-of-the-Art Performance**: Effectively matches the performance of leading proprietary VLMs. With a self-ensembling strategy, **our largest model surpasses even GPT-5** on our comprehensive benchmark, **EditReward-Bench**.
- **A Reliable Evaluation Standard**: We introduce **EditReward-Bench**, the first public benchmark specifically designed for evaluating reward models in image editing, featuring 13 subtasks, 11 state-of-the-art editing models (*including proprietary models*), and expert human annotations.
- **Simple and Easy to Use**: Get an accurate quality score for your image edits with just a few lines of code.
- **Versatile Applications**: Ready to use as a best-in-class reranker to improve editing outputs, or as a high-fidelity reward signal for **stable and effective Reinforcement Learning (RL) fine-tuning**.

## 📖 Introduction
While Reinforcement Learning (RL) holds immense potential for instruction-guided image editing, its progress has been severely hindered by the absence of a high-fidelity, efficient reward signal.

To overcome this barrier, we provide a systematic, two-part solution:

- **A Rigorous Evaluation Standard**: We first introduce **EditReward-Bench**, a new public benchmark for the direct and reliable evaluation of reward models. It features 13 diverse subtasks and expert human annotations, establishing a gold standard for measuring reward signal quality.

- **A Powerful & Versatile Tool**: Guided by our benchmark, we developed the **EditScore** model series. Through meticulous data curation and an effective self-ensembling strategy, EditScore sets a new state of the art for open-source reward models, even surpassing the accuracy of leading proprietary VLMs.
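The self-ensembling strategy corresponds to the `num_pass` parameter in the Quick Start snippet: the same edit is scored several times and the scores are aggregated. A minimal sketch of the idea, where `score_once` is a hypothetical stand-in for a single EditScore forward pass (not the library's API):

```python
import statistics
from typing import Callable

def ensemble_score(score_once: Callable[[], float], num_pass: int = 4) -> float:
    """Aggregate several stochastic scoring passes into one ensemble score.

    `score_once` is assumed to run a single scoring pass and return a scalar;
    averaging multiple passes reduces the variance of the reward signal.
    """
    return statistics.mean(score_once() for _ in range(num_pass))
```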
<p align="center">
  <img src="https://github.com/VectorSpaceLab/EditScore/raw/main/assets/table_reward_model_results.png" width="95%">
  <br>
  <em>Benchmark results on EditReward-Bench.</em>
</p>

We demonstrate the practical utility of EditScore through two key applications:

- **As a State-of-the-Art Reranker**: Use EditScore to perform Best-of-*N* selection and instantly improve the output quality of diverse editing models.
- **As a High-Fidelity Reward for RL**: Use EditScore as a robust reward signal to fine-tune models via RL, enabling stable training and unlocking significant performance gains where general-purpose VLMs fail.
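For the RL application, a raw score is typically mapped into a bounded reward before being fed to a policy-gradient trainer. The function below is purely an illustrative normalization, not the paper's training recipe; `score_range=25.0` mirrors the value used in the Quick Start snippet:

```python
def reward_from_score(final_score: float, score_range: float = 25.0) -> float:
    """Clamp and normalize a score in [0, score_range] to a [0, 1] reward.

    Illustrative only: the actual reward shaping used for RL fine-tuning
    may differ from this simple linear mapping.
    """
    return max(0.0, min(1.0, final_score / score_range))
```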
This repository releases both the **EditScore** models and the **EditReward-Bench** dataset to facilitate future research in reward modeling, policy optimization, and AI-driven model improvement.

<p align="center">
  <img src="https://github.com/VectorSpaceLab/EditScore/raw/main/assets/figure_edit_results.png" width="95%">
  <br>
  <em>EditScore as a superior reward signal for image editing.</em>
</p>

## 🚀 Quick Start

### 🛠️ Environment Setup

#### ✅ Recommended Setup

```bash
# 1. Clone the repo
git clone git@github.com:VectorSpaceLab/EditScore.git
cd EditScore

# 2. (Optional) Create a clean Python environment
conda create -n editscore python=3.12
conda activate editscore

# 3. Install dependencies
# 3.1 Install PyTorch (choose the correct CUDA version)
pip install torch==2.7.1 torchvision --extra-index-url https://download.pytorch.org/whl/cu126

# 3.2 Install other required packages
pip install -r requirements.txt

# EditScore runs even without vllm, though we recommend installing it for best performance.
pip install vllm
```
#### 🌏 For users in Mainland China

```bash
# Install PyTorch from a domestic mirror
pip install torch==2.7.1 torchvision --index-url https://mirror.sjtu.edu.cn/pytorch-wheels/cu126

# Install other dependencies from the Tsinghua mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

# EditScore runs even without vllm, though we recommend installing it for best performance.
pip install vllm -i https://pypi.tuna.tsinghua.edu.cn/simple
```
---

### 🧪 Usage Example
Using EditScore is straightforward. The model will be automatically downloaded from the Hugging Face Hub on its first run.

```python
from PIL import Image
from editscore import EditScore

# Load the EditScore model. It will be downloaded automatically.
# Replace with the specific model version you want to use.
model_path = "Qwen/Qwen2.5-VL-7B-Instruct"
lora_path = "EditScore/EditScore-7B"

scorer = EditScore(
    backbone="qwen25vl",  # set to "qwen25vl_vllm" for faster inference
    model_name_or_path=model_path,
    enable_lora=True,
    lora_path=lora_path,
    score_range=25,
    num_pass=1,  # Increase for better performance via self-ensembling
)

input_image = Image.open("example_images/input.png")
output_image = Image.open("example_images/output.png")
instruction = "Adjust the background to a glass wall."

# `result` is a dictionary containing the final score and other details.
result = scorer.evaluate([input_image, output_image], instruction)
print(f"Edit Score: {result['final_score']}")
```
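The reranking use case described above reduces to a small argmax over candidate scores. Below is a minimal sketch that is independent of the EditScore API; the commented-out wiring to the `scorer` object from the snippet above is an assumption for illustration, not library code:

```python
from typing import Callable, List, Sequence, Tuple

def best_of_n(score_fn: Callable[[object], float],
              candidates: Sequence[object]) -> Tuple[int, float]:
    """Score every candidate edit and return (index, score) of the best one."""
    scores: List[float] = [score_fn(c) for c in candidates]
    best_idx = max(range(len(scores)), key=scores.__getitem__)
    return best_idx, scores[best_idx]

# Hypothetical wiring to the Quick Start `scorer` (assumed, not part of the library):
# score_fn = lambda img: scorer.evaluate([input_image, img], instruction)["final_score"]
# idx, score = best_of_n(score_fn, candidate_images)
```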
---

## ❤️ Citing Us
If you find this repository or our work useful, please consider giving a star ⭐ and a citation 🦖, which would be greatly appreciated:

```bibtex
@article{luo2025editscore,
  title={EditScore: Unlocking Online RL for Image Editing via High-Fidelity Reward Modeling},
  author={Xin Luo and Jiahao Wang and Chenyuan Wu and Shitao Xiao and Xiyan Jiang and Defu Lian and Jiajun Zhang and Dong Liu and Zheng Liu},
  journal={arXiv preprint arXiv:2509.23909},
  year={2025}
}
```