# Model Card for HP (High-Preference) Model

This model is a specialized human preference scoring function that evaluates image quality based purely on visual aesthetics and human preferences, without relying on text-image alignment. See our paper [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment](https://arxiv.org/abs/2507.19002) for more details.

## Model Details

### Model Sources

* **Repository:** [https://github.com/BarretBa/ICTHP](https://github.com/BarretBa/ICTHP)
* **Paper:** [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment](https://arxiv.org/abs/2507.19002)
* **Base Model:** CLIP-ViT-H-14 (Image Encoder + MLP Head)
* **Training Dataset:** [Pick-High dataset](https://huggingface.co/datasets/8y/Pick-High-Dataset) and the Pick-a-Pic dataset (360,000 preference triplets)
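The base-model bullet above (a CLIP-ViT-H-14 image encoder feeding an MLP head) can be sketched in PyTorch. This is an illustrative shape only: the 1024-dim embedding width matches ViT-H-14, but the head's hidden size and the random stand-in embeddings are assumptions, not the released weights or architecture.

```python
import torch
import torch.nn as nn

EMBED_DIM = 1024  # image-embedding width of CLIP-ViT-H-14

class PreferenceHead(nn.Module):
    """MLP head mapping an image embedding to a scalar preference score."""

    def __init__(self, embed_dim: int = EMBED_DIM):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256),  # hidden width is a guess
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, image_embeds: torch.Tensor) -> torch.Tensor:
        # Normalize, then score from the image embedding alone: no text
        # tower is involved, so text-image alignment cannot leak in.
        image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
        return self.mlp(image_embeds).squeeze(-1)

head = PreferenceHead()
image_embeds = torch.randn(4, EMBED_DIM)  # stand-in for encoder features
scores = head(image_embeds)               # one scalar score per image
```

In the real pipeline the `image_embeds` would come from the frozen CLIP-ViT-H-14 encoder rather than `torch.randn`.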

### Training Data

This model was trained on 36,000 preference triplets from the [Pick-High dataset](https://huggingface.co/datasets/8y/Pick-High-Dataset) and the Pick-a-Pic dataset.
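Training on preference triplets typically means teaching the scorer to rank the human-preferred image above the rejected one. A minimal sketch using a Bradley-Terry-style logistic ranking loss follows; this objective is a standard choice for reward models and is assumed here for illustration, as the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def preference_loss(score_preferred: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(s_w - s_l): small when preferred images out-score
    # rejected ones, large when the ranking is inverted.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

s_w = torch.tensor([2.0, 1.5])  # scores of human-preferred images
s_l = torch.tensor([0.5, 1.0])  # scores of rejected images
loss = preference_loss(s_w, s_l)
```

Minimizing this loss pushes the scalar scores of preferred images above those of rejected ones without ever consulting the prompt text.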
## Citation

```bibtex
@misc{ba2025enhancingrewardmodelshighquality,
  title={Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment},
  author={Ying Ba and Tianyu Zhang and Yalong Bai and Wenyi Mo and Tao Liang and Bing Su and Ji-Rong Wen},
  year={2025},
  eprint={2507.19002},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.19002},
}
```