8y committed
Commit 7fcd3d8 · verified · 1 Parent(s): 4820f0d

Update README.md

Files changed (1): README.md (+12 −9)
README.md CHANGED
@@ -1,6 +1,6 @@
  # Model Card for HP (High-Preference) Model
 
- This model is a specialized human preference scoring function that evaluates image quality based purely on visual aesthetics and human preferences, without relying on text-image alignment. See our paper [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment]() for more details.
+ This model is a specialized human preference scoring function that evaluates image quality based purely on visual aesthetics and human preferences, without relying on text-image alignment. See our paper [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment](https://arxiv.org/abs/2507.19002) for more details.
 
  ## Model Details
 
@@ -19,7 +19,7 @@ The HP (High-Preference) model represents a paradigm shift in image quality eval
  ### Model Sources
 
  * **Repository:** [https://github.com/BarretBa/ICTHP](https://github.com/BarretBa/ICTHP)
- * **Paper:** [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment](https://arxiv.org/abs/xxxx.xxxxx)
+ * **Paper:** [Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment](https://arxiv.org/abs/2507.19002)
  * **Base Model:** CLIP-ViT-H-14 (Image Encoder + MLP Head)
  * **Training Dataset:** [Pick-High dataset](https://huggingface.co/datasets/8y/Pick-High-Dataset) and Pick-a-pic dataset (360,000 preference triplets)
 
@@ -88,15 +88,18 @@ print(f"HP Scores: {scores}")
  ### Training Data
 
  This model was trained on 360,000 preference triplets from the [Pick-High dataset](https://huggingface.co/datasets/8y/Pick-High-Dataset) and the Pick-a-pic dataset.
- <!--
+
 
  ## Citation
 
  ```bibtex
- @article{ba2024enhancing,
-   title={Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment},
-   author={Ba, Ying and Zhang, Tianyu and Bai, Yalong and Mo, Wenyi and Liang, Tao and Su, Bing and Wen, Ji-Rong},
-   journal={arXiv preprint arXiv:xxxx.xxxxx},
-   year={2024}
+ @misc{ba2025enhancingrewardmodelshighquality,
+   title={Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment},
+   author={Ying Ba and Tianyu Zhang and Yalong Bai and Wenyi Mo and Tao Liang and Bing Su and Ji-Rong Wen},
+   year={2025},
+   eprint={2507.19002},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2507.19002},
  }
- ``` -->
+ ```
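The model card above describes the HP model as a CLIP-ViT-H-14 image encoder followed by an MLP head that maps an image embedding to a scalar preference score. As a rough illustration of that scoring shape only (not the repository's actual code — the 1024-dim embedding size, hidden width, and random stand-in weights are all assumptions for the sketch), the head could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: CLIP-ViT-H-14 image embeddings are taken as 1024-d;
# the hidden width of the MLP head is a guess for illustration.
EMBED_DIM = 1024
HIDDEN_DIM = 256

# Randomly initialized stand-in weights for the MLP scoring head.
W1 = rng.standard_normal((EMBED_DIM, HIDDEN_DIM)) * 0.02
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.standard_normal((HIDDEN_DIM, 1)) * 0.02
b2 = np.zeros(1)

def hp_score(image_embedding: np.ndarray) -> float:
    """Map a CLIP image embedding to a single scalar preference score."""
    h = np.maximum(image_embedding @ W1 + b1, 0.0)  # ReLU hidden layer
    return float(h @ W2 + b2)

# Score two dummy "image embeddings"; a higher score means higher
# predicted human preference, with no text prompt involved at all.
emb_a = rng.standard_normal(EMBED_DIM)
emb_b = rng.standard_normal(EMBED_DIM)
scores = [hp_score(emb_a), hp_score(emb_b)]
print(f"HP Scores: {scores}")
```

The point of the sketch is the interface: unlike alignment-based reward models, the scorer consumes only the image embedding, which is why the card stresses that it works "without relying on text-image alignment."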