Datasets:
Update README for the full version of the dataset
README.md
CHANGED
@@ -16,60 +16,60 @@ size_categories:
---

-#

## Dataset Overview
-|<img src="./images/tiff16_a/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_b/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_c/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_d/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_e/a2031-WP_CRW_5715.jpeg">|<img src="./images/original/a2031-WP_CRW_5715.jpeg">|
-|<img src="./images/tiff16_a/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_b/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_c/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_d/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_e/a3214-KE_-8375.jpeg">|<img src="./images/original/a3214-KE_-8375.jpeg">|
-|<img src="./images/tiff16_a/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_b/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_c/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_d/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_e/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/original/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|
-|<img src="./images/tiff16_a/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_b/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_c/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_d/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_e/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/original/a0036-jn_2007_05_05__183.jpeg">|
-|<img src="./images/tiff16_a/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_b/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_c/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_d/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_e/a3308-Ja_Pe-23.jpeg">|<img src="./images/original/a3308-Ja_Pe-23.jpeg">|
-|<img src="./images/tiff16_a/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_b/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_c/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_d/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_e/a0742-IMG_2429.jpeg">|<img src="./images/original/a0742-IMG_2429.jpeg">|

## Key Features

-- **Multiple rendering styles**: Six distinct interpretations per scene allow for fine-grained preference analysis
-- **Expert vs. automated rendering**: Comparison between professional photographer styles and Adobe's auto mode
-- **Diverse content**: Scenes span portraits, landscapes, food, animals, night photography, and general vernacular subjects
-- **Rendering information**: Software version, processing history, and renderer identifiers
-- **Content categories**: Auto-generated scene classifications (people, animals, food, night scenes, etc.)
-- **Image properties**: Dimensions, color profiles, and format specifications
-- **Source information**: Original filename, capture device, and photographer attribution

## Applications

## Citation

```
@misc{plohotnyuk2025dear,
-title={
author={Plohotnyuk, Vsevolod and Panshin, Artyom and Bani{\'c}, Nikola and Bianco, Simone and Freeman, Michael and Ershov, Egor},
year={2025},
eprint={2512.05209},
---

+# REPID: Rendering Evaluation of Photographic Image Dataset

+**REPID** (Rendering Evaluation of Photographic Image Dataset) is a large-scale benchmark designed for **Image Rendering Quality Assessment (IRQA)**.

+> Unlike traditional Image Quality Assessment (IQA), which focuses on technical degradations such as noise or blur, REPID aims to model subjective human aesthetic preferences for different rendering styles of the same scene.

## Dataset Overview
+Built upon the MIT-Adobe FiveK dataset, REPID provides a large collection of pairwise human preference annotations for professional and automated renderings.

+* **Scenes**: 5,000 high-resolution RAW photographs.
+* **Total Images**: 30,000 unique renderings (6 per scene).
+* **Total Votes**: Over 2.5 million votes collected via crowdsourcing.
+* **Annotators**: 13,648 unique evaluators, with each image pair receiving at least 25 individual votes.
+* **Comparison Task**: For each of the 15 possible pairs per scene, evaluators indicated "Left preferable", "Right preferable", or "Both equal".

+### Rendering Styles

+Each scene features six distinct interpretations:

+* **Expert A–E**: Five professional photographer renderings from the original MIT-Adobe FiveK dataset.
+* **Neutral**: Adobe Photoshop's "Auto" mode rendering, serving as a baseline.

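The pairing arithmetic above can be sketched in a few lines: six renderings per scene yield C(6, 2) = 15 unordered pairs, each collecting at least 25 three-way votes. The rendering names and the vote labels below are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch: enumerating the 15 pairwise comparisons per scene and
# tallying three-way votes for one pair. Names are hypothetical.
from itertools import combinations
from collections import Counter

RENDERINGS = ["expert_a", "expert_b", "expert_c", "expert_d", "expert_e", "neutral"]

# Each scene yields C(6, 2) = 15 unordered pairs to compare.
pairs = list(combinations(RENDERINGS, 2))

def tally(votes):
    """Aggregate raw votes ('left', 'right', 'equal') for one pair."""
    counts = Counter(votes)
    return {k: counts.get(k, 0) for k in ("left", "right", "equal")}

example = tally(["left", "left", "right", "equal", "left"])
# → {'left': 3, 'right': 1, 'equal': 1}
```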
## Key Features
+* **Subjective Focus**: Targets the "aesthetics of rendering" (color, texture, and artistic expression) rather than simple signal-to-noise ratios.
+* **Content-Dependent Preferences**: The dataset reveals that preferred rendering styles vary significantly with image content (e.g., portraits vs. landscapes).
+* **Personalization Support**: Includes unique evaluator identifiers, enabling research into personalized aesthetic preference modeling, a critical area for recommendation systems and generative AI.
+* **Content Annotations**: Includes scene classifications generated via BLIP (e.g., nature, food, night scenes), verified at 96% accuracy.

+## Dataset Statistics

+| Feature | Value |
+| :--- | :--- |
+| **Scenes** | 5,000 |
+| **Images** | 30,000 |
+| **Annotators** | 13,648 |
+| **Votes** | 2,500,000+ |
+| **Test Set** | 1,283 scenes (approx. 25% of data) |
+| **Upper Bound Accuracy** | 0.896 (human consensus) |

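One common way to read a human-consensus ceiling like the 0.896 figure is as the accuracy of always predicting each pair's majority label. The sketch below computes that quantity under this assumption; the dataset's exact protocol may differ, and the vote counts shown are toy values.

```python
# Sketch: estimating a human-consensus accuracy ceiling from
# per-pair (left, right, equal) vote counts. Toy data, and an
# assumed protocol -- not necessarily how REPID's 0.896 was derived.

def majority_fraction(left: int, right: int, equal: int) -> float:
    """Fraction of voters agreeing with the pair's majority label."""
    total = left + right + equal
    return max(left, right, equal) / total

def consensus_ceiling(pair_counts):
    """Average majority agreement over all annotated pairs."""
    return sum(majority_fraction(*p) for p in pair_counts) / len(pair_counts)

demo = consensus_ceiling([(20, 3, 2), (10, 10, 5), (1, 22, 2)])
# → 52/75 ≈ 0.693
```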
## Applications
+REPID is designed to foster research in:

+1. **Aesthetic Preference Prediction**: Training models to predict which of two renderings a human will prefer.
+2. **Personalized Rendering**: Modeling individual user tastes using the provided evaluator IDs.
+3. **Render Ranking**: Developing systems that automatically select the "best" rendering for a given image.
+4. **Benchmarking IRQA**: Providing a qualitatively different challenge from traditional distortion-based IQA benchmarks.

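A minimal baseline for the render-ranking application is to order a scene's renderings by pairwise win rate, counting "equal" votes as half a win for each side. The vote layout below is an illustrative assumption about how the annotations could be organized, not the dataset's actual schema.

```python
# Sketch: ranking a scene's renderings by pairwise win rate.
# pair_votes maps an unordered pair to (wins_a, wins_b, equal)
# counts; "equal" votes contribute half a win to each side.
from collections import defaultdict

def rank_renderings(pair_votes):
    """Return rendering names sorted best-first by win rate."""
    score = defaultdict(float)   # half-win-adjusted wins
    games = defaultdict(int)     # total votes involving each rendering
    for (a, b), (wins_a, wins_b, equal) in pair_votes.items():
        n = wins_a + wins_b + equal
        score[a] += wins_a + 0.5 * equal
        score[b] += wins_b + 0.5 * equal
        games[a] += n
        games[b] += n
    return sorted(score, key=lambda r: score[r] / games[r], reverse=True)

demo = rank_renderings({
    ("expert_a", "neutral"): (18, 5, 2),
    ("expert_a", "expert_b"): (12, 10, 3),
    ("expert_b", "neutral"): (15, 8, 2),
})
# → ['expert_a', 'expert_b', 'neutral']
```

A Bradley-Terry model fitted to the same counts would be the natural next step when votes per pair are uneven, but the plain win rate is often a surprisingly strong baseline.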
## Citation

```
@misc{plohotnyuk2025dear,
+title={Beyond distortions — a benchmark for subjective evaluation of image rendering quality},
author={Plohotnyuk, Vsevolod and Panshin, Artyom and Bani{\'c}, Nikola and Bianco, Simone and Freeman, Michael and Ershov, Egor},
year={2025},
eprint={2512.05209},