vsevolodpl committed
Commit 5e07bba · verified · 1 Parent(s): 1581c07

Update README for the full version of the dataset

Files changed (1)
  1. README.md +34 -34
README.md CHANGED
@@ -16,60 +16,60 @@ size_categories:
  ---


- # Dataset for Evaluating the Aesthetics of Rendering (DEAR) - 100-scene Sample

- This dataset is a preliminary release of the **Dataset for Evaluating the Aesthetics of Rendering (DEAR)**, designed to model human aesthetic judgments of image rendering styles. Built upon the MIT-Adobe FiveK dataset, DEAR provides a foundation for training and evaluating models that can assess subjective rendering preferences rather than just technical image quality.

  ## Dataset Overview

- This sample contains **100 scenes** (600 total images) with **pairwise human preference annotations** collected through large-scale crowdsourcing. Each scene is rendered in **6 distinct styles**:

- - **Style A-E**: Five different photographer-specific renderings (Expert A through Expert E from the original MIT-Adobe FiveK dataset)
- - **Neutral**: Adobe Photoshop auto mode rendering (serves as a baseline/reference style)

- Each pair of renderings for the same scene was evaluated by multiple human annotators who indicated which version they preferred or if they considered them equally appealing. This rich preference data enables training models to predict aesthetic preferences across different rendering styles.
-
- |style a|style b|style c|style d|style e|neutral|
- |-------|-------|-------|-------|-------|-------|
- |<img src="./images/tiff16_a/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_b/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_c/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_d/a2031-WP_CRW_5715.jpeg">|<img src="./images/tiff16_e/a2031-WP_CRW_5715.jpeg">|<img src="./images/original/a2031-WP_CRW_5715.jpeg">|
- |<img src="./images/tiff16_a/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_b/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_c/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_d/a3214-KE_-8375.jpeg">|<img src="./images/tiff16_e/a3214-KE_-8375.jpeg">|<img src="./images/original/a3214-KE_-8375.jpeg">|
- |<img src="./images/tiff16_a/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_b/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_c/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_d/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/tiff16_e/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|<img src="./images/original/a0196-2004-01-25 13-34-10 CRW_3079.jpeg">|
- |<img src="./images/tiff16_a/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_b/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_c/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_d/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/tiff16_e/a0036-jn_2007_05_05__183.jpeg">|<img src="./images/original/a0036-jn_2007_05_05__183.jpeg">|
- |<img src="./images/tiff16_a/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_b/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_c/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_d/a3308-Ja_Pe-23.jpeg">|<img src="./images/tiff16_e/a3308-Ja_Pe-23.jpeg">|<img src="./images/original/a3308-Ja_Pe-23.jpeg">|
- |<img src="./images/tiff16_a/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_b/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_c/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_d/a0742-IMG_2429.jpeg">|<img src="./images/tiff16_e/a0742-IMG_2429.jpeg">|<img src="./images/original/a0742-IMG_2429.jpeg">|

  ## Key Features

- - **Human-centered evaluation**: Captures nuanced aesthetic preferences rather than technical distortions
- - **Multiple rendering styles**: Six distinct interpretations per scene allow for fine-grained preference analysis
- - **Expert vs. automated rendering**: Comparison between professional photographer styles and Adobe's auto mode
- - **Diverse content**: Scenes span portraits, landscapes, food, animals, night photography, and general vernacular subjects

- ## Metadata

- Each image includes comprehensive metadata extracted from the original TIFF XMP data, including:

- - **Technical parameters**: Exposure settings, white balance, contrast, saturation, and tone curve adjustments
- - **Rendering information**: Software version, processing history, and renderer identifiers
- - **Content categories**: Auto-generated scene classifications (people, animals, food, night scenes, etc.)
- - **Image properties**: Dimensions, color profiles, and format specifications
- - **Source information**: Original filename, capture device, and photographer attribution

- The metadata provides valuable context for understanding how specific parameter adjustments influence aesthetic preferences and enables researchers to analyze correlations between technical settings and human preferences.

  ## Applications

- This dataset enables research in:
- - **Aesthetic preference prediction**: Training models to predict which rendering style users will prefer
- - **Style-aware image enhancement**: Developing algorithms that adapt to user aesthetic preferences
- - **Personalized rendering**: Creating systems that learn individual aesthetic tastes
- - **Benchmarking aesthetic models**: Evaluating existing and new approaches to aesthetic quality assessment
- - **Human-AI collaboration**: Understanding how human preferences can guide AI rendering decisions

  ## Citation
  ```
  @misc{plohotnyuk2025dear,
- title={DEAR: Dataset for Evaluating the Aesthetics of Rendering},
  author={Plohotnyuk, Vsevolod and Panshin, Artyom and Bani{\'c}, Nikola and Bianco, Simone and Freeman, Michael and Ershov, Egor},
  year={2025},
  eprint={2512.05209},

  ---


+ # REPID: Rendering Evaluation of Photographic Image Dataset

+ **REPID** (Rendering Evaluation of Photographic Image Dataset) is a large-scale benchmark for **Image Rendering Quality Assessment (IRQA)**.
+ Unlike traditional Image Quality Assessment (IQA), which focuses on technical degradations such as noise or blur, REPID aims to model subjective human aesthetic preferences for different rendering styles of the same scene.

  ## Dataset Overview

+ Built upon the MIT-Adobe FiveK dataset, REPID provides pairwise human preference annotations for professional and automated renderings:

+ * **Scenes**: 5,000 high-resolution RAW photographs.
+ * **Total Images**: 30,000 unique renderings (6 per scene).
+ * **Total Votes**: Over 2.5 million unique votes collected via crowdsourcing.
+ * **Annotators**: 13,648 unique evaluators, with each image pair receiving at least 25 individual votes.
+ * **Comparison Task**: For each of the 15 possible pairs per scene, evaluators indicated "Left preferable", "Right preferable", or "Both equal".
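+
+ The 15 pairs per scene follow from choosing 2 of the 6 renderings (C(6,2) = 15); at a minimum of 25 votes per pair, 5,000 scenes yield at least 5,000 × 15 × 25 = 1,875,000 votes, consistent with the 2.5M+ total. Below is a minimal sketch of how one pairwise annotation could be represented and aggregated; the field names are illustrative assumptions, not the dataset's actual schema:
+
+ ```python
+ from collections import Counter
+ from math import comb
+
+ # 6 renderings per scene -> C(6, 2) = 15 unique pairs per scene.
+ assert comb(6, 2) == 15
+
+ # Hypothetical shape of one pairwise vote; the released files may use
+ # different column names.
+ vote = {
+     "scene_id": "a2031-WP_CRW_5715",  # scene name from MIT-Adobe FiveK
+     "left_style": "tiff16_a",         # Expert A rendering
+     "right_style": "original",        # Neutral (Photoshop Auto) rendering
+     "evaluator_id": 12345,            # unique annotator identifier
+     "label": "left",                  # "left", "right", or "equal"
+ }
+
+ def majority_label(votes: list[dict]) -> str:
+     """Aggregate the >= 25 votes on one pair into a consensus label."""
+     counts = Counter(v["label"] for v in votes)
+     return counts.most_common(1)[0][0]
+ ```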
 
+ ### Rendering Styles
+ Each scene features six distinct interpretations:
+ * **Expert A–E**: Five professional photographer renderings from the original MIT-Adobe FiveK dataset.
+ * **Neutral**: Adobe Photoshop’s "Auto" mode rendering, serving as a baseline.
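+
+ In the earlier 100-scene sample, each style lived in its own folder (`tiff16_a` through `tiff16_e`, plus `original` for the Neutral rendering). Assuming the full release keeps that layout, a scene's six renderings can be collected as follows; the root path is a placeholder:
+
+ ```python
+ from pathlib import Path
+
+ # Folder names as seen in the sample release; "original" holds Neutral.
+ STYLES = ["tiff16_a", "tiff16_b", "tiff16_c", "tiff16_d", "tiff16_e", "original"]
+
+ def renderings_for_scene(root: Path, scene_id: str) -> dict[str, Path]:
+     """Map each style to its JPEG for one scene, e.g. 'a3308-Ja_Pe-23'."""
+     return {s: root / "images" / s / f"{scene_id}.jpeg" for s in STYLES}
+
+ paths = renderings_for_scene(Path("."), "a3308-Ja_Pe-23")  # placeholder root
+ ```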

  ## Key Features

+ * **Subjective Focus**: Targets the "aesthetics of rendering" (color, texture, and artistic expression) rather than simple signal-to-noise ratios.
+ * **Content-Dependent Preferences**: The dataset reveals that preferred rendering styles vary significantly with image content (e.g., portraits vs. landscapes).
+ * **Personalization Support**: Includes unique evaluator identifiers, enabling research into personalized aesthetic preference modeling, a critical area for recommendation systems and generative AI.
+ * **Content Annotations**: Includes scene classifications generated via BLIP (e.g., nature, food, night scenes) with a verified accuracy of 96%.

+ ## Dataset Statistics

+ | Feature | Value |
+ | :--- | :--- |
+ | **Scenes** | 5,000 |
+ | **Images** | 30,000 |
+ | **Annotators** | 13,648 |
+ | **Votes** | 2,500,000+ |
+ | **Test Set** | 1,283 scenes (approx. 25% of the data) |
+ | **Upper Bound Accuracy** | 0.896 (human consensus) |

  ## Applications

+ REPID is designed to foster research in:
+
+ 1. **Aesthetic Preference Prediction**: Training models to predict which of two renderings a human will prefer (see the sketch below).
+ 2. **Personalized Rendering**: Modeling individual user tastes using the provided evaluator IDs.
+ 3. **Render Ranking**: Developing systems that can automatically select the "best" rendering for a given image.
+ 4. **Benchmarking IRQA**: Providing a qualitatively different challenge from traditional distortion-based IQA benchmarks.
+
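+ As an illustration of item 1 above, here is a minimal Bradley-Terry-style preference head: each rendering receives a scalar aesthetic score, and the probability of preferring the left rendering is the sigmoid of the score difference. This is a generic sketch, not the model from the paper:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PreferenceHead(nn.Module):
+     """P(left preferred) = sigmoid(score(left) - score(right))."""
+
+     def __init__(self, embed_dim: int = 512):
+         super().__init__()
+         self.score = nn.Linear(embed_dim, 1)  # scalar aesthetic score
+
+     def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
+         return torch.sigmoid(self.score(left) - self.score(right)).squeeze(-1)
+
+ # Soft targets: the fraction of annotators preferring the left rendering,
+ # so "Both equal" votes naturally push the target toward 0.5.
+ head = PreferenceHead()
+ left = torch.randn(8, 512)   # embeddings from any image backbone
+ right = torch.randn(8, 512)
+ target = torch.rand(8)       # vote shares in [0, 1]
+ loss = nn.functional.binary_cross_entropy(head(left, right), target)
+ loss.backward()
+ ```
+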

  ## Citation
  ```
  @misc{plohotnyuk2025dear,
+ title={Beyond distortions: a benchmark for subjective evaluation of image rendering quality},
  author={Plohotnyuk, Vsevolod and Panshin, Artyom and Bani{\'c}, Nikola and Bianco, Simone and Freeman, Michael and Ershov, Egor},
  year={2025},
  eprint={2512.05209},