Improve dataset card: Add paper, code, project page links, task category, and sample usage

#2
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md +50 -13
README.md CHANGED
@@ -1,17 +1,17 @@
  ---
- task_categories:
- - question-answering
  language:
  - en
+ license: cc-by-4.0
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - image-text-to-text
  tags:
  - cognition
  - emotional_valence
  - funniness
  - memorability
  - aesthetics
- size_categories:
- - 1K<n<10K
- license: cc-by-4.0
  dataset_info:
  features:
  - name: id
@@ -46,12 +46,36 @@ configs:
 
  # CogIP-Bench: Cognition Image Property Benchmark
 
+ Paper: [From Pixels to Feelings: Aligning MLLMs with Human Cognitive Perception of Images](https://huggingface.co/papers/2511.22805)
+ Project Page: [MLLM-Cognition-project-page](https://follen-cry.github.io/MLLM-Cognition-project-page/)
+ Code: [Follen-cry/MLLM_Cognition_Alignment](https://github.com/Follen-cry/MLLM_Cognition_Alignment)
+
  ![90a73d7a39db648d3dfd442e6efed570](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/-zoo7NfdxCIQsJboMX49g.png)
 
  **CogIP-Bench** is a comprehensive benchmark designed to evaluate Multimodal Large Language Models (MLLMs) and align them with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). **CogIP-Bench** is designed to measure this gap.
 
  This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.
 
+ ## Sample Usage
+
+ The GitHub repository provides scripts for Supervised Fine-Tuning (SFT) and evaluation using the CogIP-Bench dataset.
+
+ ### Supervised Fine-Tuning (`sft/`)
+ To train a model (e.g., Qwen2.5-VL), navigate to the relevant directory within the cloned repository and run the script:
+
+ ```bash
+ cd sft/qwen
+ bash scripts/finetune_lora.sh
+ ```
+
+ ### Evaluation (`evaluation/`)
+ To benchmark a model's performance on the four cognitive dimensions, navigate to the specific model folder (e.g., `evaluation/gemma`) within the cloned repository and run the evaluation script:
+
+ ```bash
+ cd evaluation/gemma
+ bash cog_test.sh
+ ```
+
  ## 📂 Dataset Structure & Files
 
  The dataset is organized into two main formats: standard benchmark files (`.jsonl`) for evaluation and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).
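The usage scripts above operate on a local copy of the dataset. As a minimal sketch, not part of the card itself, the files could be fetched with `huggingface_hub`; the repo id below is a placeholder, since the diff does not name the Hub repository:

```python
from huggingface_hub import snapshot_download

# Download the dataset files (metadata_*.jsonl, original_cognition.json,
# images/) before running the SFT or evaluation scripts.
# NOTE: the repo id is a placeholder -- substitute the actual
# CogIP-Bench dataset id on the Hub.
local_dir = snapshot_download(repo_id="<org>/CogIP-Bench", repo_type="dataset")
print(local_dir)  # local root containing the benchmark files
```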
@@ -60,8 +84,8 @@ The dataset is organized into two main formats: standard benchmark files (`.json
 
  These files contain the image-prompt pairs and ground truth scores used to benchmark model performance against human judgments.
 
- * **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
- * **`metadata_test.jsonl`**: The testing split, used for final evaluation.
+ * **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
+ * **`metadata_test.jsonl`**: The testing split, used for final evaluation.
 
  #### **Data Fields**
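The two splits are plain JSON Lines. For illustration, a minimal loading sketch in Python, assuming the files sit in the working directory and follow the record layout shown in the next hunk:

```python
import json

def load_split(path: str) -> list[dict]:
    """Read a CogIP-Bench split: one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

train = load_split("metadata_train.jsonl")  # 3,200 examples per the card
test = load_split("metadata_test.jsonl")

# Each record holds an id, a relative image path, the evaluation prompt,
# and a human ground-truth score (see the example record below).
example = test[0]
print(example["id"], example["image"], example["score"])
```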
 
@@ -80,16 +104,29 @@ This example shows a prompt for the Aesthetics sub-task, which includes detailed
 
  ```json
  {
- "id": "000800",
- "image": "images/Aesthetics/000800.jpg",
- "prompt": "<image> \n**Visual Aesthetics Analysis Sub-Task (Aesthetics):** \nIn this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.\n\nPlease first give a description label for the corresponding image, then predict the scores based on the following rules: \n- (0.0, 3.5, 'very low') \n- (3.5, 5.0, 'low') \n- (5.0, 6.5, 'medium') \n- (6.5, 8.0, 'high') \n- (8.0, 10.1, 'very high') \n\nThe score should be a number with exactly three decimal places (e.g., 7.234). \n\nPlease return only the label and the scores number, nothing else.",
+ "id": "000800",
+ "image": "images/Aesthetics/000800.jpg",
+ "prompt": "<image>
+ **Visual Aesthetics Analysis Sub-Task (Aesthetics):**
+ In this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.
+
+ Please first give a description label for the corresponding image, then predict the scores based on the following rules:
+ - (0.0, 3.5, 'very low')
+ - (3.5, 5.0, 'low')
+ - (5.0, 6.5, 'medium')
+ - (6.5, 8.0, 'high')
+ - (8.0, 10.1, 'very high')
+
+ The score should be a number with exactly three decimal places (e.g., 7.234).
+
+ Please return only the label and the scores number, nothing else.",
  "score": 4.105
  }
  ```
  ### 2. SFT Data (`original_cognition.json`)
 
- * **Filename:** `original_cognition.json`
- * **Purpose:** This is the original JSON file used for **Supervised Fine-Tuning (SFT)** on the MLLM. This file contains the data formatted for training the MLLM to output the structured response that includes both the label and the numerical score, thereby aligning its output with human cognitive judgments. This is the source file used to generate the structured `.jsonl` data.
+ * **Filename:** `original_cognition.json`
+ * **Purpose:** This is the original JSON file used for **Supervised Fine-Tuning (SFT)** on the MLLM. This file contains the data formatted for training the MLLM to output the structured response that includes both the label and the numerical score, thereby aligning its output with human cognitive judgments. This is the source file used to generate the structured `.jsonl` data.
 
  ---
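The prompt's scoring rules map a numeric score to a descriptive label. A minimal sketch of that mapping, with the bin boundaries copied from the prompt; reading each bin as a half-open interval is an assumption about how boundary scores are labeled:

```python
# Bin boundaries copied verbatim from the Aesthetics prompt; each entry is
# (low, high, label). Treating each bin as [low, high) is an assumption.
BINS = [
    (0.0, 3.5, "very low"),
    (3.5, 5.0, "low"),
    (5.0, 6.5, "medium"),
    (6.5, 8.0, "high"),
    (8.0, 10.1, "very high"),
]

def label_for(score: float) -> str:
    for low, high, label in BINS:
        if low <= score < high:
            return label
    raise ValueError(f"score {score:.3f} outside 0.0-10.1")

print(label_for(4.105))  # "low" -- matches the example record's score
```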
 
@@ -106,4 +143,4 @@ The benchmark evaluates four distinct subjective properties, each with a specifi
 
  ![36de37faa08cd9d1aae35bbf6b2e92a1](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/5Rw2izq5rFb5UM1P6nTJL.png)
 
- ![a926e52b247cb3623a770078a4fc1a6b](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/-1hnW34wOmqzKs0b_DK-P.png)
+ ![a926e2b247cb3623a770078a4fc1a6b](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/-1hnW34wOmqzKs0b_DK-P.png)
 