---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-text-to-text
tags:
- cognition
- emotional_valence
- funniness
- memorability
- aesthetics
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype:
      image:
        decode: false
  - name: dimension
    dtype: string
  - name: prompt
    dtype: string
  - name: score
    dtype: float32
  splits:
  - name: train
    num_bytes: 145969790.0
    num_examples: 3200
  - name: test
    num_bytes: 22545428.0
    num_examples: 480
  download_size: 165420288
  dataset_size: 168515218.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# CogIP-Bench: Cognition Image Property Benchmark

Paper: [From Pixels to Feelings: Aligning MLLMs with Human Cognitive Perception of Images](https://huggingface.co/papers/2511.22805)
Project Page: [MLLM-Cognition-project-page](https://follen-cry.github.io/MLLM-Cognition-project-page/)
Code: [Follen-cry/MLLM_Cognition_Alignment](https://github.com/Follen-cry/MLLM_Cognition_Alignment)

![90a73d7a39db648d3dfd442e6efed570](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/-zoo7NfdxCIQsJboMX49g.png)

**CogIP-Bench** is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). This gap is what **CogIP-Bench** aims to measure.

This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.

## Sample Usage

The GitHub repository provides scripts for Supervised Fine-Tuning (SFT) and Evaluation using the CogIP-Bench dataset.

### Supervised Fine-Tuning (`sft/`)
To train a model (e.g., Qwen2.5-VL), navigate to the relevant directory within the cloned repository and run the script:

```bash
cd sft/qwen
bash scripts/finetune_lora.sh
```

### Evaluation (`evaluation/`)
To benchmark a model's performance on the 4 cognitive dimensions, navigate to the specific model folder (e.g., `evaluation/gemma`) within the cloned repository and run the evaluation script:

```bash
cd evaluation/gemma
bash cog_test.sh
```

## 📂 Dataset Structure & Files

The dataset is organized into two main formats: standard benchmark files (`.jsonl`) for evaluation and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).

### 1. Benchmark Data (`metadata_train.jsonl`, `metadata_test.jsonl`)

These files contain the image-prompt pairs and ground truth scores used to benchmark model performance against human judgments.

*   **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
*   **`metadata_test.jsonl`**: The testing split, containing **480 examples**, used for final evaluation.

#### **Data Fields**

Each line in the `.jsonl` files represents a single datapoint:

| Field | Type | Description |
| :--- | :--- | :--- |
| `id` | `string` | A unique identifier for the image (e.g., `"000800"`). |
| `image` | `Image` | The image file (stored locally in the dataset structure, e.g., `"images/Aesthetics/000800.jpg"`). |
| `dimension` | `string` | The cognitive dimension the example belongs to (e.g., `"Aesthetics"`). |
| `prompt` | `string` | The specific instruction given to the MLLM, employing the **"Describe-then-Predict"** strategy. |
| `score` | `float32` | The ground truth human-preference score, which is the target value for the model to predict (e.g., `4.105`). |
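
Each `.jsonl` line can be read with the Python standard library alone. A minimal sketch (the record below is abridged and illustrative, not verbatim dataset content):

```python
import json

# Abridged, illustrative record in the shape of a metadata_test.jsonl line.
line = ('{"id": "000800", "image": "images/Aesthetics/000800.jpg", '
        '"dimension": "Aesthetics", "prompt": "<image> ...", "score": 4.105}')

record = json.loads(line)
print(record["id"], record["dimension"], record["score"])  # 000800 Aesthetics 4.105
```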

#### **Example Entry (from `metadata_test.jsonl`)**

This example shows a prompt for the Aesthetics sub-task, which includes detailed instructions and the scoring scale.

```json
{
  "id": "000800",
  "image": "images/Aesthetics/000800.jpg",
  "prompt": "<image>  
**Visual Aesthetics Analysis Sub-Task (Aesthetics):** 
In this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.

Please first give a description label for the corresponding image, then predict the scores based on the following rules:  
- (0.0, 3.5, 'very low')  
- (3.5, 5.0, 'low')  
- (5.0, 6.5, 'medium')  
- (6.5, 8.0, 'high')  
- (8.0, 10.1, 'very high')  

The score should be a number with exactly three decimal places (e.g., 7.234).  

Please return only the label and the scores number, nothing else.",
  "score": 4.105
}
```
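
Given that prompt, a model reply is expected to end with a three-decimal score preceded by a descriptive label. A minimal sketch of parsing such a reply and mapping the score back to its bucket (the exact reply format, e.g. `"low 4.105"`, is an assumption based on the prompt above):

```python
import re

# Scoring buckets copied from the Aesthetics prompt: [low, high) -> label.
BUCKETS = [
    (0.0, 3.5, "very low"),
    (3.5, 5.0, "low"),
    (5.0, 6.5, "medium"),
    (6.5, 8.0, "high"),
    (8.0, 10.1, "very high"),
]

def parse_reply(text: str) -> tuple[str, float]:
    """Split a reply like 'low 4.105' into (label, score).

    Assumes the reply ends with a number having three decimal places;
    everything before it is treated as the descriptive label.
    """
    stripped = text.strip()
    match = re.search(r"(-?\d+\.\d{3})\s*$", stripped)
    if match is None:
        raise ValueError(f"no score found in: {text!r}")
    return stripped[: match.start()].strip(), float(match.group(1))

def bucket_label(score: float) -> str:
    """Look up the bucket label for a numeric score."""
    for low, high, name in BUCKETS:
        if low <= score < high:
            return name
    raise ValueError(f"score out of range: {score}")

label, score = parse_reply("low 4.105")
print(label, score, bucket_label(score))  # low 4.105 low
```
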
### 2. SFT Data (`original_cognition.json`)

*   **Filename:** `original_cognition.json`
*   **Purpose:** The original JSON file used for **Supervised Fine-Tuning (SFT)** of the MLLM. It formats each example so the model learns to output a structured response containing both the label and the numerical score, aligning its output with human cognitive judgments. The structured `.jsonl` files above are generated from this source file.

---

## 🧠 Cognitive Dimensions

The benchmark evaluates four distinct subjective properties, each with a specific scale and corresponding labels used in the `prompt`.

| Dimension | Description | Typical Scale | Scoring Buckets |
| :--- | :--- | :--- | :--- |
| **Aesthetics** | Assesses visual appeal, harmony, and composition. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| **Funniness** | Measures the humorous or amusing quality of an image. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| **Emotional Valence** | Captures the emotional tone (positive to negative). | -3.0 to 3.0 (Mapped to 1-10) | Negative, Neutral, Positive |
| **Memorability** | Reflects the likelihood of an image being remembered. | 0.0 to 1.0 (Mapped to 1-10) | Very Low, Low, Medium, High, Very High |
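
The table notes that Emotional Valence (-3 to 3) and Memorability (0 to 1) are mapped onto a 1-10 range. A simple linear rescaling consistent with that note might look like the following; the exact mapping used by the authors is not specified on this card, so this is an assumption:

```python
def rescale(value: float, src_min: float, src_max: float,
            dst_min: float = 1.0, dst_max: float = 10.0) -> float:
    """Linearly map `value` from [src_min, src_max] onto [dst_min, dst_max]."""
    fraction = (value - src_min) / (src_max - src_min)
    return dst_min + fraction * (dst_max - dst_min)

print(rescale(0.0, -3.0, 3.0))  # neutral valence -> 5.5
print(rescale(0.5, 0.0, 1.0))   # mid memorability -> 5.5
```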

![36de37faa08cd9d1aae35bbf6b2e92a1](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/5Rw2izq5rFb5UM1P6nTJL.png)

![a926e2b247cb3623a770078a4fc1a6b](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/-1hnW34wOmqzKs0b_DK-P.png)