Improve dataset card: Add paper, code, project page links, task category, and sample usage (#2)
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,17 +1,17 @@
 ---
-task_categories:
-- question-answering
 language:
 - en
 tags:
 - cognition
 - emotional_valence
 - funniness
 - memorability
 - aesthetics
-size_categories:
-- 1K<n<10K
-license: cc-by-4.0
 dataset_info:
   features:
   - name: id
@@ -46,12 +46,36 @@ configs:
 
 # CogIP-Bench: Cognition Image Property Benchmark
 
+Project Page: [MLLM-Cognition-project-page](https://follen-cry.github.io/MLLM-Cognition-project-page/)
+Code: [Follen-cry/MLLM_Cognition_Alignment](https://github.com/Follen-cry/MLLM_Cognition_Alignment)
+
 
 
 **CogIP-Bench** is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). This gap is what **CogIP-Bench** aims to measure.
 
 This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.
 
+## Sample Usage
+
+The GitHub repository provides scripts for Supervised Fine-Tuning (SFT) and evaluation on the CogIP-Bench dataset.
+
+### Supervised Fine-Tuning (`sft/`)
+To train a model (e.g., Qwen2.5-VL), navigate to the relevant directory within the cloned repository and run the script:
+
+```bash
+cd sft/qwen
+bash scripts/finetune_lora.sh
+```
+
+### Evaluation (`evaluation/`)
+To benchmark a model's performance on the four cognitive dimensions, navigate to the specific model folder (e.g., `evaluation/gemma`) and run the evaluation script:
+
+```bash
+cd evaluation/gemma
+bash cog_test.sh
+```
+
 ## 📂 Dataset Structure & Files
 
 The dataset is organized into two main formats: standard benchmark files (`.jsonl`) for evaluation and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).
@@ -60,8 +84,8 @@ The dataset is organized into two main formats: standard benchmark files (`.json
 
 These files contain the image-prompt pairs and ground-truth scores used to benchmark model performance against human judgments.
 
-*
-*
+* **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
+* **`metadata_test.jsonl`**: The testing split, used for final evaluation.
 
 #### **Data Fields**
 
@@ -80,16 +104,29 @@ This example shows a prompt for the Aesthetics sub-task, which includes detailed
 
 ```json
 {
-  "id": "000800",
-  "image": "images/Aesthetics/000800.jpg",
-  "prompt": "<image>
+  "id": "000800",
+  "image": "images/Aesthetics/000800.jpg",
+  "prompt": "<image>
+**Visual Aesthetics Analysis Sub-Task (Aesthetics):**
+In this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.
+
+Please first give a description label for the corresponding image, then predict the score based on the following rules:
+- (0.0, 3.5, 'very low')
+- (3.5, 5.0, 'low')
+- (5.0, 6.5, 'medium')
+- (6.5, 8.0, 'high')
+- (8.0, 10.1, 'very high')
+
+The score should be a number with exactly three decimal places (e.g., 7.234).
+
+Please return only the label and the score number, nothing else.",
   "score": 4.105
 }
 ```
 ### 2. SFT Data (`original_cognition.json`)
 
-*
-*
+* **Filename:** `original_cognition.json`
+* **Purpose:** The original JSON file used for **Supervised Fine-Tuning (SFT)** of the MLLM. It contains the data formatted to train the model to output a structured response with both the label and the numerical score, aligning its output with human cognitive judgments; it is also the source used to generate the structured `.jsonl` files.
 
 ---
 
@@ -106,4 +143,4 @@ The benchmark evaluates four distinct subjective properties, each with a specifi
 
 
 
-
+
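The benchmark records shown in the example above are plain JSON Lines. As a minimal illustration (not from the released repository), the split files can be read with standard-library Python; the filename `metadata_test.jsonl` and the `id`/`image`/`prompt`/`score` fields come from the card, everything else here is an assumption:

```python
import json

def load_benchmark(path: str = "metadata_test.jsonl") -> list[dict]:
    """Read one JSON record per line, as in the benchmark split files."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

records = load_benchmark()
print(len(records), "records")
print(records[0]["image"], records[0]["score"])  # e.g. images/Aesthetics/000800.jpg 4.105
```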
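The scoring rubric embedded in the Aesthetics prompt maps numeric ranges to coarse labels. A hypothetical helper that makes the bucketing explicit; the card does not state whether range boundaries are inclusive, so half-open `[low, high)` intervals are assumed:

```python
# Hypothetical helper: the (low, high, label) triples mirror the rubric in
# the Aesthetics prompt; half-open [low, high) boundaries are an assumption.
BUCKETS = [
    (0.0, 3.5, "very low"),
    (3.5, 5.0, "low"),
    (5.0, 6.5, "medium"),
    (6.5, 8.0, "high"),
    (8.0, 10.1, "very high"),
]

def score_to_label(score: float) -> str:
    for low, high, label in BUCKETS:
        if low <= score < high:
            return label
    raise ValueError(f"score {score} outside [0.0, 10.1)")

print(score_to_label(4.105))  # -> 'low', consistent with the example record
```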
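Finally, the card says the `.jsonl` files benchmark model performance against human judgments but does not name the official metric. Purely as an illustration, predicted scores can be compared against the ground-truth `score` field with mean absolute error:

```python
# Illustrative only: the official CogIP-Bench metric is not specified in
# the card; mean absolute error is a stand-in for demonstration.
def mean_absolute_error(pred: list[float], gold: list[float]) -> float:
    assert pred and len(pred) == len(gold), "need matched, non-empty lists"
    return sum(abs(p - g) for p, g in zip(pred, gold)) / len(pred)

gold = [4.105, 7.234, 5.500]  # ground-truth "score" values (toy data)
pred = [4.500, 6.900, 5.000]  # stand-in model predictions
print(f"MAE: {mean_absolute_error(pred, gold):.3f}")  # MAE: 0.410
```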