---
language:
- en
license: mit
task_categories:
- image-text-to-text
tags:
- MICL
- MLLMs
- in-context-learning
- vision-language
---
# TrueMICL: True Multimodal In-Context Learning Dataset
A comprehensive multimodal dataset designed to evaluate and improve true multimodal in-context learning capabilities in Multimodal Large Language Models (MLLMs).
[Paper](https://huggingface.co/papers/2507.15807) | [Code](https://github.com/chenxshuo/true-micl-colm) | [Project page](https://chenxshuo.github.io/true-micl-colm)
## Table of Contents
- [Dataset Overview](#dataset-overview)
- [Dataset Structure](#dataset-structure)
- [Tasks and Domains](#tasks-and-domains)
- [Usage Examples](#usage-examples)
- [Data Collection Methodology](#data-collection-methodology)
- [Citation](#citation)
- [License](#license)
- [Contact](#contact)
## Dataset Overview
TrueMICL addresses a critical limitation in current Multimodal Large Language Models: their tendency to neglect visual information in multimodal demonstrations, leading to superficial text imitation. This dataset is specifically designed to test **true** multimodal in-context learning by ensuring that:
- Tasks are unsolvable without visual context
- Novel image-text relationships are introduced
- Visual information is perceivable and critical
- Compatibility with language model backbones is maintained
### Key Statistics
- **Total samples**: 867 evaluation samples, plus training splits for several tasks
- **Task categories**: 4 major categories
- **Distinct tasks**: 7
- **Domains**: Mathematical reasoning, pattern recognition, concept learning, visual question answering
## Dataset Structure
The dataset is organized into task-specific directories, each containing:
### File Organization
```
dataset/
├── classification/ # Character classification task
│ ├── img/ # Query and support images
│ ├── query.json # Test queries
│ └── support.json # Support examples
├── clevr/ # CLEVR-based reasoning tasks
│ ├── material/ # Material-based images
│ ├── query/ # Query images
│ ├── shape/ # Shape-based images
│ ├── size/ # Size-based images
│ ├── support/ # Support images
│ ├── query.json # Main queries
│ ├── support.json # Support examples
│ └── [query/support]_[material/shape/size].json # Task-specific splits
├── clock/ # Clock reading and math
│ ├── img/ # Clock face images
│ ├── query.json # Test queries
│ └── support.json # Support examples
├── operator_induction/ # Mathematical operator learning
│ ├── query.json # Test queries
│ ├── support.json # Support examples
│ └── processed_training_data.json # Training data
├── palindrome_dataset/ # Palindrome pattern recognition
│ ├── query.json # Test queries
│ ├── support.json # Support examples
│ └── training_data.json # Training data
├── shapes_count/ # Shape counting task
│ ├── query.json # Test queries
│ ├── support.json # Support examples
│ └── training_data.json # Training data
├── sudoku/ # Sudoku puzzle solving
│ ├── query.json # Test queries
│ └── support.json # Support examples
└── vqav2/ # Visual Question Answering v2
├── query.json # Test queries
└── support.json # Support examples
```
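A quick way to sanity-check a local copy is to walk the task directories and count the entries in each split. A minimal sketch, assuming the layout above with each JSON file holding a top-level list:

```python
import json
from pathlib import Path

# Assumes the dataset root is the current directory; adjust as needed.
DATASET_ROOT = Path(".")

for task_dir in sorted(DATASET_ROOT.iterdir()):
    if not task_dir.is_dir():
        continue
    counts = {}
    for split in ("query", "support"):
        split_file = task_dir / f"{split}.json"
        if split_file.exists():
            with open(split_file) as f:
                counts[split] = len(json.load(f))
    if counts:
        print(f"{task_dir.name}: {counts}")
```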
### Data Format
Each JSON file contains structured data with the following schema:
**Query/Support Format**:
```json
{
"id": "unique_identifier",
"image": ["path/to/image.png"],
"question": "Question text with multiple choice options",
"answer": "Correct answer"
}
```
**VQA Format** (used by the `vqav2` task):
```json
{
"image_id": 12345,
"question_id": 67890,
"question": "Question text",
"answer": "Answer text"
}
```
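Since the `vqav2` records follow a different schema, one option is to normalize them into the common query/support format before mixing tasks. A minimal sketch; the image path template is a placeholder, since the VQAv2 image location depends on your local setup:

```python
def normalize_vqa(record: dict) -> dict:
    """Map a VQA-format record onto the common query/support schema.

    The image filename template below is an assumption; point it at
    your local copy of the VQAv2/COCO images.
    """
    return {
        "id": str(record["question_id"]),
        # hypothetical path template for locally stored images
        "image": [f"vqav2/img/{record['image_id']}.jpg"],
        "question": record["question"],
        "answer": record["answer"],
    }
```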
### Data Types and Columns
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier for the sample |
| `image` | array | List of image file paths |
| `question` | string | Question or task description |
| `answer` | string | Ground truth answer |
| `image_id` | integer | Image identifier (VQA format) |
| `question_id` | integer | Question identifier (VQA format) |
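For loader code, the two schemas can be expressed as `TypedDict`s so a type checker catches field mismatches (a sketch; field names follow the table above):

```python
from typing import TypedDict

class Sample(TypedDict):
    id: str
    image: list[str]  # image file paths
    question: str
    answer: str

class VQASample(TypedDict):
    image_id: int
    question_id: int
    question: str
    answer: str
```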
## Tasks and Domains
### 1. Mathematical Reasoning
- **Operator Induction**: Learn novel mathematical operators from visual examples
- **Clock Math**: Time reading and calculation tasks
### 2. Concept Binding
- **Character Classification**: Classify novel character types from visual examples
- **CLEVR Count**: Object counting and attribute reasoning
### 3. Pattern Finding
- **Sudoku**: Complete Sudoku puzzles using visual pattern recognition
- **Palindrome**: Identify palindromic patterns in visual sequences
### 4. Novel Concept Learning
- **Shapes Count**: Count specific shapes and understand spatial relationships
- **VQA**: General visual question answering requiring multimodal reasoning
## Usage Examples
### Basic Data Exploration
```python
import json
import matplotlib.pyplot as plt
from PIL import Image
# Load and examine a sample
with open("classification/query.json", "r") as f:
data = json.load(f)
sample = data[0]
print(f"ID: {sample['id']}")
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")
# Load and display the image
img_path = sample['image'][0]
img = Image.open(img_path)
plt.imshow(img)
plt.title(sample['question'])
plt.show()
```
### Task-Specific Loading
```python
import json

# Load the CLEVR subtask splits
clevr_tasks = ['material', 'shape', 'size']
for task in clevr_tasks:
    with open(f"clevr/query_{task}.json", "r") as f:
        task_data = json.load(f)
    print(f"CLEVR {task}: {len(task_data)} samples")
```
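The support/query split maps directly onto few-shot prompting: each support example supplies an (image, question, answer) demonstration, and the query supplies the final (image, question) pair the model must answer. A minimal sketch of assembling such a prompt in the interleaved chat-message style many MLLM APIs accept; the exact message schema varies by model and is an assumption here:

```python
import json

def build_micl_messages(support, query, n_shots=2):
    """Interleave n_shots demonstrations with the final query.

    Produces a generic list of chat messages with image placeholders;
    the concrete message schema expected by your MLLM may differ.
    """
    messages = []
    for demo in support[:n_shots]:
        messages.append({
            "role": "user",
            "content": [
                {"type": "image", "path": demo["image"][0]},
                {"type": "text", "text": demo["question"]},
            ],
        })
        messages.append({"role": "assistant", "content": demo["answer"]})
    messages.append({
        "role": "user",
        "content": [
            {"type": "image", "path": query["image"][0]},
            {"type": "text", "text": query["question"]},
        ],
    })
    return messages

with open("operator_induction/support.json") as f:
    support = json.load(f)
with open("operator_induction/query.json") as f:
    queries = json.load(f)

messages = build_micl_messages(support, queries[0], n_shots=2)
```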
## Data Collection Methodology
The dataset was constructed following rigorous criteria to ensure true multimodal learning:
1. **Visual Dependency**: All tasks require visual information and cannot be solved through text-only reasoning
2. **Novel Relationships**: Introduction of previously unseen image-text mappings
3. **Perceptual Validity**: Visual elements are clearly perceivable and unambiguous
4. **Model Compatibility**: Designed to work with standard language model architectures
### Source Data
- **CLEVR**: Modified from the original CLEVR dataset for visual reasoning
- **VQAv2**: Subset of the Visual Question Answering v2 dataset
- **Synthetic Tasks**: Custom-generated tasks for operator induction, palindromes, and shape counting
- **Novel Concepts**: Artificially created character types and visual patterns
## Citation
```bibtex
@inproceedings{chen2025truemicl,
  title={True Multimodal In-Context Learning Needs Attention to the Visual Context},
  author={Shuo Chen and Jianzhe Liu and Zhen Han and Yan Xia and Daniel Cremers and Philip Torr and Volker Tresp and Jindong Gu},
  booktitle={Conference on Language Modeling (COLM)},
  year={2025},
  url={https://huggingface.co/papers/2507.15807}
}
```
## License
This dataset is released under the [MIT License](LICENSE). Please see the license file for detailed terms and conditions.
## Contact
For questions, issues, or contributions regarding this dataset:
- **Project Website**: https://chenxshuo.github.io/true-micl-colm/
- **Paper**: https://huggingface.co/papers/2507.15807
- **Code**: https://github.com/chenxshuo/true-micl-colm
- **Issues**: Please report bugs or request features through the issue tracker of the GitHub repository above
---
**Note**: This dataset is designed for research purposes to advance multimodal in-context learning. The novel tasks and visual concepts are specifically crafted to test true multimodal understanding rather than superficial pattern matching. |