---
language:
- en
license: mit
task_categories:
- image-text-to-text
tags:
- MICL
- MLLMs
- in-context-learning
- vision-language
---
# TrueMICL: True Multimodal In-Context Learning Dataset

A comprehensive multimodal dataset designed to evaluate and improve true multimodal in-context learning capabilities in Multimodal Large Language Models (MLLMs).

[Paper](https://huggingface.co/papers/2507.15807) | [Code](https://github.com/chenxshuo/true-micl-colm) | [Project page](https://chenxshuo.github.io/true-micl-colm)

## Table of Contents

- [Dataset Overview](#dataset-overview)
- [Dataset Structure](#dataset-structure)
- [Tasks and Domains](#tasks-and-domains)
- [Usage Examples](#usage-examples)
- [Data Collection Methodology](#data-collection-methodology)
- [Citation](#citation)
- [License](#license)
- [Contact](#contact)
## Dataset Overview

TrueMICL addresses a critical limitation of current Multimodal Large Language Models: they tend to neglect the visual information in multimodal demonstrations and fall back on superficial text imitation. This dataset is specifically designed to test **true** multimodal in-context learning by ensuring that:

- Tasks are unsolvable without visual context
- Novel image-text relationships are introduced
- Visual information is perceivable and critical
- Compatibility with language model backbones is maintained

### Key Statistics

- **Total samples**: 867 evaluation samples, plus training data for the synthetic tasks
- **Task categories**: 4 major categories
- **Distinct tasks**: 7 different tasks
- **Domains**: Mathematical reasoning, pattern recognition, concept learning, visual question answering
## Dataset Structure

The dataset is organized into task-specific directories, each containing:

### File Organization

```
dataset/
├── classification/          # Character classification task
│   ├── img/                 # Query and support images
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
├── clevr/                   # CLEVR-based reasoning tasks
│   ├── material/            # Material-based images
│   ├── query/               # Query images
│   ├── shape/               # Shape-based images
│   ├── size/                # Size-based images
│   ├── support/             # Support images
│   ├── query.json           # Main queries
│   ├── support.json         # Support examples
│   └── [query/support]_[material/shape/size].json  # Task-specific splits
├── clock/                   # Clock reading and math
│   ├── img/                 # Clock face images
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
├── operator_induction/      # Mathematical operator learning
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── processed_training_data.json  # Training data
├── palindrome_dataset/      # Palindrome pattern recognition
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── training_data.json   # Training data
├── shapes_count/            # Shape counting task
│   ├── query.json           # Test queries
│   ├── support.json         # Support examples
│   └── training_data.json   # Training data
├── sudoku/                  # Sudoku puzzle solving
│   ├── query.json           # Test queries
│   └── support.json         # Support examples
└── vqav2/                   # Visual Question Answering v2
    ├── query.json           # Test queries
    └── support.json         # Support examples
```
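Given this layout, a quick inventory of the dataset can be scripted. The sketch below assumes the files sit under a local `dataset/` root as drawn above; `count_query_samples` is a hypothetical helper, not part of any released tooling.

```python
import json
from pathlib import Path

def count_query_samples(root):
    """Return {task_name: number of query samples} for every task
    directory under `root` that contains a query.json file."""
    counts = {}
    for query_file in sorted(Path(root).glob("*/query.json")):
        with open(query_file, "r") as f:
            counts[query_file.parent.name] = len(json.load(f))
    return counts

# Example: print per-task query counts for a local copy of the dataset.
if __name__ == "__main__":
    for task, n in count_query_samples("dataset").items():
        print(f"{task}: {n} query samples")
```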
### Data Format

Each JSON file contains structured data with the following schema:

**Query/Support Format**:
```json
{
  "id": "unique_identifier",
  "image": ["path/to/image.png"],
  "question": "Question text with multiple choice options",
  "answer": "Correct answer"
}
```

**VQA Format** (slightly different):
```json
{
  "image_id": 12345,
  "question_id": 67890,
  "question": "Question text",
  "answer": "Answer text"
}
```
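Because the two record layouts differ, downstream code benefits from mapping both onto one schema. The sketch below is one plausible normalizer; `normalize_record` is an illustrative assumption, and note that VQA records carry a numeric `image_id` that still has to be resolved to an actual file by dataset-specific logic.

```python
def normalize_record(rec):
    """Map either record layout (query/support or VQA) onto a single
    schema with keys id / images / question / answer. Field names
    follow the two formats shown above."""
    if "image_id" in rec:  # VQA-format record
        return {"id": str(rec["question_id"]),
                "images": [rec["image_id"]],  # numeric id, not a path
                "question": rec["question"],
                "answer": rec["answer"]}
    return {"id": rec["id"],
            "images": list(rec["image"]),
            "question": rec["question"],
            "answer": rec["answer"]}
```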
### Data Types and Columns

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier for the sample |
| `image` | array | List of image file paths |
| `question` | string | Question or task description |
| `answer` | string | Ground truth answer |
| `image_id` | integer | Image identifier (VQA format) |
| `question_id` | integer | Question identifier (VQA format) |
## Tasks and Domains

### 1. Mathematical Reasoning
- **Operator Induction**: Learn novel mathematical operators from visual examples
- **Clock Math**: Time reading and calculation tasks

### 2. Concept Binding
- **Character Classification**: Classify novel character types from visual examples
- **CLEVR Count**: Object counting and attribute reasoning

### 3. Pattern Finding
- **Sudoku**: Complete Sudoku puzzles using visual pattern recognition
- **Palindrome**: Identify palindromic patterns in visual sequences

### 4. Novel Concept Learning
- **Shapes Count**: Count specific shapes and understand spatial relationships
- **VQA**: General visual question answering requiring multimodal reasoning
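All of these tasks are consumed the same way at inference time: a few support examples are interleaved with the query to form an in-context episode. The sketch below shows one plausible way to assemble such a prompt from the query/support records; `build_icl_prompt` and the `<image>` placeholder token are assumptions for illustration, since every MLLM has its own chat and image template.

```python
def build_icl_prompt(supports, query, k=4):
    """Assemble a k-shot multimodal in-context episode: each support
    example contributes an image placeholder, its question, and its
    answer; the query is appended last with its answer left blank.
    Returns (prompt_text, image_paths) for a processor to consume."""
    parts = []
    for demo in supports[:k]:
        parts.append(f"<image>\n{demo['question']}\nAnswer: {demo['answer']}")
    parts.append(f"<image>\n{query['question']}\nAnswer:")
    # Images appear in the same order as their placeholders.
    images = [demo["image"][0] for demo in supports[:k]] + [query["image"][0]]
    return "\n\n".join(parts), images
```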
## Usage Examples

### Basic Data Exploration

```python
import json

import matplotlib.pyplot as plt
from PIL import Image

# Load and examine a sample
with open("classification/query.json", "r") as f:
    data = json.load(f)

sample = data[0]
print(f"ID: {sample['id']}")
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")

# Load and display the image
img_path = sample['image'][0]
img = Image.open(img_path)
plt.imshow(img)
plt.title(sample['question'])
plt.show()
```
### Task-Specific Loading

```python
import json

# Load the CLEVR subtask splits (query_material.json, query_shape.json,
# query_size.json, as listed in the file organization above)
clevr_tasks = ['material', 'shape', 'size']
for task in clevr_tasks:
    with open(f"clevr/query_{task}.json", "r") as f:
        task_data = json.load(f)
    print(f"CLEVR {task}: {len(task_data)} samples")
```
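Once a model has produced answers for the loaded queries, scoring is straightforward for these short-answer tasks. The scorer below uses normalized exact match; this is a simple illustrative metric of my own choosing, not necessarily the paper's official evaluation protocol.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the ground-truth answers under
    case- and whitespace-insensitive exact match."""
    def normalize(s):
        return " ".join(s.lower().split())
    correct = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return correct / len(references)
```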
## Data Collection Methodology

The dataset was constructed following rigorous criteria to ensure true multimodal learning:

1. **Visual Dependency**: All tasks require visual information and cannot be solved through text-only reasoning
2. **Novel Relationships**: Introduction of previously unseen image-text mappings
3. **Perceptual Validity**: Visual elements are clearly perceivable and unambiguous
4. **Model Compatibility**: Designed to work with standard language model architectures
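One mechanical aspect of these criteria can be sanity-checked in code: before running experiments, verify that every referenced image is actually present so each example's visual context is available. `missing_images` below is a hypothetical helper written for this card, not part of the released tooling.

```python
from pathlib import Path

def missing_images(samples, root="."):
    """Return the image paths referenced by the given samples that do
    not exist on disk, resolving them relative to `root`."""
    missing = []
    for sample in samples:
        for rel_path in sample.get("image", []):
            if not (Path(root) / rel_path).is_file():
                missing.append(rel_path)
    return missing
```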
### Source Data

- **CLEVR**: Modified from the original CLEVR dataset for visual reasoning
- **VQAv2**: Subset of the Visual Question Answering v2 dataset
- **Synthetic Tasks**: Custom-generated tasks for operator induction, palindromes, and shape counting
- **Novel Concepts**: Artificially created character types and visual patterns
## Citation

```bibtex
@inproceedings{chen2025truemicl,
  title={True Multimodal In-Context Learning Needs Attention to the Visual Context},
  author={Chen, Shuo and others},
  booktitle={Conference on Language Modeling (COLM)},
  year={2025},
  url={https://huggingface.co/papers/2507.15807}
}
```
## License

This dataset is released under the [MIT License](LICENSE). Please see the license file for detailed terms and conditions.

## Contact

For questions, issues, or contributions regarding this dataset:

- **Project Website**: https://chenxshuo.github.io/true-micl-colm/
- **Paper**: https://huggingface.co/papers/2507.15807
- **Code**: https://github.com/chenxshuo/true-micl-colm
- **Issues**: Please report bugs or request features through the issue tracker of the code repository

---

**Note**: This dataset is designed for research purposes to advance multimodal in-context learning. The novel tasks and visual concepts are specifically crafted to test true multimodal understanding rather than superficial pattern matching.