| idx | task | image_relation | image_type | question | options | answer | image_list | counterpart_idx | count |
|---|---|---|---|---|---|---|---|---|---|
| 100 | Counting | Partial Similarity | Photography | How many other garments besides a complete mitten pair are shown in each image? <image> <image> | ["Four", "Two", "Three", "None of the choices provided", "Zero"] | E | (2 JPEG images, bytes omitted) | 101 | 2 |
| 101 | Counting | Partial Similarity | Photography | How many other garments besides a complete mitten pair are shown in each image? <image> <image> | ["Four", "Three", "One", "None of the choices provided", "Two"] | D | (2 JPEG images, bytes omitted) | 100 | 2 |
| 102 | Counting | Partial Similarity | Photography | How many girls wearing matching knitted mittens and cap, with her hands pointing up towards her fac(...TRUNCATED) | ["Four", "Two", "Three", "One", "None of the choices provided"] | D | (2 JPEG images, bytes omitted) | 103 | 2 |
| 103 | Counting | Partial Similarity | Photography | How many girls wearing matching knitted mittens and cap, with her hands pointing up towards her fac(...TRUNCATED) | ["Two", "Three", "Four", "None of the choices provided", "Zero"] | D | (2 JPEG images, bytes omitted) | 102 | 2 |
| 104 | Counting | Partial Similarity | Photography | How many girls wearing matching knitted mittens and cap, with her hands pointing up towards her fac(...TRUNCATED) | ["None of the choices provided", "Four", "Three", "One", "Two"] | D | (2 JPEG images, bytes omitted) | 105 | 2 |
| 105 | Counting | Partial Similarity | Photography | How many girls wearing matching knitted mittens and cap, with her hands pointing up towards her fac(...TRUNCATED) | ["Three", "Two", "Zero", "Four", "None of the choices provided"] | E | (2 JPEG images, bytes omitted) | 104 | 2 |
| 106 | Counting | Partial Similarity | Photography | How many hands with gloves on them are there in the image? <image> <image> | ["Two", "Four", "One", "Three", "None of the choices provided"] | A | (2 JPEG images, bytes omitted) | 107 | 2 |
| 107 | Counting | Partial Similarity | Photography | How many hands with gloves on them are there in the images? <image> <image> | ["None of the choices provided", "Zero", "One", "Three", "Four"] | A | (2 JPEG images, bytes omitted) | 106 | 2 |
| 108 | Counting | Partial Similarity | Photography | How many hands with gloves on them are there? <image> <image> | ["None of the choices provided", "Four", "One", "Three", "Two"] | E | (2 JPEG images, bytes omitted) | 109 | 2 |
| 109 | Counting | Partial Similarity | Photography | How many hands with gloves on them are there? <image> <image> | ["None of the choices provided", "Three", "One", "Zero", "Four"] | A | (2 JPEG images, bytes omitted) | 108 | 2 |
# MuirBench (Image Count = 2)

## Dataset Summary
This dataset is a filtered subset of MuirBench, containing only samples with exactly 2 input images per question.
The goal of this subset is to support controlled experiments on multi-image visual reasoning, where the number of images is fixed and small, enabling clearer analysis of compositional and relational reasoning in Vision-Language Models (VLMs).
- Original dataset: MuirBench
- Filter criterion: `len(image_list) == 2`
- Modality: Image + Text
- Task type: Multi-image visual reasoning / VQA-style QA
## Dataset Structure

Each sample contains (a subset of the original fields):

| Field name | Type | Description |
|---|---|---|
| `question` | string | The input question |
| `image_list` | list of images | A list of exactly 2 images (stored as bytes) |
| `count` | int | Number of images (always 2) |
| `choices`* | list (optional) | Multiple-choice options (if applicable) |
| `answer`* | string/int | Ground-truth answer |
| `task`* | string | Task category |
| `relation`* | string | Reasoning relation type |
* Availability depends on the original MuirBench annotation.
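As a concrete illustration of the schema above, a single sample can be represented as a plain Python dict (a minimal sketch: the values are modeled on sample `idx` 106 shown in the preview, and the image bytes are elided placeholders, not real data):

```python
# Mock sample mirroring the fields listed above; image bytes are elided.
sample = {
    "idx": "106",
    "task": "Counting",
    "image_relation": "Partial Similarity",
    "image_type": "Photography",
    "question": "How many hands with gloves on them are there in the image? <image> <image>",
    "options": ["Two", "Four", "One", "Three", "None of the choices provided"],
    "answer": "A",
    "image_list": [b"<jpeg bytes>", b"<jpeg bytes>"],  # raw encoded images
    "counterpart_idx": "107",
    "count": 2,
}

# In this subset, count and len(image_list) are always exactly 2.
assert len(sample["image_list"]) == sample["count"] == 2

# The answer letter indexes into the options list ("A" -> first option).
answer_text = sample["options"][ord(sample["answer"]) - ord("A")]  # "Two"
```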
## Intended Use
This dataset is intended for:
- Evaluating multi-image reasoning capabilities of VLMs
- Controlled ablations on the number of input images vs. accuracy
- Benchmarking models on compositional visual understanding
- Research on visual grounding across multiple images
It is not intended for training large-scale commercial systems.
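For evaluation use cases, a minimal multiple-choice accuracy loop might look like the following (a sketch: `predict` is a hypothetical stand-in for an actual VLM call that returns an answer letter such as `"A"`):

```python
def evaluate(samples, predict):
    """Compute multiple-choice accuracy over a list of samples.

    `predict` is a placeholder for a real model call: it takes a sample
    dict and returns an answer letter such as "A".
    """
    if not samples:
        return 0.0
    correct = sum(1 for s in samples if predict(s) == s["answer"])
    return correct / len(samples)

# Toy usage with a dummy predictor that always answers "A".
toy = [
    {"question": "q1", "answer": "A"},
    {"question": "q2", "answer": "D"},
]
acc = evaluate(toy, lambda s: "A")  # 1 of 2 correct -> 0.5
```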
## Creation Process

This dataset was created by:
- Loading the original MuirBench dataset
- Counting the number of images per sample (`len(image_list)`)
- Filtering samples where the image count is exactly 2
- Releasing the filtered subset as a standalone dataset for research convenience
No annotations were modified.
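The filtering step can be sketched in plain Python as below (the rows are mock data standing in for real samples; with the Hugging Face `datasets` library, the same predicate would be passed to `Dataset.filter`):

```python
def keep_two_images(example):
    # The filter criterion: exactly 2 images per sample.
    return len(example["image_list"]) == 2

# Mock rows standing in for the original MuirBench samples.
rows = [
    {"idx": "100", "image_list": [b"img", b"img"]},
    {"idx": "200", "image_list": [b"img", b"img", b"img"]},
    {"idx": "201", "image_list": [b"img"]},
]

subset = [r for r in rows if keep_two_images(r)]
# With the `datasets` library, the equivalent would be roughly:
#   load_dataset(<original MuirBench repo id>).filter(keep_two_images)
```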
## Licensing and Attribution
This dataset inherits the license of the original MuirBench dataset.
If you use this dataset in academic work, please cite the original MuirBench paper.