# DailyClue Dataset

**Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios**
## Dataset Summary
DailyClue is a benchmark for evaluating visual clue-driven reasoning in Multimodal Large Language Models (MLLMs). Unlike benchmarks that test pre-existing knowledge, DailyClue requires models to actively identify decisive visual clues from images before producing answers.
The dataset spans 4 major domains and 16 distinct subtasks, with 666 total questions.
## Dataset Structure
```
DailyClue/
├── daily_life/       # Images for Daily Commonsense Reasoning
├── location/         # Images for Location Identification
├── science/          # Images for Scientific Commonsense
├── spatial/          # Images for Spatial Reasoning
├── daily_life.json
├── location.json
├── science.json
└── spatial.json
```
## Statistics
| Category | # Questions | Formats |
|---|---|---|
| Daily Commonsense Reasoning | 180 | Multiple Choice, Yes/No, Open-ended |
| Location Identification | 200 | Open-ended |
| Spatial Reasoning | 163 | Multiple Choice, Yes/No |
| Scientific Commonsense | 123 | Multiple Choice, Yes/No, Open-ended |
| **Total** | **666** | |
## Data Fields
Each JSON entry contains:
| Field | Type | Description |
|---|---|---|
| `image` | `list[str]` | Image filename(s) within the category subfolder |
| `question` | `str` | The question posed to the model |
| `clues` | `str` | Human-annotated ground-truth visual clues (see note below) |
| `ground_truth` | `str` | The correct answer |
| `format` | `str` | `"Multiple choice"`, `"Yes or no"`, or `"Open-ended"` |
| `category_1` | `str` | Primary domain (one of the four above) |
| `category_2` | `str` | Subtask within the primary domain |
| `language` | `str` | `"English"` |
**Note on `clues`:** This field contains human-annotated ground-truth visual clues. It is used in ablation experiments (injecting GT clues to probe their impact on model accuracy) and in the Rigorous Evaluation Protocol (checking whether model-predicted clues semantically align with the GT clues). It is not fed to the model during standard inference.
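For convenience, here is a minimal loading sketch in plain Python. It assumes each JSON file is an array of entry objects with the fields above (an assumption; this card does not pin down the top-level layout, so inspect one file first) and that the dataset was downloaded to `./dataset` as shown in the Usage section below:

```python
import json
import os

DATASET_ROOT = "./dataset"  # local_dir used in the download step (see Usage below)

def load_category(name):
    """Load one category's entries and resolve full image paths.

    Assumes <name>.json is a JSON array of objects with the fields
    documented above; adjust if the actual layout differs.
    """
    with open(os.path.join(DATASET_ROOT, f"{name}.json"), encoding="utf-8") as f:
        entries = json.load(f)
    for entry in entries:
        # `image` holds filename(s) relative to the category subfolder
        entry["image_paths"] = [
            os.path.join(DATASET_ROOT, name, fname) for fname in entry["image"]
        ]
    return entries

entries = load_category("daily_life")
print(len(entries), entries[0]["question"], entries[0]["ground_truth"])
```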
## Usage

### Download
```bash
# via git
git clone https://huggingface.co/datasets/Crysun/DailyClue
```

```python
# via huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Crysun/DailyClue", repo_type="dataset", local_dir="./dataset")
```
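A quick sanity check that the download matches the layout under Dataset Structure (a sketch; it only verifies that the expected folders and files are present):

```python
from pathlib import Path

root = Path("./dataset")
for cat in ["daily_life", "location", "science", "spatial"]:
    # Expect an image subfolder and a matching JSON annotation file per category
    print(cat, (root / cat).is_dir(), (root / f"{cat}.json").is_file())
```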
### Run Inference
After downloading, point the inference script to the local directory:
```bash
python infer/inference.py \
    --dataset ./dataset \
    --model_names "gpt-4o" \
    --prompt_mode "b"
```
See the GitHub repository for the full evaluation pipeline.
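To illustrate the GT-clue ablation described in the note on `clues` above, one plausible prompt construction is sketched below. This is not the prompt used by `infer/inference.py` (see the repository for the real templates); field names follow the Data Fields table:

```python
def build_prompt(entry, inject_gt_clues=False):
    """Compose a text prompt for one DailyClue entry (illustrative only)."""
    parts = [entry["question"]]
    if inject_gt_clues:
        # Ablation setting: expose human-annotated GT clues, which are
        # withheld from the model during standard inference.
        parts.insert(0, f"Visual clues: {entry['clues']}")
    # Constrain the answer format per the entry's `format` field
    if entry["format"] == "Multiple choice":
        parts.append("Answer with the letter of the correct option.")
    elif entry["format"] == "Yes or no":
        parts.append("Answer with Yes or No.")
    return "\n".join(parts)

# e.g. build_prompt(entries[0], inject_gt_clues=True) with load_category above
```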
## Citation
```bibtex
@article{dailyclue2026,
  title={Seek-and-Solve: Benchmarking MLLMs for Visual Clue-Driven Reasoning in Daily Scenarios},
  author={Li, Xiaomin and Wang, Tala and Zhong, Zichen and Zhang, Ying and Zheng, Zirui and Isobe, Takashi and Li, Dezhuang and Lu, Huchuan and He, You and Jia, Xu},
  journal={arXiv preprint arXiv:2604.14041},
  year={2026}
}
```