---
license: apache-2.0
task_categories:
- image-feature-extraction
- zero-shot-image-classification
language:
- en
tags:
- fashion
- image-retrieval
- benchmark
- e-commerce
- visual-search
pretty_name: LookBench
size_categories:
- 10K<n<100K
configs:
- config_name: aigen_streetlook
  data_files:
  - split: query
    path: v20251201/aigen_streetlook/query.parquet
  - split: gallery
    path: v20251201/aigen_streetlook/gallery.parquet
- config_name: aigen_studio
  data_files:
  - split: query
    path: v20251201/aigen_studio/query.parquet
  - split: gallery
    path: v20251201/aigen_studio/gallery.parquet
- config_name: real_streetlook
  data_files:
  - split: query
    path: v20251201/real_streetlook/query.parquet
  - split: gallery
    path: v20251201/real_streetlook/gallery.parquet
- config_name: real_studio_flat
  data_files:
  - split: query
    path: v20251201/real_studio_flat/query.parquet
  - split: gallery
    path: v20251201/real_studio_flat/gallery.parquet
- config_name: noise
  data_files:
  - split: gallery
    path: v20251201/noise/*.parquet
dataset_info:
  features:
  - name: image
    dtype: image
  - name: category
    dtype: string
  - name: main_attribute
    dtype: string
  - name: other_attributes
    dtype: string
  - name: bbox
    dtype: string
  - name: item_ID
    dtype: string
  - name: task
    dtype: string
  - name: difficulty
    dtype: string
---
# LookBench: A Live and Holistic Fashion Image Retrieval Benchmark
LookBench is a large-scale, open benchmark for fashion image retrieval, designed to evaluate modern vision and vision–language models under realistic, contamination-aware settings. The benchmark emphasizes live data, domain diversity, and holistic retrieval tasks spanning both single-item and outfit-level scenarios.
This dataset accompanies the paper *LookBench: A Live and Holistic Open Benchmark for Fashion Image Retrieval*.
## 🎯 Motivation
Existing fashion retrieval benchmarks often suffer from:
- Significant test–training contamination
- Over-reliance on clean studio product images
- Limited support for outfit-level and real-world queries
LookBench addresses these limitations by introducing live, recently collected images, street-style outfit queries, and AI-generated images, enabling more realistic and forward-looking evaluation.
## 📦 Dataset Overview
LookBench consists of multiple subsets reflecting different image sources and retrieval difficulties.
Each subset is constructed as a query–corpus retrieval benchmark, where query images are matched against a large gallery.
### Subsets (from Table 1 in the paper)
| Subset Name | Image Source | Retrieval Type | Difficulty | #Queries | #Corpus |
|---|---|---|---|---|---|
| RealStudioFlat | Real studio flat-lay product images | Single-item | Easy | 1,011 | 62,226 |
| AIGen-Studio | AI-generated studio images | Single-item | Medium | 192 | 59,254 |
| RealStreetLook | Real street outfit images | Multi-item | Hard | 1,000 | 61,553 |
| AIGen-StreetLook | AI-generated street outfit images | Multi-item | Hard | 160 | 58,846 |
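
For example, each subset can be loaded by the config name listed in the YAML header above, with separate `query` and `gallery` splits (a minimal sketch):

```python
from datasets import load_dataset

# Config names follow the YAML header: real_studio_flat, aigen_studio,
# real_streetlook, aigen_streetlook (plus a gallery-only "noise" config).
streetlook = load_dataset("srpone/look-bench", "real_streetlook")

queries = streetlook["query"]    # street outfit query images
gallery = streetlook["gallery"]  # candidate product images
print(len(queries), len(gallery))
```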
## 🧠 Tasks
LookBench supports two primary retrieval tasks:
### 1. Single-Item Retrieval
Given a query image containing a single fashion item, retrieve the exact matching product from the corpus.
### 2. Multi-Item (Outfit) Retrieval
Given a street-style image containing multiple fashion items, retrieve all corresponding products from the corpus.
These tasks reflect real-world fashion search and recommendation scenarios.
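
As an illustration, single-item retrieval can be scored by embedding query and gallery images with an off-the-shelf encoder and ranking by cosine similarity. The sketch below is illustrative rather than the paper's evaluation protocol: the CLIP checkpoint, the Recall@10 metric, and the assumption that a query's `item_ID` marks its ground-truth gallery product are all placeholder choices.

```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint: any image encoder would work here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(images, batch_size=32):
    """Encode a list of PIL images into L2-normalized CLIP image features."""
    feats = []
    for i in range(0, len(images), batch_size):
        inputs = processor(images=images[i:i + batch_size], return_tensors="pt")
        with torch.no_grad():
            f = model.get_image_features(**inputs)
        feats.append(f / f.norm(dim=-1, keepdim=True))
    return torch.cat(feats)

data = load_dataset("srpone/look-bench", "real_studio_flat")
q_feats = embed(data["query"]["image"])    # loads all query images into memory
g_feats = embed(data["gallery"]["image"])  # ~60K gallery images; shard or stream in practice

# Rank the gallery by cosine similarity and score Recall@10, assuming the
# query's item_ID identifies its ground-truth product in the gallery.
sims = q_feats @ g_feats.T
top10 = sims.topk(10, dim=-1).indices
q_ids, g_ids = data["query"]["item_ID"], data["gallery"]["item_ID"]
hits = sum(
    any(g_ids[j] == q_ids[i] for j in top10[i].tolist())
    for i in range(len(q_ids))
)
print(f"Recall@10: {hits / len(q_ids):.3f}")
```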
## 🧾 Data Format
Each dataset subset contains:
- **Query** split (`query`): images used as retrieval queries
- **Gallery** split (`gallery`): candidate images that form the retrieval corpus
Each sample may include the following fields (subset-dependent):
- `image`: Input fashion image
- `category`: Fashion category label
- `bbox`: Bounding box of the fashion item (stored as a string)
- `item_ID`: Unique product identifier
- `task`: Retrieval task type
- `difficulty`: Difficulty level
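
A minimal sketch for inspecting a single sample is shown below; since `bbox` is stored as a string, the JSON decoding here is an assumption and may need to be adapted to the dataset's actual serialization.

```python
import json
from datasets import load_dataset

# Load the query split of one subset and look at a single sample.
queries = load_dataset("srpone/look-bench", "real_streetlook", split="query")
sample = queries[0]

print(sample["category"], sample["task"], sample["difficulty"], sample["item_ID"])

# bbox has dtype string; assuming a JSON-encoded box such as "[x1, y1, x2, y2]".
try:
    bbox = json.loads(sample["bbox"])
except (TypeError, json.JSONDecodeError):
    bbox = sample["bbox"]  # fall back to the raw string
print(bbox)
```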
## 🚀 How to Use
### Load the Dataset
You can load LookBench using the 🤗 Datasets library:
```python
from datasets import load_dataset

# Each subset is a separate config, so pass the config name explicitly.
dataset = load_dataset("srpone/look-bench", "real_studio_flat")
print(dataset)
```
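
Since each subset is exposed as a separate configuration, you can also enumerate the available configs before loading; the sketch below uses only standard 🤗 Datasets helpers.

```python
from datasets import get_dataset_config_names, load_dataset

# List the benchmark's subsets (configs).
configs = get_dataset_config_names("srpone/look-bench")
print(configs)

# Load a single split of one subset; the "noise" config only provides "gallery".
noise_gallery = load_dataset("srpone/look-bench", "noise", split="gallery")
print(noise_gallery)
```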
## Citation
```bibtex
@article{gao2026lookbench,
  title   = {LookBench: A Live and Holistic Open Benchmark for Fashion Image Retrieval},
  author  = {Chao Gao and Siqiao Xue and Yimin Peng and Jiwen Fu and Tingyi Gu and Shanshan Li and Fan Zhou},
  journal = {arXiv preprint arXiv:2601.14706},
  year    = {2026},
  url     = {https://arxiv.org/abs/2601.14706},
}
```