---
license: cc-by-nc-sa-4.0
task_categories:
- image-text-to-image
language:
- en
pretty_name: i-CIR
size_categories:
- 100K<n<1M
---
## i-CIR Dataset (Hugging Face)
[**website**](https://vrg.fel.cvut.cz/icir/) | [**arxiv**](https://arxiv.org/pdf/2510.25387) | [**github**](https://github.com/billpsomas/icir)
### About
**i-CIR (Instance-Level Composed Image Retrieval)** is a curated benchmark for **composed image retrieval** where each *instance* corresponds to a specific, visually indistinguishable object (e.g., a particular landmark). Each query combines an **image of the instance** with a **text modification**, and retrieval is evaluated against a database containing **rich hard negatives** (visual / textual / compositional).
<p align="center">
<img width="75%" alt="i-CIR illustration" src="https://github.com/billpsomas/icir/raw/main/.github/dataset.png">
</p>
**Key stats**
- **Instances:** 202
- **Total images:** ~750K
- **Composed queries:** 1,883
- **Avg database size / query:** ~3.7K images
- Includes challenging hard negatives per instance.
---
### Dataset Structure
On Hugging Face, i-CIR is hosted as **WebDataset shards** for scalable/robust downloads and streaming.
```text
icir/
├── webdataset/
│   ├── query/
│   │   ├── query-000000.tar
│   │   ├── query-000001.tar
│   │   └── ...
│   └── database/
│       ├── database-000000.tar
│       ├── database-000001.tar
│       └── ...
├── annotations/
│   ├── query_files.csv
│   └── database_files.csv
├── VERSION.txt
└── LICENSE
```
---
### Annotations format
- `query_files.csv`: each row is `(image_path, text_query, instance_id)`
- `database_files.csv`: same columns; the text field may be unused for database features, depending on the pipeline
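As a minimal sketch of reading the annotations, the helper below groups composed queries by instance using only the standard library. The function name `load_queries` and the assumption of a header row with exactly the column names above are ours; check the actual CSV files before relying on them.

```python
import csv
from collections import defaultdict

def load_queries(csv_path):
    """Group composed queries by instance_id.

    Assumes a header row with columns image_path, text_query,
    instance_id (as described above); adjust if the files differ.
    """
    by_instance = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_instance[row["instance_id"]].append(
                (row["image_path"], row["text_query"])
            )
    return dict(by_instance)
```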
Inside each WebDataset sample, we store:
- an image (`.jpg`/`.png`/...)
- a JSON payload with: `img_path`, `text`, `instance`
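For quick inspection of a single shard, samples can be reassembled with the standard library alone, following the WebDataset convention that tar members sharing a basename belong to one sample. `iter_samples` is a hypothetical helper of ours, not part of the release; pipelines at scale would typically stream shards with the `webdataset` package instead.

```python
import json
import tarfile

def iter_samples(tar_path):
    """Yield (key, image_bytes, metadata) triples from one shard.

    WebDataset convention: members sharing a basename form one sample,
    here an image file plus a .json payload (img_path, text, instance).
    Loads the whole shard into memory -- fine for inspection, not for
    streaming very large shards.
    """
    samples = {}
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            key, _, ext = member.name.rpartition(".")
            samples.setdefault(key, {})[ext] = tar.extractfile(member).read()
    for key, parts in samples.items():
        meta = json.loads(parts.pop("json", b"{}"))
        for _ext, image_bytes in parts.items():  # the .jpg/.png payload
            yield key, image_bytes, meta
```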
---
### Download
One-liner download (recommended):
```bash
pip install -U huggingface_hub
huggingface-cli download billpsomas/icir --repo-type dataset --local-dir ./data/icir --revision main
```
Python (equivalent):
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="billpsomas/icir", repo_type="dataset", local_dir="./data/icir", revision="main")
```
---
### Using the dataset (feature extraction)
You can extract features directly from the WebDataset shards (no image folder extraction needed):
```bash
python3 create_features.py \
--dataset icir \
--icir_source wds \
--icir_wds_root ./data/icir \
--backbone clip \
--batch 512 \
--gpu 0
```
---
### License
The dataset is released under **CC BY-NC-SA 4.0**. See the `LICENSE` file for details.
---
### Citation
If you use i-CIR in your research, please cite:
```
@inproceedings{
psomas2025instancelevel,
title={Instance-Level Composed Image Retrieval},
author={Bill Psomas and George Retsinas and Nikos Efthymiadis and Panagiotis Filntisis and Yannis Avrithis and Petros Maragos and Ondrej Chum and Giorgos Tolias},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025}
}
```