---
license: apache-2.0
task_categories:
- image-segmentation
- text-to-image
- image-to-text
tags:
- composed-image-retrieval
- fashioniq
- cirr
- shoes
- acm-mm-2025
---
<a id="top"></a>
<div align="center">
<h1>(ACM MM 2025) OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval</h1>
<div align="center">
<a target="_blank" href="https://zivchen-ty.github.io/">Zhiwei Chen</a><sup>1</sup>,
<a target="_blank" href="https://faculty.sdu.edu.cn/huyupeng1/zh_CN/index.htm">Yupeng Hu</a><sup>1✉</sup>,
<a target="_blank" href="https://lee-zixu.github.io/">Zixu Li</a><sup>1</sup>,
<a target="_blank" href="https://zhihfu.github.io/">Zhiheng Fu</a><sup>1</sup>,
<a target="_blank" href="https://xuemengsong.github.io">Xuemeng Song</a><sup>2</sup>,
<a target="_blank" href="https://liqiangnie.github.io/index.html">Liqiang Nie</a><sup>3</sup>
</div>
<sup>1</sup>School of Software, Shandong University
<br />
<sup>2</sup>Department of Data Science, City University of Hong Kong
<br />
<sup>3</sup>School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)
<br />
<sup>✉</sup> Corresponding author
<br/>
<p>
<a href="https://acmmm2025.org/"><img src="https://img.shields.io/badge/ACM_MM-2025-blue.svg?style=flat-square" alt="ACM MM 2025"></a>
<a href="https://arxiv.org/abs/2507.05631"><img alt='arXiv' src="https://img.shields.io/badge/arXiv-2507.05631-b31b1b.svg"></a>
<a href="https://github.com/iLearn-Lab/MM25-OFFSET"><img alt='GitHub' src="https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github"></a>
</p>
</div>
This dataset contains the official pre-computed dominant portion segmentation data used in the **OFFSET** framework for Composed Image Retrieval (CIR).

---
## 📌 Dataset Information
### 1. Dataset Source
This dataset is derived from the official visual data of three widely-used Composed Image Retrieval (CIR) datasets: **FashionIQ**, **Shoes**, and **CIRR**.
The segmentation data in this repository was machine-generated: the vision-language model BLIP-2 produces image captions that serve as a supervisory signal, and CLIPSeg then uses those captions to divide each image into dominant and noisy regions.
### 2. Dataset Purpose
This data serves as the foundational input for the **Dominant Portion Segmentation** module in the OFFSET architecture. It is designed to:
* Effectively mask noise information in visual data.
* Act as a guiding signal for the Dual Focus Mapping (Visual and Textual Focus Mapping branches).
* Address visual inhomogeneity and text-priority biases in Composed Image Retrieval tasks.
### 3. Field Descriptions & Structure
The dataset is provided as a single compressed archive: `OFFSET_dominant_portion_segmentation.zip`. Once extracted, it contains pre-computed segmentation masks corresponding to the reference and target images of the downstream datasets.
* **Image ID / Filename:** Corresponds directly to the original image names in FashionIQ (e.g., `B000ALGQSY.jpg`), Shoes (e.g., `img_womens_athletic_shoes_375.jpg`), and CIRR (e.g., `train-10108-0-img0.png`).
* **Segmentation Mask/Data:** The processed dominant portion arrays/tensors indicating the salient regions versus noisy background regions.
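As a minimal sketch of consuming these files (assuming the masks are stored as 2-D NumPy score arrays; the function names and threshold below are illustrative, not part of the OFFSET codebase, so adapt the loader if the archive stores masks in another format):

```python
import numpy as np

def load_dominant_mask(mask_path, threshold=0.5):
    """Binarize a pre-computed dominant-portion array.

    Hypothetical loader: assumes each mask is a 2-D array of per-pixel
    dominance scores saved with np.save. True marks the dominant
    (salient) region, False the noisy background.
    """
    scores = np.load(mask_path)
    return scores >= threshold

def suppress_background(image, mask):
    """Zero out the noisy background of an (H, W, C) image array."""
    return np.where(mask[..., None], image, 0)
```

Pairing each mask with its source image is then a matter of matching filenames, since the mask files mirror the original image names listed above.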
### 4. Data Split
The segmentation data aligns strictly with the official dataset splits of the corresponding benchmarks:
* **FashionIQ:** `train` / `val`
* **Shoes:** `train` / `test`
* **CIRR:** `train` / `dev` / `test1`
### 5. License & Commercial Use
This segmentation dataset is released under the **Apache 2.0 License**, which permits commercial use, modification, and distribution.
*Note:* While this specific segmentation data is Apache 2.0, users must still comply with the original licenses of the underlying FashionIQ, Shoes, and CIRR datasets when using them in conjunction.
### 6. Usage Restrictions & Ethical Considerations
* **Limitations:** This data is specifically optimized for the OFFSET model architecture and standard CIR tasks. Generalizing these specific masks to completely unrelated dense prediction tasks may yield sub-optimal results.
* **Privacy & Ethics:** The source datasets consist of publicly available e-commerce product images (FashionIQ, Shoes) and natural real-world images (NLVR2/CIRR). The pre-computed segmentation process does not introduce new personally identifiable information (PII) or ethical risks beyond those present in the original public benchmarks.
---
## 🚀 How to Use
This dataset is designed to be used directly with the official OFFSET GitHub repository.
**1. Download the Data:**
Download `OFFSET_dominant_portion_segmentation.zip` from the Files section and extract it.
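The extract step might look like the following sketch (the destination directory here is an assumption; match it to the layout expected by the OFFSET repository):

```shell
# Hypothetical destination; adjust to the repo's expected layout.
ZIP=OFFSET_dominant_portion_segmentation.zip
DEST=./data/segmentation
mkdir -p "$DEST"
# Unpack only if the archive has already been downloaded.
if [ -f "$ZIP" ]; then
    unzip -o "$ZIP" -d "$DEST"
fi
```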
**2. Organize the Directory:**
Place the extracted segmentation data into your local environment alongside the original datasets, following the directory requirements specified in the [OFFSET GitHub Repository Data Preparation guide](https://github.com/iLearn-Lab/MM25-OFFSET#--data-preparation).
**3. Run Training/Evaluation:**
Point the training script to the extracted data paths:
```bash
# Pick exactly one value for --dataset: shoes, fashioniq, or cirr.
python3 train.py \
    --model_dir ./checkpoints/ \
    --dataset {shoes,fashioniq,cirr} \
    --cirr_path "path/to/CIRR" \
    --fashioniq_path "path/to/FashionIQ" \
    --shoes_path "path/to/Shoes"
```
---
## 📝⭐️ Citation
If you find this dataset or the OFFSET framework useful in your research, please consider leaving a **star** ⭐️ on our GitHub repository and **citing** 📝 our ACM MM 2025 paper:
```bibtex
@inproceedings{OFFSET,
  title     = {OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval},
  author    = {Chen, Zhiwei and Hu, Yupeng and Li, Zixu and Fu, Zhiheng and Song, Xuemeng and Nie, Liqiang},
  booktitle = {Proceedings of the ACM International Conference on Multimedia},
  pages     = {6113--6122},
  year      = {2025}
}
```