(ACM MM 2025) OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval
¹School of Software, Shandong University  ²Department of Data Science, City University of Hong Kong  ³School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)
✉ Corresponding author
This dataset contains the official pre-computed dominant portion segmentation data used in the OFFSET framework for Composed Image Retrieval (CIR).
📌 Dataset Information
1. Dataset Source
This dataset is derived from the official visual data of three widely used Composed Image Retrieval (CIR) datasets: FashionIQ, Shoes, and CIRR. The segmentation data in this repository was machine-generated: a vision-language model (BLIP-2) produced image captions that served as a supervisory signal, which CLIPSeg then used to divide each image into dominant and noisy regions.
2. Dataset Purpose
This data serves as the foundational input for the Dominant Portion Segmentation module in the OFFSET architecture. It is designed to:
- Effectively mask noise information in visual data.
- Act as a guiding signal for the Dual Focus Mapping (Visual and Textual Focus Mapping branches).
- Address visual inhomogeneity and text-priority biases in Composed Image Retrieval tasks.
3. Field Descriptions & Structure
The dataset is provided as a single compressed archive: OFFSET_dominant_portion_segmentation.zip. Once extracted, it contains pre-computed segmentation masks corresponding to the reference and target images of the downstream datasets.
- Image ID / Filename: Corresponds directly to the original image names in FashionIQ (e.g., `B000ALGQSY.jpg`), Shoes (e.g., `img_womens_athletic_shoes_375.jpg`), and CIRR (e.g., `train-10108-0-img0.png`).
- Segmentation Mask/Data: The processed dominant-portion arrays/tensors indicating the salient regions versus noisy background regions.
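As an illustration, a dominant-portion mask can be applied to its source image by elementwise multiplication, zeroing out the noisy background. The snippet below is a minimal sketch assuming the masks load as binary NumPy arrays; the helper name `apply_dominant_mask` and the synthetic data are ours for illustration, not part of the release:

```python
import numpy as np

def apply_dominant_mask(image, mask):
    """Keep the dominant portion of an image; zero out noisy regions.

    image: (H, W, C) uint8 array; mask: (H, W) binary array.
    """
    if mask.ndim == 2:
        mask = mask[..., None]  # broadcast the mask over the channel axis
    return image * mask

# Synthetic stand-ins for a real image and its pre-computed mask
image = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # dominant (salient) region

masked = apply_dominant_mask(image, mask)
```

The same broadcasting works for masks stored as floats in [0, 1], which soft-blend rather than hard-mask the background.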
4. Data Split
The segmentation data aligns strictly with the official dataset splits of the corresponding benchmarks:
- FashionIQ: `train` / `val`
- Shoes: `train` / `test`
- CIRR: `train` / `dev` / `test1`
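Because the masks mirror the original image filenames, a split can be sanity-checked after extraction by comparing filename stems between the mask and image directories. A minimal sketch, where the helper `missing_masks` and the example directory paths are assumptions rather than part of the release:

```python
import os

def missing_masks(mask_dir, image_dir):
    """Return image filename stems that have no matching segmentation mask."""
    mask_stems = {os.path.splitext(f)[0] for f in os.listdir(mask_dir)}
    image_stems = {os.path.splitext(f)[0] for f in os.listdir(image_dir)}
    return sorted(image_stems - mask_stems)

# e.g. missing_masks("segmentation/fashioniq/train", "FashionIQ/images")
# should return [] if every training image has a pre-computed mask
```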
5. License & Commercial Use
This segmentation dataset is released under the Apache 2.0 License, which permits commercial use, modification, and distribution. Note: While this specific segmentation data is Apache 2.0, users must still comply with the original licenses of the underlying FashionIQ, Shoes, and CIRR datasets when using them in conjunction.
6. Usage Restrictions & Ethical Considerations
- Limitations: This data is specifically optimized for the OFFSET model architecture and standard CIR tasks. Generalizing these specific masks to completely unrelated dense prediction tasks may yield sub-optimal results.
- Privacy & Ethics: The source datasets consist of publicly available e-commerce product images (FashionIQ, Shoes) and natural real-world images (NLVR2/CIRR). The pre-computed segmentation process does not introduce new personally identifiable information (PII) or ethical risks beyond those present in the original public benchmarks.
🚀 How to Use
This dataset is designed to be used directly with the official OFFSET GitHub repository.
1. Download the Data:
Download OFFSET_dominant_portion_segmentation.zip from the Files section and extract it.
2. Organize the Directory: Place the extracted segmentation data into your local environment alongside the original datasets, following the directory requirements specified in the OFFSET GitHub Repository Data Preparation guide.
3. Run Training/Evaluation: Point the training script to the extracted data paths:
```shell
python3 train.py \
  --model_dir ./checkpoints/ \
  --dataset {shoes, fashioniq, cirr} \
  --cirr_path "path/to/CIRR" \
  --fashioniq_path "path/to/FashionIQ" \
  --shoes_path "path/to/Shoes"
```
📝⭐️ Citation
If you find this dataset or the OFFSET framework useful in your research, please consider leaving a Star⭐️ on our GitHub repository and Citing📝 our ACM MM 2025 paper:
```bibtex
@inproceedings{OFFSET,
  title     = {OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval},
  author    = {Chen, Zhiwei and Hu, Yupeng and Li, Zixu and Fu, Zhiheng and Song, Xuemeng and Nie, Liqiang},
  booktitle = {Proceedings of the ACM International Conference on Multimedia},
  pages     = {6113--6122},
  year      = {2025}
}
```