---
license: apache-2.0
task_categories:
- image-segmentation
- text-to-image
- image-to-text
tags:
- composed-image-retrieval
- fashioniq
- cirr
- shoes
- acm-mm-2025
---

<a id="top"></a>
<div align="center">
<h1>(ACM MM 2025) OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval</h1>
<div align="center">
<a target="_blank" href="https://zivchen-ty.github.io/">Zhiwei&#160;Chen</a><sup>1</sup>,
<a target="_blank" href="https://faculty.sdu.edu.cn/huyupeng1/zh_CN/index.htm">Yupeng&#160;Hu</a><sup>1&#9993;</sup>,
<a target="_blank" href="https://lee-zixu.github.io/">Zixu&#160;Li</a><sup>1</sup>,
<a target="_blank" href="https://zhihfu.github.io/">Zhiheng&#160;Fu</a><sup>1</sup>,
<a target="_blank" href="https://xuemengsong.github.io">Xuemeng&#160;Song</a><sup>2</sup>,
<a target="_blank" href="https://liqiangnie.github.io/index.html">Liqiang&#160;Nie</a><sup>3</sup>
</div>
<sup>1</sup>School of Software, Shandong University
<br />
<sup>2</sup>Department of Data Science, City University of Hong Kong
<br />
<sup>3</sup>School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)
<br />
<sup>&#9993;</sup>Corresponding author
<br/>
<p>
<a href="https://acmmm2025.org/"><img src="https://img.shields.io/badge/ACM_MM-2025-blue.svg?style=flat-square" alt="ACM MM 2025"></a>
<a href="https://arxiv.org/abs/2507.05631"><img alt='arXiv' src="https://img.shields.io/badge/arXiv-2507.05631-b31b1b.svg"></a>
<a href="https://github.com/iLearn-Lab/MM25-OFFSET"><img alt='GitHub' src="https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github"></a>
</p>
</div>

This dataset contains the official pre-computed dominant portion segmentation data used in the **OFFSET** framework for Composed Image Retrieval (CIR).

---

## 📌 Dataset Information

### 1. Dataset Source
This dataset is derived from the official visual data of three widely used Composed Image Retrieval (CIR) benchmarks: **FashionIQ**, **Shoes**, and **CIRR**.
The segmentation data in this repository was machine-generated: a vision-language model (BLIP-2) produces image captions that serve as a supervisory signal, and CLIPSeg uses these captions to divide each image into dominant and noisy regions.

### 2. Dataset Purpose
This data serves as the foundational input for the **Dominant Portion Segmentation** module in the OFFSET architecture. It is designed to:
* Effectively mask noisy information in visual data.
* Act as a guiding signal for Dual Focus Mapping (the Visual and Textual Focus Mapping branches).
* Address visual inhomogeneity and text-priority biases in Composed Image Retrieval tasks.

### 3. Field Descriptions & Structure
The dataset is provided as a single compressed archive: `OFFSET_dominant_portion_segmentation.zip`. Once extracted, it contains pre-computed segmentation masks corresponding to the reference and target images of the downstream datasets.

* **Image ID / Filename:** Corresponds directly to the original image names in FashionIQ (e.g., `B000ALGQSY.jpg`), Shoes (e.g., `img_womens_athletic_shoes_375.jpg`), and CIRR (e.g., `train-10108-0-img0.png`).
* **Segmentation Mask / Data:** The processed dominant-portion arrays/tensors indicating salient regions versus noisy background regions.

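As a rough illustration of how such a mask might be consumed, here is a minimal sketch on synthetic data. The exact on-disk format of the extracted masks, and the helper name `apply_dominant_mask`, are assumptions for illustration, not part of this dataset's specification:

```python
import numpy as np

def apply_dominant_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep the dominant portion of an image and zero out noisy regions.

    image: H x W x C float array; mask: H x W array with values in [0, 1],
    where 1 marks the dominant (salient) region and 0 the noisy background.
    """
    if mask.shape != image.shape[:2]:
        raise ValueError("mask and image spatial sizes must match")
    # Broadcast the mask over the channel axis and suppress noisy pixels.
    return image * mask[..., None]

# Demo on synthetic data (stand-in for a real image/mask pair from the archive):
rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0            # pretend the centre is the dominant portion
masked = apply_dominant_mask(image, mask)
```

In practice you would load the mask file that shares its name with the original FashionIQ/Shoes/CIRR image in place of the synthetic `mask` above.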
### 4. Data Split
The segmentation data aligns strictly with the official dataset splits of the corresponding benchmarks:
* **FashionIQ:** `train` / `val`
* **Shoes:** `train` / `test`
* **CIRR:** `train` / `dev` / `test1`

### 5. License & Commercial Use
This segmentation dataset is released under the **Apache 2.0 License**, which permits commercial use, modification, and distribution.
*Note:* While this segmentation data itself is Apache 2.0, users must still comply with the original licenses of the underlying FashionIQ, Shoes, and CIRR datasets when using them in conjunction.

### 6. Usage Restrictions & Ethical Considerations
* **Limitations:** This data is specifically optimized for the OFFSET model architecture and standard CIR tasks. Generalizing these masks to unrelated dense-prediction tasks may yield sub-optimal results.
* **Privacy & Ethics:** The source datasets consist of publicly available e-commerce product images (FashionIQ, Shoes) and natural real-world images (NLVR2/CIRR). The pre-computed segmentation process does not introduce new personally identifiable information (PII) or ethical risks beyond those present in the original public benchmarks.

---

## 🚀 How to Use

This dataset is designed to be used directly with the official OFFSET GitHub repository.

**1. Download the Data:**
Download `OFFSET_dominant_portion_segmentation.zip` from the Files section and extract it.

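For example, using the `huggingface-cli` tool that ships with `huggingface_hub` (a sketch — substitute the placeholder repo id with this dataset's actual path on the Hub):

```shell
# Placeholder repo id — replace <namespace>/<dataset-name> with this
# dataset's actual Hub path before running.
huggingface-cli download <namespace>/<dataset-name> \
    OFFSET_dominant_portion_segmentation.zip \
    --repo-type dataset --local-dir .

# Extract the archive next to your datasets.
unzip OFFSET_dominant_portion_segmentation.zip -d ./OFFSET_dominant_portion_segmentation
```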
**2. Organize the Directory:**
Place the extracted segmentation data into your local environment alongside the original datasets, following the directory requirements specified in the [OFFSET GitHub Repository Data Preparation guide](https://github.com/iLearn-Lab/MM25-OFFSET#--data-preparation).

**3. Run Training/Evaluation:**
Point the training script to the extracted data paths:
```bash
python3 train.py \
    --model_dir ./checkpoints/ \
    --dataset {shoes, fashioniq, cirr} \
    --cirr_path "path/to/CIRR" \
    --fashioniq_path "path/to/FashionIQ" \
    --shoes_path "path/to/Shoes"
```

---

## 📝⭐️ Citation

If you find this dataset or the OFFSET framework useful in your research, please consider leaving a **Star** ⭐️ on our GitHub repository and **citing** 📝 our ACM MM 2025 paper:

```bibtex
@inproceedings{OFFSET,
  title     = {OFFSET: Segmentation-based Focus Shift Revision for Composed Image Retrieval},
  author    = {Chen, Zhiwei and Hu, Yupeng and Li, Zixu and Fu, Zhiheng and Song, Xuemeng and Nie, Liqiang},
  booktitle = {Proceedings of the ACM International Conference on Multimedia},
  pages     = {6113--6122},
  year      = {2025}
}
```