---
license: apache-2.0
task_categories:
- image-retrieval
- vision-language-navigation
tags:
- composed-image-retrieval
- robust-learning
- blip-2
- pytorch
- aaai-2026
---
<a id="top"></a>
<div align="center">
<h1>(AAAI 2026) HABIT: Chrono-Synergia Robust Progressive Learning Framework for Composed Image Retrieval (Model Weights)</h1>
<div>
<a target="_blank" href="https://lee-zixu.github.io/">Zixu&#160;Li</a><sup>1</sup>,
<a target="_blank" href="https://faculty.sdu.edu.cn/huyupeng1/zh_CN/index.htm">Yupeng&#160;Hu</a><sup>1&#9993</sup>,
<a target="_blank" href="https://zivchen-ty.github.io/">Zhiwei&#160;Chen</a><sup>1</sup>,
Shiqi&#160;Zhang<sup>1</sup>,
<a target="_blank" href="https://windlikeo.github.io/HQL.github.io/">Qinlei&#160;Huang</a><sup>1</sup>,
<a target="_blank" href="https://zhihfu.github.io/">Zhiheng&#160;Fu</a><sup>1</sup>,
<a target="_blank" href="https://faculty.sdu.edu.cn/weiyinwei1/zh_CN/index.htm">Yinwei&#160;Wei</a><sup>1</sup>
</div>
<sup>1</sup>School of Software, Shandong University
<br />
<sup>&#9993;</sup>Corresponding author
<br/>
<p>
<a href="https://aaai.org/Conferences/AAAI-26/"><img src="https://img.shields.io/badge/AAAI-2026-blue.svg?style=flat-square" alt="AAAI 2026"></a>
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/37608"><img alt='Paper' src="https://img.shields.io/badge/Paper-AAAI.37608-green.svg"></a>
<a href="https://lee-zixu.github.io/HABIT.github.io/"><img alt='Project Page' src="https://img.shields.io/badge/Website-orange"></a>
<a href="https://github.com/iLearn-Lab/AAAI26-HABIT"><img alt='GitHub' src="https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github"></a>
</p>
</div>
This repository hosts the official pre-trained checkpoints for **HABIT**, a highly robust progressive learning framework designed to tackle the Noise Triplet Correspondence (NTC) problem in Composed Image Retrieval (CIR).

---
## 📌 Model Information
### 1. Model Name
**HABIT** (cHrono-synergiA roBust progressIve learning framework for composed image reTrieval) Checkpoints.
### 2. Task Type & Applicable Tasks
- **Task Type:** Composed Image Retrieval (CIR) / Vision-Language Retrieval.
- **Applicable Tasks:** Retrieving target images based on a reference image and a modification text. These weights are specifically robust against noisy training data (Noise Triplet Correspondence).
### 3. Project Introduction
In real-world scenarios, existing Composed Image Retrieval (CIR) methods often suffer from the Noise Triplet Correspondence (NTC) problem and therefore struggle to precisely estimate composed semantic discrepancies. **HABIT** addresses this through:
- 🧠 **Mutual Knowledge Estimation (MKE):** Quantifies sample cleanliness by computing the transition rate of mutual knowledge.
- ⏳ **Dual-consistency Progressive Learning (DPL):** A collaborative mechanism between historical and current models to simulate human habit formation (retaining good habits, calibrating bad ones).
Based on the BLIP-2 architecture, HABIT maintains State-of-the-Art (SOTA) retrieval performance under various noise ratios.
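For intuition only, the sketch below shows one generic way to realize the "historical vs. current model" consensus idea: a co-teaching-style selection over per-sample losses. This is our own simplified illustration, not HABIT's actual MKE/DPL implementation, and all function and variable names are invented for the example.

```python
import torch

def consensus_clean_mask(loss_current: torch.Tensor,
                         loss_historical: torch.Tensor,
                         keep_ratio: float = 0.5) -> torch.Tensor:
    """Mark a triplet as likely-clean only when BOTH the current model and a
    historical snapshot (e.g., an EMA copy) assign it a low per-sample loss.
    Generic co-teaching-style heuristic for illustration only -- NOT HABIT's
    exact Mutual Knowledge Estimation."""
    k = max(1, int(keep_ratio * loss_current.numel()))
    low_cur = torch.zeros_like(loss_current, dtype=torch.bool)
    low_his = torch.zeros_like(loss_historical, dtype=torch.bool)
    low_cur[torch.topk(loss_current, k, largest=False).indices] = True
    low_his[torch.topk(loss_historical, k, largest=False).indices] = True
    return low_cur & low_his

# Toy batch of 8 triplets; indices 2 and 5 behave like noisy correspondences.
cur = torch.tensor([0.2, 0.3, 2.5, 0.1, 0.4, 3.1, 0.25, 0.5])
his = torch.tensor([0.3, 0.2, 2.2, 0.2, 0.5, 2.8, 0.30, 0.4])
print(consensus_clean_mask(cur, his, keep_ratio=0.75))
# tensor([ True,  True, False,  True,  True, False,  True,  True])
```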
### 4. Training Data Source & Hosted Weights
The models were trained on the **FashionIQ** and **CIRR** datasets under varying simulated noise ratios ($N \in \{0.2, 0.5, 0.8\}$). This Hugging Face repository provides the corresponding `.pt` checkpoint files organized by dataset:
* 📂 `fiq/`
  * `HABIT-FIQ_N0.2.pt` (Trained on FashionIQ with 20% noise)
  * `HABIT-FIQ_N0.5.pt` (Trained on FashionIQ with 50% noise)
  * `HABIT-FIQ_N0.8.pt` (Trained on FashionIQ with 80% noise)
* 📂 `cirr/`
  * `HABIT-CIRR_N0.2.pt` (Trained on CIRR with 20% noise)
  * `HABIT-CIRR_N0.5.pt` (Trained on CIRR with 50% noise)
  * `HABIT-CIRR_N0.8.pt` (Trained on CIRR with 80% noise)

---
## 🚀 Usage & Basic Inference
These weights can be evaluated directly with the official [HABIT GitHub repository](https://github.com/iLearn-Lab/AAAI26-HABIT).
### Step 1: Prepare the Environment
Clone the GitHub repository and install dependencies:
```bash
git clone https://github.com/iLearn-Lab/AAAI26-HABIT
cd AAAI26-HABIT
conda create -n habit python=3.8 -y
conda activate habit
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install open-clip-torch==2.24.0 scikit-learn==1.3.2 transformers==4.25.0 salesforce-lavis==1.0.2 timm==0.9.16
```
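Optionally, run a quick sanity check that the pinned stack imports cleanly and CUDA is visible. The import names below are the standard ones shipped by these PyPI packages:

```python
import torch
import open_clip      # import name for the open-clip-torch package
import lavis          # import name for the salesforce-lavis package
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
```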
### Step 2: Download Model Weights
Download the specific `.pt` files you wish to evaluate from this Hugging Face repository. Place them into a `checkpoints/` directory within your cloned GitHub repo. For example, to evaluate the CIRR model trained with 50% noise:
```text
AAAI26-HABIT/
└── checkpoints/
    └── cirr_noise0.5/
        └── HABIT-CIRR_N0.5.pt   <-- (Rename to best_model.pt if required by your specific test script)
```
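Alternatively, the files can be fetched programmatically with `huggingface_hub` (`pip install huggingface_hub`). A minimal sketch, assuming the folder layout listed above; the `repo_id` string is a placeholder you must replace with this repository's actual id:

```python
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# NOTE: <this-repo-id> is a placeholder -- substitute this repository's actual id.
cached = hf_hub_download(
    repo_id="<this-repo-id>",
    filename="cirr/HABIT-CIRR_N0.5.pt",  # paths follow the folder layout listed above
)

# Copy from the local HF cache into the layout expected by the test scripts.
target = Path("checkpoints/cirr_noise0.5/HABIT-CIRR_N0.5.pt")
target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(cached, target)
print(f"Checkpoint ready at {target}")
```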
### Step 3: Run Testing / Evaluation
To generate prediction files on the CIRR dataset for the [CIRR Evaluation Server](https://cirr.cecs.anu.edu.au/), point the test script to the directory containing your downloaded checkpoint:
```bash
# Example for testing the CIRR 50% noise model
python src/cirr_test_submission.py checkpoints/cirr_noise0.5/
```
*(The script automatically outputs `.json` prediction files for submission to the online evaluation server.)*
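Before launching a full evaluation, it can be worth sanity-checking a downloaded checkpoint. A minimal sketch, assuming the `.pt` files are standard `torch.save` artifacts (the key names may differ, so inspect rather than hard-code them):

```python
import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load("checkpoints/cirr_noise0.5/HABIT-CIRR_N0.5.pt",
                  map_location="cpu")

# A .pt file is typically either a raw state_dict or a dict that wraps one
# (e.g., under a "model" key); print the top-level keys to see the layout.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))
```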

---
## ⚠️ Limitations & Notes
- **Hardware Requirements:** HABIT is built on the BLIP-2 architecture, so inference and further fine-tuning require GPUs with sufficient memory (e.g., an NVIDIA A40 48 GB or V100 32 GB is recommended).
- **Intended Use:** These weights are provided for academic research and to facilitate reproducibility of the AAAI 2026 paper.
---
## πŸ“β­οΈ Citation
If you find our work, code, or these model weights useful in your research, please consider leaving a **Star** ⭐️ on our GitHub repository and citing our paper:
```bibtex
@inproceedings{HABIT,
title={HABIT: Chrono-Synergia Robust Progressive Learning Framework for Composed Image Retrieval},
author={Li, Zixu and Hu, Yupeng and Chen, Zhiwei and Zhang, Shiqi and Huang, Qinlei and Fu, Zhiheng and Wei, Yinwei},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2026}
}
```