---
license: apache-2.0
task_categories:
- video-retrieval
- image-retrieval
tags:
- composed-video-retrieval
- composed-image-retrieval
- vision-language
- pytorch
- aaai-2026
---

<a id="top"></a>
<div align="center">
  <h1>๐ŸŽฌ (AAAI 2026) ReTrack: Evidence-Driven Dual-Stream Directional Anchor Calibration Network for Composed Video Retrieval (Model Weights)</h1>
    <div>
  <a target="_blank" href="https://lee-zixu.github.io/">Zixu&#160;Li</a><sup>1</sup>,
  <a target="_blank" href="https://faculty.sdu.edu.cn/huyupeng1/zh_CN/index.htm">Yupeng&#160;Hu</a><sup>1&#9993;</sup>,
  <a target="_blank" href="https://zivchen-ty.github.io/">Zhiwei&#160;Chen</a><sup>1</sup>,
  <a target="_blank" href="https://windlikeo.github.io/HQL.github.io/">Qinlei&#160;Huang</a><sup>1</sup>,
  Guozhi&#160;Qiu<sup>1</sup>,
  <a target="_blank" href="https://zhihfu.github.io/">Zhiheng&#160;Fu</a><sup>1</sup>,
  <a target="_blank" href="https://mengliu1991.github.io">Meng&#160;Liu</a><sup>2</sup>
  </div>
  <sup>1</sup>School of Software, Shandong University<br>
  <sup>2</sup>School of Computer Science and Technology, Shandong Jianzhu University
  <br />
  <sup>&#9993;</sup>&#160;Corresponding author
  <br/>
  <p>
      <a href="https://aaai.org/Conferences/AAAI-26/"><img src="https://img.shields.io/badge/AAAI-2026-blue.svg?style=flat-square" alt="AAAI 2026"></a>
      <a href="https://ojs.aaai.org/index.php/AAAI/article/view/39507"><img alt='Paper' src="https://img.shields.io/badge/Paper-AAAI.39507-green.svg?style=flat-square"></a>
      <a href="https://lee-zixu.github.io/ReTrack.github.io/"><img alt='Project Page' src="https://img.shields.io/badge/Website-orange?style=flat-square"></a>
      <a href="https://github.com/Lee-zixu/ReTrack"><img alt='GitHub' src="https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github"></a>
  </p>
</div>

This repository hosts the official pre-trained model weights for **ReTrack**, an evidence-driven framework designed to calibrate directional bias in composed features for both Composed Video Retrieval (CVR) and Composed Image Retrieval (CIR) tasks.

---


## ๐Ÿ“Œ Model Information

### 1. Model Name
**ReTrack** (Evidence-Driven Dual-Stream Directional Anchor Calibration Network) Checkpoints.

### 2. Task Type & Applicable Tasks
- **Task Type:** Composed Video Retrieval (CVR) and Composed Image Retrieval (CIR).
- **Applicable Tasks:** Retrieving a target video or image from a reference visual input combined with a modification text. The model is designed to reduce the uncertainty that arises when retrieval candidates are highly similar to one another.

### 3. Project Introduction
**ReTrack** is an advanced open-source PyTorch framework built on top of BLIP-2 (via Salesforce LAVIS) that improves multi-modal query understanding. It features:
- ๐ŸŽฏ **Dual-Stream Directional Anchor Calibration:** Explicitly identifies and calibrates visual and textual semantic contributions to resolve directional bias.
- โš–๏ธ **Reliable Evidence-Driven Alignment:** Leverages Dempster-Shafer Theory to evaluate similarity reliability, minimizing ambiguity among candidates.

### 4. Training Data Source & Hosted Weights
The framework supports training and evaluation on the **WebVid-CoVR** dataset for video retrieval and on the **FashionIQ** and **CIRR** datasets for image retrieval.

This Hugging Face repository provides the following pre-trained checkpoint:
* ๐Ÿ“„ `ReTrack-WebVid-Frame1.ckpt`: The checkpoint trained on the WebVid-CoVR dataset (using a 1-frame configuration setting).

---

## ๐Ÿš€ Usage & Basic Inference

These weights are intended to be evaluated with the modular, Hydra-configured [ReTrack GitHub repository](https://github.com/Lee-zixu/ReTrack).

### Step 1: Prepare the Environment
We recommend using Anaconda. Clone the repository and install dependencies:
```bash
git clone https://github.com/Lee-zixu/ReTrack.git
cd ReTrack
conda create -n retrack python=3.8 -y
conda activate retrack
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```

### Step 2: Download Model Weights & Prepare Data
1. Download `ReTrack-WebVid-Frame1.ckpt` from this Hugging Face repository. 
2. Place the checkpoint in the appropriate directory as expected by your Hydra configuration (e.g., within a `checkpoints/` folder).
3. Ensure the WebVid-CoVR dataset is placed under your defined `datasets_dir` in `configs/machine/default.yaml`.
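
For example, the checkpoint can also be fetched programmatically with `huggingface_hub`; the `repo_id` below is a placeholder you must replace with this repository's actual ID:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id: substitute the ID of this Hugging Face repository.
ckpt_path = hf_hub_download(
    repo_id="<this-repo-id>",
    filename="ReTrack-WebVid-Frame1.ckpt",
    local_dir="checkpoints",  # matches the suggested checkpoints/ folder
)
print(ckpt_path)
```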

### Step 3: Run Evaluation
To evaluate the trained CVR model, use `test.py` and specify the path to your downloaded checkpoint via Hydra CLI overrides:
```bash
python test.py \
    model.ckpt_path=/path/to/your/ReTrack-WebVid-Frame1.ckpt \
    +test=webvid-covr
```
*(Refer to the `configs/` directory in the code repository for advanced hyperparameter and path adjustments.)*

---

## โš ๏ธ Limitations & Notes

- **Configuration:** ReTrack is entirely managed by **Hydra** and **Lightning Fabric**. Make sure you are familiar with overriding configurations via the CLI or modifying the YAML files in the `configs/` directory.
- **Environment:** The project was developed and evaluated with Python 3.8 and PyTorch 2.1.0; substantially different versions may yield unexpected behavior.
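
For reference, the machine config mentioned above might have roughly the following shape; the key name `datasets_dir` comes from Step 2, but everything else is an assumption, so consult the actual file in the repository:

```yaml
# configs/machine/default.yaml -- illustrative sketch, not the real file.
datasets_dir: /path/to/datasets   # root under which WebVid-CoVR is placed
```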

---

## ๐Ÿ“โญ๏ธ Citation

If you find our framework, code, or these weights useful in your research, please consider leaving a **Star** โญ๏ธ on our GitHub repository and citing our AAAI 2026 paper:

```bibtex
@inproceedings{ReTrack,
  title={ReTrack: Evidence-Driven Dual-Stream Directional Anchor Calibration Network for Composed Video Retrieval},
  author={Li, Zixu and Hu, Yupeng and Chen, Zhiwei and Huang, Qinlei and Qiu, Guozhi and Fu, Zhiheng and Liu, Meng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```