---
pipeline_tag: image-text-to-text
library_name: transformers
base_model: Qwen/Qwen3-VL-4B-Instruct
tags:
- vision-language-tracking
- multimodal
- mllm
- video
---

# VPTracker: Global Vision-Language Tracking via Visual Prompt and MLLM

This repository contains the weights for **VPTracker**, the first global tracking framework based on Multimodal Large Language Models (MLLMs). 

VPTracker exploits the powerful semantic reasoning of MLLMs to locate targets across the entire image space. To address distractions from visually or semantically similar objects during global search, it introduces a location-aware visual prompting mechanism that incorporates spatial priors.

- **Paper:** [VPTracker: Global Vision-Language Tracking via Visual Prompt and MLLM](https://huggingface.co/papers/2512.22799)
- **Repository:** [GitHub - jcwang0602/VPTracker](https://github.com/jcwang0602/VPTracker)

[![arXiv](https://img.shields.io/badge/Arxiv-2512.22799-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2512.22799)
[![Python](https://img.shields.io/badge/Python-3.9-blue.svg)](https://www.python.org/downloads/)
[![PyTorch](https://img.shields.io/badge/PyTorch-2.5.1-red.svg)](https://pytorch.org/)
[![Transformers](https://img.shields.io/badge/Transformers-4.37.2-green.svg)](https://huggingface.co/docs/transformers/)

<!-- <img src="assets/VPTracker.jpg" width="800"> -->

## 🚀 Quick Start

### Installation

```bash
conda create -n gltrack python==3.10
conda activate gltrack

cd ms-swift
conda install -c conda-forge pyarrow sentencepiece
pip install -e .
pip install "sglang[all]" -U
pip install "vllm>=0.5.1" "transformers<4.55" "trl<0.21" -U
pip install "lmdeploy>=0.5" -U
pip install autoawq -U --no-deps
pip install auto_gptq optimum bitsandbytes "gradio<5.33" -U
pip install git+https://github.com/modelscope/ms-swift.git
pip install timm -U
pip install "deepspeed" -U
pip install flash-attn==2.7.4.post1 --no-build-isolation

conda install av -c conda-forge
pip install qwen_vl_utils qwen_omni_utils decord librosa icecream soundfile -U
pip install liger_kernel nvitop pre-commit math_verify py-spy -U
```
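### Visual Prompt Sketch

The location-aware visual prompting mechanism can be pictured with a small, self-contained sketch. This is an illustration of the general idea, not the repository's actual implementation: the target's last known box is overlaid on the current frame before it is passed to the MLLM, supplying a spatial prior that helps disambiguate visually or semantically similar distractors during global search. The function name, box format, and drawing style below are assumptions for illustration (requires Pillow).

```python
# Illustrative sketch only, NOT VPTracker's actual code: overlay a spatial
# prior (the target's last known box) on the frame before MLLM inference.
from PIL import Image, ImageDraw


def add_location_prompt(frame, prior_box, color=(255, 0, 0), width=4):
    """Return a copy of `frame` with `prior_box` = (x1, y1, x2, y2) drawn on it.

    The original frame is left untouched; the prompted copy is what would be
    fed to the MLLM together with the language query.
    """
    prompted = frame.copy()
    ImageDraw.Draw(prompted).rectangle(prior_box, outline=color, width=width)
    return prompted


# Toy 640x360 frame and a hypothetical previous-frame target box.
frame = Image.new("RGB", (640, 360), (30, 30, 30))
prompted = add_location_prompt(frame, (100, 80, 220, 200))
```

In an actual tracking loop, the prior box would come from the previous frame's prediction, and the prompted image would replace the raw frame in the model's multimodal input.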

<!-- ## 👀 Visualization
<img src="assets/Results.jpg" width="800"> -->

## 🙏 Acknowledgments
This code is developed on top of [ms-swift](https://github.com/modelscope/ms-swift).

## ✉️ Contact
Email: jcwang@stu.ecnu.edu.cn. Any kind of discussion is welcome!

---

## 📖 Citation
If our work is useful for your research, please consider citing:
```bibtex
@misc{wang2025vptrackerglobalvisionlanguagetracking,
      title={VPTracker: Global Vision-Language Tracking via Visual Prompt and MLLM}, 
      author={Jingchao Wang and Kaiwen Zhou and Zhijian Wu and Kunhua Ji and Dingjiang Huang and Yefeng Zheng},
      year={2025},
      eprint={2512.22799},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.22799}, 
}
```