
FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

Haoyang Li1,2, Liang Wang1,2, Siyu Zhou1, Jiacheng Sun2, Jing Jiang1, Chao Wang2, Guodong Long1 and Yan Peng2.
1Australian Artificial Intelligence Institute   2Shanghai University

arXiv link: https://arxiv.org/abs/2603.08708


πŸ“œ Abstract

CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to efficiently adapt to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during the tuning process. In this paper, we attribute the failure modes of prompt tuning predictions to shifts in foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate the shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance the foreground view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to mitigate generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets show the effectiveness and compatibility of FVG-PT.
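To make the gating idea concrete, here is a minimal sketch of how an adaptive foreground trust score could blend predictions from the full image and its foreground view. All names are illustrative assumptions; the paper's Foreground Reliability Gate is a learnable module conditioned on the inputs, not a single scalar parameter.

```python
import math

def sigmoid(x):
    """Logistic function mapping a raw gate parameter to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_blend(full_logits, fg_logits, gate_param):
    """Blend class logits from the full image and its foreground view.

    r = sigmoid(gate_param) plays the role of the adaptive foreground
    trust score: r -> 1 trusts the foreground view, r -> 0 falls back
    to the full image. Illustrative only, not the paper's exact gate.
    """
    r = sigmoid(gate_param)
    return [r * f + (1.0 - r) * g for f, g in zip(fg_logits, full_logits)]
```

With a neutral gate (parameter 0, so r = 0.5) the two views contribute equally; a strongly positive gate parameter shifts nearly all weight onto the foreground view.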

πŸ” Framework

Figure 1. Overview of the proposed FVG-PT. As a plug-and-play method, in the (a) tuning stage, FVG-PT obtains the foreground view x^fg of image x and fine-tunes the learnable (b) Foreground Reliability Gate to learn an adaptive foreground trust score r. Meanwhile, the Foreground Distillation Compensation module inserts adapters after the image-text alignment of the frozen backbone model to guide visual attention toward the foreground. In parallel, the independent Prior Calibration module fine-tunes the (c) Backbone Reliability Gate on a decoupled branch (indicated by dashed lines) to balance the tuned model against the CLIP prior.
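The guidance signal behind foreground distillation can be sketched as pulling the visual attention map toward the foreground mask. This is a hypothetical simplification: the paper's compensation module works through adapters inserted after image-text alignment, not a direct KL term, and the function names below are my own.

```python
import math

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete distributions over image patches."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def foreground_guidance_loss(attn, fg_mask):
    """Illustrative guidance term: penalize a patch-attention
    distribution `attn` for deviating from the normalized binary
    foreground mask `fg_mask` (e.g., from SEEM)."""
    total = float(sum(fg_mask))
    target = [m / total for m in fg_mask]
    return kl_divergence(target, attn)
```

The loss is zero when attention already concentrates on the masked foreground patches and grows as attention leaks onto the background, which is exactly the attention shift the method aims to correct.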

πŸ’‘ Our previous work on prompt tuning

  • [CVPR 25] DPC: Dual-Prompt Collaboration for Tuning Vision-Language Models
    Haoyang Li, Liang Wang, Chao Wang, Jing Jiang, Yan Peng and Guodong Long.
    [Paper] [Project Page] [Poster]

  • [ICME 25] MAO: Efficient Model-Agnostic Optimization of Prompt Tuning for Vision-Language Models
    Haoyang Li, Siyu Zhou, Liang Wang and Guodong Long.
    [Paper] [Project Page] [Poster]

  • [arxiv] Raw Data Matters: Enhancing Prompt Tuning by Internal Augmentation on Vision-Language Models
    Haoyang Li, Liang Wang, Chao Wang, Siyu Zhou, Jing Jiang, Yan Peng and Guodong Long.
    [Paper] [Project Page]


βš™οΈ Running

  1. Create the environment and install the Dassl.pytorch library, following the instructions in INSTALL.md.
  2. Prepare the datasets. We release 11 prompt-tuning datasets with foreground views on [πŸ€—HuggingFace]; they can be used directly.
    The foreground views are generated by SEEM and stored in the ./mask folder of each dataset.
    Details of data preparation can be found in DATASETS.md.
  3. Run the fine-tuning script on the backbone models first (e.g., CoOp):
    python run_tuning.py --dataset caltech101 --trainer CoOp --seed_list 1 --sub_class base
    python run_tuning.py --dataset caltech101 --trainer CoOp --seed_list 1 --sub_class new
    
  4. Run FVG-PT fine-tuning based on the pre-tuned backbone models:
    python run_tuning.py --dataset caltech101 --trainer FVGPT_CoOp --seed_list 1 --sub_class base --lambda_base 10.0
    python run_tuning.py --dataset caltech101 --trainer FVGPT_CoOp --seed_list 1 --sub_class new --lambda_base 10.0
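To sweep both sub-class splits in one go, a small wrapper might look like this. It is a dry run that only assembles and prints the commands, assuming the run_tuning.py flags documented above; swap the final printf for direct execution once the environment from INSTALL.md is ready.

```shell
#!/bin/sh
# Illustrative dry-run: build the FVG-PT commands for both sub-class
# splits of one dataset.
DATASET=caltech101
CMDS=""
for SUB in base new; do
  CMDS="$CMDS python run_tuning.py --dataset $DATASET --trainer FVGPT_CoOp --seed_list 1 --sub_class $SUB --lambda_base 10.0
"
done
printf "%s" "$CMDS"
```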
    

Code Statement

We are still working on organizing the code repository.

βœ… One-key fine-tuning pipeline for backbone models (run_tuning.py)

βœ… FVG-PT foreground datasets and the corresponding Dassl DataLoader

πŸ“… (To-do) FVG-PT Trainers

The full code implementation will be released in the future.
