arxiv:2603.08708

FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

Published on Mar 9 · Submitted by Haoyang Li on Mar 10
Abstract

An adaptive module addresses foreground attention shifts during CLIP-based prompt tuning by enhancing foreground-view quality and mitigating generalization degradation.

AI-generated summary

CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to adapt efficiently to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during tuning. In this paper, we attribute the failure modes of prompt-tuning predictions to shifts in the foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate these shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance foreground-view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to mitigate the generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets show the effectiveness and compatibility of FVG-PT. Code is available at: https://github.com/JREion/FVG-PT
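To make the two core ideas in the summary concrete, here is a minimal sketch of (a) a reliability-gated blend between a full-image feature and a foreground-view feature, and (b) a distillation-style loss that pulls spatial attention toward a foreground mask. The function names, the sigmoid gate form, and the KL-divergence target are all illustrative assumptions; the paper's actual Foreground Reliability Gate and Foreground Distillation Compensation modules may be defined differently.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_foreground_view(image_feat, fg_feat, gate_logit):
    """Blend the full-image feature with the foreground-view feature
    using a learnable reliability gate g in (0, 1).
    NOTE: the sigmoid-gate form is an assumption, not the paper's exact
    Foreground Reliability Gate."""
    g = sigmoid(gate_logit)          # learned scalar reliability
    return g * fg_feat + (1.0 - g) * image_feat

def foreground_distillation_loss(attn, fg_mask, eps=1e-8):
    """KL divergence pushing the encoder's spatial attention toward a
    normalized foreground mask -- a stand-in for the paper's
    Foreground Distillation Compensation objective."""
    p = fg_mask / (fg_mask.sum() + eps)   # target: foreground distribution
    q = attn / (attn.sum() + eps)         # current attention distribution
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

Under this sketch, attention concentrated on foreground positions incurs a lower loss than attention on the background, which is the qualitative behavior the guidance module is meant to encourage.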

Community

Comment from the paper author and submitter:

For CLIP-based prompt tuning, incorporating external knowledge to enhance model capability has become a common strategy. However, existing methods typically rely on LLMs to introduce knowledge from the text modality, paying limited attention to the visual modality. In contrast, this paper introduces visual foreground supervision to correct the erroneous shift of visual attention during prompt tuning.

Alongside the implementation, the authors also release visual foreground datasets and general pipelines, in the hope of inspiring subsequent research in this direction.


Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0


Collections including this paper 0
