---
license: mit
datasets:
- BleachNick/UltraEdit
language:
- en
metrics:
- accuracy
base_model:
- openai/clip-vit-large-patch14-336
- openai/clip-vit-large-patch14
- timm/ViT-SO400M-14-SigLIP
- timm/ViT-SO400M-14-SigLIP2
- timm/ViT-SO400M-16-SigLIP2-384
- timm/ViT-SO400M-14-SigLIP-384
pipeline_tag: zero-shot-image-classification
---
# CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions

## Model Description
Despite the success of Vision-Language Models (VLMs) like CLIP in aligning vision and language, their proficiency in detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a novel framework that bolsters CLIP’s fine-grained perception through two core innovations. First, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs. Coupled with a symmetric hard negative contrastive loss, this enables the model to effectively distinguish subtle visual-semantic differences. Second, CLIP-IN incorporates long descriptive captions, utilizing rotary positional encodings to capture rich semantic context often missed by standard CLIP. Our experiments demonstrate that CLIP-IN achieves substantial gains on the MMVP benchmark and various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN’s visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of synergizing targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.
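The symmetric hard negative loss described above can be viewed as standard CLIP InfoNCE with the edited images and captions from the instruction-editing data appended as extra negatives in both matching directions. Below is a minimal PyTorch sketch of that idea, assuming each original pair is accompanied by one edited counterpart; the function name `clip_in_loss` and the batching scheme are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def clip_in_loss(img, txt, img_neg, txt_neg, temperature=0.07):
    """Hedged sketch of a symmetric hard-negative contrastive loss.

    img, txt:         (B, D) L2-normalized embeddings of the original pairs.
    img_neg, txt_neg: (B, D) embeddings of their edited hard negatives
                      (assumption: one edited counterpart per pair).
    """
    # Each image must match its own caption against all in-batch captions
    # plus the B edited captions appended as hard negatives.
    logits_i2t = img @ torch.cat([txt, txt_neg], dim=0).t() / temperature  # (B, 2B)
    # Symmetric direction: each caption against in-batch and edited images.
    logits_t2i = txt @ torch.cat([img, img_neg], dim=0).t() / temperature  # (B, 2B)
    labels = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits_i2t, labels) +
                  F.cross_entropy(logits_t2i, labels))
```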
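For the long-caption branch, rotary positional encodings make attention depend on relative token offsets, which is what lets the text encoder handle captions far longer than CLIP’s usual 77-token context. A minimal sketch of rotary encoding applied to a query or key tensor follows; the interleaved channel pairing and frequency base are common defaults assumed here, not details confirmed by the paper.

```python
import torch

def apply_rope(x, base=10000.0):
    """Rotate interleaved channel pairs of x (..., seq, dim) by
    position-dependent angles; dim must be even."""
    seq, dim = x.shape[-2], x.shape[-1]
    pos = torch.arange(seq, dtype=x.dtype, device=x.device)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=x.dtype, device=x.device) / dim)
    angles = pos[:, None] * freqs[None, :]            # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin              # 2-D rotation per channel pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```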
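Since the model keeps the standard CLIP interface (pipeline tag: zero-shot image classification), inference should look like any CLIP checkpoint under `transformers`. A hedged usage sketch; the repo id below is one of the listed base models used as a placeholder, so substitute the released CLIP-IN weights.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint: swap in the CLIP-IN repo id.
ckpt = "openai/clip-vit-large-patch14-336"
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # (1, num_labels)
probs = logits.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```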
## Evaluation
**BibTeX:**

```bibtex
@article{Wang2025CLIPINEF,
  title   = {CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions},
  author  = {Ziteng Wang and Siqi Yang and Limeng Qiao and Lin Ma},
  journal = {NeurIPS},
  year    = {2025}
}
```