---
license: mit
datasets:
- BleachNick/UltraEdit
language:
- en
metrics:
- accuracy
base_model:
- openai/clip-vit-large-patch14-336
- openai/clip-vit-large-patch14
- timm/ViT-SO400M-14-SigLIP
- timm/ViT-SO400M-14-SigLIP2
- timm/ViT-SO400M-16-SigLIP2-384
- timm/ViT-SO400M-14-SigLIP-384
pipeline_tag: zero-shot-image-classification
---

# CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions

## Model Description

Despite the success of Vision-Language Models (VLMs) like CLIP in aligning vision and language, their proficiency in detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a novel framework that bolsters CLIP's fine-grained perception through two core innovations. First, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs. Coupled with a symmetric hard-negative contrastive loss, this enables the model to effectively distinguish subtle visual-semantic differences. Second, CLIP-IN incorporates long descriptive captions, using rotary positional encodings to capture rich semantic context often missed by standard CLIP. Our experiments show that CLIP-IN achieves substantial gains on the MMVP benchmark and various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN's visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of combining targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.
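
To make the first idea concrete, the sketch below shows one plausible form of a symmetric hard-negative contrastive objective in PyTorch. It is an illustration only, not the released training code: the function name, tensor shapes, and the exact way the edited image/caption embeddings enter the loss are assumptions; the paper defines the actual formulation.

```python
import torch
import torch.nn.functional as F

def symmetric_hard_negative_loss(img, txt, img_edit, txt_edit, temperature=0.07):
    """Illustrative only. img/txt: (B, D) L2-normalized embeddings of matched
    image-caption pairs; img_edit/txt_edit: (B, D) embeddings of the edited
    image and edited caption that act as hard negatives for each pair."""
    logits = img @ txt.t() / temperature                              # in-batch image-to-text similarities, (B, B)
    hard_i2t = (img * txt_edit).sum(-1, keepdim=True) / temperature   # each image vs. its edited caption, (B, 1)
    hard_t2i = (txt * img_edit).sum(-1, keepdim=True) / temperature   # each caption vs. its edited image, (B, 1)

    labels = torch.arange(img.size(0), device=img.device)             # positives sit on the diagonal
    loss_i2t = F.cross_entropy(torch.cat([logits, hard_i2t], dim=1), labels)
    loss_t2i = F.cross_entropy(torch.cat([logits.t(), hard_t2i], dim=1), labels)
    return 0.5 * (loss_i2t + loss_t2i)                                # symmetric over both directions
```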


## Evaluation

![result1](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/bpm9k7S7TOBwE5bO4QASM.jpeg)

![result2](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/PweGh-UQSH3iKN7lEBc3t.jpeg)

![result3](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/N89bao5CAvgLh_ptaBEo7.jpeg)

![result4](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/qV6V3155gNkDvU5NkWzJB.png)

![result5](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/feciTXg-I0BILZ_HVIxOr.png)
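
For inference, the card is tagged for zero-shot image classification, so a standard CLIP-style usage pattern should apply. The snippet below is a minimal sketch assuming the checkpoint loads with the Hugging Face `transformers` CLIP classes; the repository id is a placeholder, and the released weights may instead require `open_clip`/`timm`-style loading depending on the backbone.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "path/to/CLIP-IN-checkpoint"  # placeholder: substitute the actual repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # (1, num_labels)
print(dict(zip(labels, logits.softmax(dim=-1)[0].tolist())))
```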

**BibTeX:**

```bibtex
@article{Wang2025CLIPINEF,
  title   = {CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions},
  author  = {Ziteng Wang and Siqi Yang and Limeng Qiao and Lin Ma},
  journal = {NeurIPS},
  year    = {2025}
}
```