ziiiteng committed (verified)
Commit 7b13465 · 1 Parent(s): 7d047ee

Update README.md

Files changed (1): README.md (+61, -3)
README.md CHANGED
@@ -1,3 +1,61 @@

Removed (previous front matter):
- ---
- license: mit
- ---
Added (new README content):

---
license: mit
datasets:
- BleachNick/UltraEdit
language:
- en
metrics:
- accuracy
base_model:
- openai/clip-vit-large-patch14-336
- openai/clip-vit-large-patch14
- timm/ViT-SO400M-14-SigLIP
- timm/ViT-SO400M-14-SigLIP2
- timm/ViT-SO400M-16-SigLIP2-384
- timm/ViT-SO400M-14-SigLIP-384
pipeline_tag: zero-shot-image-classification
---
# CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions

<!-- Provide a quick summary of what the model is/does. -->

### Model Description
Despite the success of Vision-Language Models (VLMs) like CLIP in aligning vision and language, their proficiency in detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a novel framework that bolsters CLIP’s fine-grained perception through two core innovations. Firstly, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs. Coupled with a symmetric hard negative contrastive loss, this enables the model to effectively distinguish subtle visual-semantic differences. Secondly, CLIP-IN incorporates long descriptive captions, utilizing rotary positional encodings to capture rich semantic context often missed by standard CLIP. Our experiments demonstrate that CLIP-IN achieves substantial gains on the MMVP benchmark and various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN’s visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of synergizing targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.
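The card does not spell out the loss itself; purely as an illustration of how a symmetric hard-negative contrastive objective of this kind is commonly implemented, here is a minimal PyTorch sketch. The tensor names, the per-sample pairing of edited images/captions from the instruction-editing data, and the temperature handling are assumptions, not the authors' released code.

```python
# Sketch of a symmetric hard-negative contrastive loss (illustrative, not the authors' code).
# Assumes L2-normalized embeddings:
#   img, txt         : (B, D) matched image/text pairs
#   img_neg, txt_neg : (B, D) per-sample hard negatives from instruction-editing data
#                      (the edited image / edited caption for each pair)
import torch
import torch.nn.functional as F

def symmetric_hard_negative_loss(img, txt, img_neg, txt_neg, logit_scale):
    # Standard CLIP in-batch similarity matrices.
    logits_i2t = logit_scale * img @ txt.t()                        # (B, B)
    logits_t2i = logit_scale * txt @ img.t()                        # (B, B)

    # Append each sample's hard negative as an extra column:
    # image i vs. its edited caption, and caption i vs. its edited image.
    hard_i2t = logit_scale * (img * txt_neg).sum(-1, keepdim=True)  # (B, 1)
    hard_t2i = logit_scale * (txt * img_neg).sum(-1, keepdim=True)  # (B, 1)

    logits_i2t = torch.cat([logits_i2t, hard_i2t], dim=1)           # (B, B+1)
    logits_t2i = torch.cat([logits_t2i, hard_t2i], dim=1)           # (B, B+1)

    # Positives stay on the diagonal; the loss is symmetric across both directions.
    labels = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits_i2t, labels) +
                  F.cross_entropy(logits_t2i, labels))
```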
## Evaluation

![result1](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/bpm9k7S7TOBwE5bO4QASM.jpeg)

![result2](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/PweGh-UQSH3iKN7lEBc3t.jpeg)

![result3](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/N89bao5CAvgLh_ptaBEo7.jpeg)

![result4](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/qV6V3155gNkDvU5NkWzJB.png)

![result5](https://cdn-uploads.huggingface.co/production/uploads/678db465b6eed17d5640d492/feciTXg-I0BILZ_HVIxOr.png)
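Given the `zero-shot-image-classification` pipeline tag and the CLIP backbones listed in the metadata, inference presumably follows the standard CLIP zero-shot recipe. The sketch below runs that recipe with the `openai/clip-vit-large-patch14` base model through the `transformers` CLIP API; the card does not document how the CLIP-IN weights themselves are loaded, so the model id and image path are placeholders.

```python
# Standard CLIP zero-shot classification via transformers (base model shown;
# swap in the CLIP-IN checkpoint once its loading path is documented).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-large-patch14"   # placeholder for the CLIP-IN weights
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")            # placeholder image
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: (num_images, num_labels); softmax gives class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print({label: round(float(p), 4) for label, p in zip(labels, probs[0])})
```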
**BibTeX:**

    @article{Wang2025CLIPINEF,
      title={CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions},
      author={Ziteng Wang and Siqi Yang and Limeng Qiao and Lin Ma},
      journal={NeurIPS},
      year={2025}
    }