---
license: mit
datasets:
- BleachNick/UltraEdit
language:
- en
metrics:
- accuracy
base_model:
- openai/clip-vit-large-patch14-336
- openai/clip-vit-large-patch14
- timm/ViT-SO400M-14-SigLIP
- timm/ViT-SO400M-14-SigLIP2
- timm/ViT-SO400M-16-SigLIP2-384
- timm/ViT-SO400M-14-SigLIP-384
pipeline_tag: zero-shot-image-classification
---

# CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions

### Model Description

Despite the success of Vision-Language Models (VLMs) such as CLIP in aligning vision and language, detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a framework that bolsters CLIP's fine-grained perception through two core innovations. First, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs; coupled with a symmetric hard-negative contrastive loss, these pairs enable the model to distinguish subtle visual-semantic differences.
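
The loss itself is not spelled out in this card, but the idea can be sketched as a CLIP-style InfoNCE objective whose logit matrix gains one extra column per direction: each image also scores against its edited caption, and each caption against the edited image. A minimal PyTorch sketch, assuming pre-pooled, matched-batch embeddings (all names are illustrative, and the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def symmetric_hard_negative_loss(img, txt, img_edit, txt_edit, tau=0.07):
    """CLIP-style InfoNCE over matched (image, caption) pairs, widened with
    hard negatives from instruction-editing data: the edited image and the
    edited caption of each pair. All inputs are (batch, dim) embeddings."""
    img, txt = F.normalize(img, dim=-1), F.normalize(txt, dim=-1)
    img_edit, txt_edit = F.normalize(img_edit, dim=-1), F.normalize(txt_edit, dim=-1)

    # Image -> text: similarities to all in-batch captions, plus one
    # hard-negative column for each image's own edited caption.
    i2t = torch.cat([img @ txt.t(), (img * txt_edit).sum(-1, keepdim=True)], dim=1)
    # Text -> image: symmetric, with the edited images as hard negatives.
    t2i = torch.cat([txt @ img.t(), (txt * img_edit).sum(-1, keepdim=True)], dim=1)

    labels = torch.arange(img.size(0), device=img.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(i2t / tau, labels) + F.cross_entropy(t2i / tau, labels))
```
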
Second, CLIP-IN incorporates long descriptive captions, using rotary positional encodings (RoPE) to capture rich semantic context that standard CLIP often misses.
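
How CLIP-IN inserts the rotary encodings into its text tower (which layers, which rotation layout) is not detailed in this card; the generic rotate-half form of RoPE is sketched below. Because it encodes position as a phase applied to query/key features, attention scores depend on relative offsets, which tends to extrapolate to long captions better than CLIP's learned absolute position table.

```python
import torch

def apply_rope(x, base=10000.0):
    """Rotate a (batch, seq_len, dim) tensor of queries or keys by
    position-dependent angles (rotate-half RoPE); dim must be even."""
    _, n, d = x.shape
    half = d // 2
    # One frequency per feature pair, geometrically spaced as in the RoPE paper.
    freqs = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    angles = torch.arange(n, dtype=x.dtype, device=x.device)[:, None] * freqs  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```
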
Our experiments show that CLIP-IN achieves substantial gains on the MMVP benchmark and on various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN's visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of synergizing targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.
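
## Usage

The `pipeline_tag` above advertises zero-shot image classification. Below is a minimal sketch, assuming the released checkpoint is exported in the standard Hugging Face CLIP format; the repo id and image path are placeholders, and the SigLIP-based variants listed under `base_model` would load through `open_clip`/`timm` instead.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "path/to/clip-in-checkpoint"  # placeholder: substitute the actual repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
# Fine-grained prompts of the kind MMVP probes: small visual-semantic differences.
labels = ["a photo of a dog facing left", "a photo of a dog facing right"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, num_labels)
print(dict(zip(labels, logits.softmax(dim=-1)[0].tolist())))
```
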
## Evaluation

![Table 1](table1.png)

![Table 2](tab2.png)

![Table 3](tab3.png)

![Table 5](tab5.png)

![Table 6](tab6.png)
**BibTeX:**

    @article{Wang2025CLIPINEF,
      title={CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions},
      author={Ziteng Wang and Siqi Yang and Limeng Qiao and Lin Ma},
      journal={NeurIPS},
      year={2025}
    }