Improve dataset card for CLIP-SVD (Singular Value Few-shot Adaptation of Vision-Language Models)

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +97 -4
README.md CHANGED
@@ -1,10 +1,11 @@
  ---
  license: apache-2.0
  task_categories:
  - zero-shot-image-classification
  - image-to-text
- language:
- - en
  tags:
  - vision-language-model
  - few-shot-learning
@@ -15,11 +16,99 @@ tags:
  # Biomedical Few-shot Image Classification for Vision-Language Models
 
  [![paper](https://img.shields.io/badge/arXiv-BiomedCoOp-<COLOR>.svg)](https://arxiv.org/abs/2411.15232)
- [![paper](https://img.shields.io/badge/arXiv-CLIP_SVD-<COLOR>.svg)](https://www.arxiv.org/abs/2509.03740)
  [![Code](https://img.shields.io/badge/Code-BiomedCoOp-orange.svg)](https://github.com/HealthX-Lab/BiomedCoOp)
  [![Code](https://img.shields.io/badge/Code-CLIP_SVD-orange.svg)](https://github.com/HealthX-Lab/CLIP-SVD)
 
- ## Overview
  *<p align="justify"> Recent advancements in vision-language models (VLMs), such as CLIP, have demonstrated substantial success in self-supervised representation learning for vision tasks.
  However, effectively adapting VLMs to downstream applications remains challenging, as their accuracy often depends on time-intensive and expertise-demanding prompt
  engineering, while full model fine-tuning is costly. This is particularly true for biomedical images, which, unlike natural images, typically suffer from limited
@@ -74,6 +163,10 @@ BTMRI/
  |–– split_BTMRI.json
  ```
 
  ## Citation
  If you use our work, please consider citing:
  ```bibtex
 
  ---
+ language:
+ - en
  license: apache-2.0
  task_categories:
  - zero-shot-image-classification
  - image-to-text
+ - image-text-to-text
  tags:
  - vision-language-model
  - few-shot-learning
 
  # Biomedical Few-shot Image Classification for Vision-Language Models
 
  [![paper](https://img.shields.io/badge/arXiv-BiomedCoOp-<COLOR>.svg)](https://arxiv.org/abs/2411.15232)
+ [![paper](https://img.shields.io/badge/HuggingFace_Paper-CLIP_SVD-<COLOR>.svg)](https://huggingface.co/papers/2509.03740)
  [![Code](https://img.shields.io/badge/Code-BiomedCoOp-orange.svg)](https://github.com/HealthX-Lab/BiomedCoOp)
  [![Code](https://img.shields.io/badge/Code-CLIP_SVD-orange.svg)](https://github.com/HealthX-Lab/CLIP-SVD)
 
+ ## Overview (CLIP-SVD)
+
+ Vision-language models (VLMs) like CLIP have shown impressive zero-shot and few-shot learning capabilities across diverse applications. However, adapting these models to new fine-grained domains remains difficult due to reliance on prompt engineering and the high cost of full model fine-tuning. Existing adaptation approaches rely on augmented components, such as prompt tokens and adapter modules, which can limit adaptation quality, destabilize the model, and compromise the rich knowledge learned during pretraining. In this work, we present CLIP-SVD, a novel multi-modal and parameter-efficient adaptation technique that leverages Singular Value Decomposition (SVD) to modify the internal parameter space of CLIP without injecting additional modules. Specifically, we fine-tune only the singular values of the CLIP parameter matrices to rescale the basis vectors for domain adaptation while retaining the pretrained model. This design enables enhanced adaptation performance using only 0.04% of the model's total parameters and better preservation of its generalization ability. CLIP-SVD achieves state-of-the-art classification results on 11 natural and 10 biomedical datasets, outperforming previous methods in both accuracy and generalization under few-shot settings. Additionally, we leverage a natural language-based approach to analyze the effectiveness and dynamics of the CLIP adaptation to enable interpretability of CLIP-SVD. The code is publicly available at https://github.com/HealthX-Lab/CLIP-SVD.
+
+ ## Method (CLIP-SVD)
+
+ <p float="left">
+ <img src="https://github.com/HealthX-Lab/CLIP-SVD/blob/main/assets/CLIP-SVD.png" width="100%" />
+ </p>
+
+ 1) **SVD-Based Few-Shot Adaptation**: We propose an SVD-based adaptation framework for Transformer-based multi-modal models (e.g., CLIP and BiomedCLIP) for the first time, requiring only **0.04%** of the model's total parameters, significantly fewer than other multi-modal methods.
+ 2) **Comprehensive Validation Across Domains**: We perform extensive evaluation on 11 natural and 10 biomedical datasets, showing that CLIP-SVD outperforms state-of-the-art methods in both accuracy and generalization.
+ 3) **Interpretable Adaptation Dynamics**: By analyzing ranked weight changes, we employ a natural language-facilitated approach to intuitively interpret the effectiveness and dynamics of task-specific CLIP adaptation.
+ 4) **Semantic Interpretation for Biomedical Applications**: To address the need for interpretability of attention heads in CLIP for biomedical use cases (e.g., CLIP-SVD analysis), we build the first corpus of biomedical image descriptions.
+
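The singular-value fine-tuning described in the Method list above can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration of the idea, not the authors' released implementation; the 512×512 random matrix is a stand-in for one pretrained CLIP weight matrix.

```python
import torch

# Illustrative sketch of the core CLIP-SVD idea: decompose a pretrained
# weight matrix W = U @ diag(S) @ Vh and train only a rescaling of the
# singular values S, keeping the basis vectors U and Vh frozen.
torch.manual_seed(0)
W = torch.randn(512, 512)  # stand-in for a pretrained CLIP weight matrix

U, S, Vh = torch.linalg.svd(W, full_matrices=False)
delta = torch.nn.Parameter(torch.zeros_like(S))  # the only trainable tensor

def adapted_weight() -> torch.Tensor:
    # Rescaled singular values reweight the pretrained basis vectors
    # without injecting any new modules or directions.
    return U @ torch.diag(S + delta) @ Vh

# With delta = 0 the adapted weight equals the pretrained weight, and only
# S.numel() of W.numel() entries are ever updated by the optimizer.
print(torch.allclose(adapted_weight(), W, atol=1e-3))
print(delta.numel(), W.numel())  # 512 trainable vs. 262144 total entries
```

Per matrix, only the min(m, n) singular values are trainable; summed over all of CLIP's parameter matrices, a budget of this kind is what yields the tiny overall parameter fraction (0.04%) quoted above.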
+ ## Results (CLIP-SVD)
+ The results below report accuracy in few-shot settings, as well as on base and novel classes, across 11 natural-domain and 10 biomedical-domain recognition datasets, averaged over 3 seeds.
+ ### Natural Few-shot Evaluation
+ | **Method** | K=1 | K=2 | K=4 | K=8 | K=16 |
+ |------------------|:-----:|:-----:|:-----:|:-----:|:-----:|
+ | Zero-shot CLIP | – | – | 65.36 | – | – |
+ | CoOp | 68.09 | 70.13 | 73.59 | 76.45 | 79.01 |
+ | CoCoOp | 66.95 | 67.63 | 71.98 | 72.92 | 75.02 |
+ | ProGrad | 68.20 | 71.78 | 74.21 | 77.93 | 79.20 |
+ | KgCoOp | 69.51 | 71.57 | 74.48 | 75.82 | 77.26 |
+ | MaPLe | 69.27 | 72.58 | 75.37 | 78.89 | 81.79 |
+ | Linear Probing | 45.77 | 56.92 | 66.79 | 73.43 | 78.39 |
+ | LP++ | 70.35 | 72.93 | 75.77 | 77.94 | 80.32 |
+ | CLIP-Adapter | 67.87 | 70.20 | 72.65 | 76.92 | 79.86 |
+ | Tip-Adapter | 68.89 | 70.42 | 72.69 | 74.41 | 76.44 |
+ | Tip-Adapter-F | 70.62 | 73.08 | 75.75 | 78.51 | 81.15 |
+ | GDA | 69.39 | 73.09 | 76.24 | 79.71 | 81.70 |
+ | ProKeR | 71.32 | 73.74 | 76.23 | 79.84 | 82.01 |
+ | AdaLoRA | 69.04 | 72.21 | 75.50 | 78.13 | 80.95 |
+ | TCP | 70.63 | 73.59 | 76.07 | 78.39 | 80.98 |
+ | CLIP-LoRA | _72.20_ | _75.41_ | _77.32_ | _80.10_ | _82.89_ |
+ | **CLIP-SVD (Ours)** | **73.20** | **76.06** | **78.18** | **80.55** | **82.97** |
+
+ ### Biomedical Few-shot Evaluation
+ | **Method** | K=1 | K=2 | K=4 | K=8 | K=16 |
+ |------------------|:-----:|:-----:|:-----:|:-----:|:-----:|
+ | Zero-shot BiomedCLIP | 42.38 | – | – | – | – |
+ | CoOp | 52.59 | 55.71 | 61.35 | 67.74 | 71.48 |
+ | CoCoOp | 50.88 | 53.91 | 57.63 | 63.15 | 67.51 |
+ | ProGrad | 53.67 | 56.42 | 62.10 | 67.06 | 69.21 |
+ | KgCoOp | 54.31 | 55.79 | 60.92 | 66.00 | 67.71 |
+ | Linear Probing | 48.91 | 55.82 | 62.12 | 67.33 | 70.81 |
+ | LP++ | 49.27 | 55.88 | 61.30 | 65.48 | 70.09 |
+ | CLIP-Adapter | 45.53 | 44.70 | 45.30 | 46.54 | 48.46 |
+ | Tip-Adapter | 50.35 | 53.50 | 58.33 | 62.01 | 67.60 |
+ | Tip-Adapter-F | 52.55 | 54.17 | 62.30 | 68.12 | 68.12 |
+ | MaPLe | 37.99 | 40.89 | 44.09 | 47.37 | 52.93 |
+ | BiomedCoOp | **56.87** | _59.32_ | _64.34_ | _68.96_ | _73.41_ |
+ | **CLIP-SVD (Ours)** | _56.35_ | **62.63** | **68.02** | **73.26** | **76.46** |
+
+ ### Base-to-Novel Generalization (Natural Domain)
+ | Name | Base Acc. | Novel Acc. | HM |
+ |--------------------------------------------|:---------:|:----------:|:---------:|
+ | CLIP | 69.34 | 74.22 | 71.70 |
+ | CoOp | 82.69 | 63.22 | 71.66 |
+ | CoCoOp | 80.47 | 71.69 | 75.83 |
+ | KgCoOp | 80.73 | 73.60 | 77.00 |
+ | ProGrad | 82.48 | 70.75 | 76.16 |
+ | MaPLe | 82.28 | 75.14 | 78.55 |
+ | IVLP | 84.21 | 71.79 | 77.51 |
+ | GDA | 83.96 | 74.53 | 78.72 |
+ | TCP | 84.13 | 75.36 | 79.51 |
+ | CLIP-LoRA | 84.10 | 74.80 | 79.18 |
+ | **CLIP-SVD (ours)** | **84.38** | **76.29** | **80.13** |
+
+ ### Base-to-Novel Generalization (Biomedical Domain)
+ | Name | Base Acc. | Novel Acc. | HM |
+ |------------------------------------------------|:---------:|:----------:|:---------:|
+ | BiomedCLIP | 49.27 | 67.17 | 55.23 |
+ | CoOp | 76.71 | 65.34 | 68.80 |
+ | CoCoOp | 75.52 | 67.74 | 69.11 |
+ | KgCoOp | 71.90 | 65.94 | 67.22 |
+ | ProGrad | 75.69 | 67.33 | 69.86 |
+ | MaPLe | 65.40 | 49.51 | 53.10 |
+ | XCoOp | 74.62 | 63.19 | 68.43 |
+ | BiomedCoOp | 78.60 | 73.90 | 74.04 |
+ | GDA | 57.70 | 64.66 | 60.98 |
+ | DCPL | 73.70 | 69.35 | 71.46 |
+ | CLIP-LoRA | 70.56 | 59.84 | 64.76 |
+ | **CLIP-SVD (ours)** | **82.64** | **74.31** | **78.25** |
+
+ ---
+
+ ## Overview (BiomedCoOp)
  *<p align="justify"> Recent advancements in vision-language models (VLMs), such as CLIP, have demonstrated substantial success in self-supervised representation learning for vision tasks.
  However, effectively adapting VLMs to downstream applications remains challenging, as their accuracy often depends on time-intensive and expertise-demanding prompt
  engineering, while full model fine-tuning is costly. This is particularly true for biomedical images, which, unlike natural images, typically suffer from limited
 
  |–– split_BTMRI.json
  ```
 
+ ## Sample Usage
+
+ For installation, data preparation, training, evaluation, and reproduction of results using our pre-trained models, please refer to the [RUN.md](https://github.com/HealthX-Lab/CLIP-SVD/blob/main/assets/RUN.md) file in the [CLIP-SVD GitHub repository](https://github.com/HealthX-Lab/CLIP-SVD).
+
  ## Citation
  If you use our work, please consider citing:
  ```bibtex