---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- medical
- biology
---
### Model Checkpoints and Logs

| Name | Few-Shot | Base-to-Novel |
|-----------------------------------------------------------|:---------:|:----------:|
| [**BiomedCoOp**](https://github.com/HealthX-Lab/BiomedCoOp/blob/main/trainers/BiomedCoOp/biomedcoop_biomedclip.py) | [link](https://huggingface.co/TahaKoleilat/BiomedCoOp/tree/main/few_shot) | [link](https://huggingface.co/TahaKoleilat/BiomedCoOp/tree/main/base2new) |
### Reproducing Results

Run the following scripts to evaluate the checkpoints and reproduce the test results. Note that these scripts automatically download the required model weights:

```bash
CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_fewshot.sh <data directory> <dataset> <nb of shots>
# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_fewshot.sh data btmri 16

CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_base2new.sh <data directory> <dataset> <nb of shots>
# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_base2new.sh data btmri 16
```
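If you prefer to fetch the checkpoints yourself rather than relying on the scripts, a minimal sketch using the `huggingface_hub` library might look like the following. This is an assumption for illustration — the official scripts may download weights by a different mechanism, and `download_checkpoints` and the folder patterns are hypothetical names based on the repository layout linked in the table above:

```python
# Manual checkpoint download sketch (illustrative; the eval scripts
# normally handle this automatically).
REPO_ID = "TahaKoleilat/BiomedCoOp"

# Subfolders matching the Few-Shot / Base-to-Novel links in the table.
SETTINGS = {"few_shot": "few_shot/*", "base2new": "base2new/*"}

def download_checkpoints(setting: str, local_dir: str = "checkpoints") -> str:
    """Fetch only the files for one evaluation setting; returns the local path."""
    # Deferred import: requires `pip install huggingface_hub`.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id=REPO_ID,
        allow_patterns=[SETTINGS[setting]],
        local_dir=local_dir,
    )

# Usage (downloads the few-shot checkpoints into ./checkpoints):
# download_checkpoints("few_shot")
```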
### Citation

If you use our work, please consider citing:

```bibtex
@article{koleilat2024biomedcoop,
  title={BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models},
  author={Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming},
  journal={arXiv preprint arXiv:2411.15232},
  year={2024}
}
```