BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models
Paper: [arXiv:2411.15232](https://arxiv.org/abs/2411.15232)
| Model | Few-Shot Checkpoints | Base-to-Novel Checkpoints |
|---|---|---|
| BiomedCoOp | link | link |
Run the following scripts to evaluate with the released checkpoints. The required model weights are downloaded automatically:
```bash
CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_fewshot.sh <data directory> <dataset> <nb of shots>

# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_fewshot.sh data btmri 16
```
```bash
CUDA_VISIBLE_DEVICES=<GPU number> bash scripts/biomedcoop/eval_base2new.sh <data directory> <dataset> <nb of shots>

# Example on BTMRI using 16 shots and the BiomedCLIP model on GPU 0
CUDA_VISIBLE_DEVICES=0 bash scripts/biomedcoop/eval_base2new.sh data btmri 16
```
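To compare results across shot budgets, the few-shot script above can be wrapped in a simple loop. This is a minimal sketch that assumes the repository layout shown in the commands above; the dataset, data directory, and GPU values are placeholders, and the loop only prints each command so it can be inspected before running:

```shell
#!/usr/bin/env bash
# Sweep the few-shot evaluation over several shot counts (dry run:
# prints each command; swap `echo` for `eval` to actually execute).
set -euo pipefail

DATA_DIR=data   # root data directory (placeholder, as in the example above)
DATASET=btmri   # dataset name (placeholder)
GPU=0           # GPU index (placeholder)

for SHOTS in 1 2 4 8 16; do
  CMD="CUDA_VISIBLE_DEVICES=${GPU} bash scripts/biomedcoop/eval_fewshot.sh ${DATA_DIR} ${DATASET} ${SHOTS}"
  echo "${CMD}"
done
```

Replacing `echo "${CMD}"` with `eval "${CMD}"` runs the sweep for real once the data directory and checkpoints are in place.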
If you find our work useful, please consider citing:
```bibtex
@article{koleilat2024biomedcoop,
  title={BiomedCoOp: Learning to Prompt for Biomedical Vision-Language Models},
  author={Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming},
  journal={arXiv preprint arXiv:2411.15232},
  year={2024}
}
```