Update readme
README.md
You can try a Streamlit demo app that uses this model on 🤗 Spaces.

🤗 Hub Model card: https://huggingface.co/flax-community/medclip-roco
## Dataset 🧩
Each image is accompanied by a textual caption. Caption length varies from a few characters (a single word) to 2,000 characters (multiple sentences). During preprocessing we remove all images that have a caption shorter than 10 characters.
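The filtering rule above amounts to a one-line length check. A minimal sketch — the `samples` list and its field names are illustrative placeholders, not the repo's actual ROCO loading code:

```
# Illustrative caption-length filter; `samples` is placeholder data,
# not the repo's actual ROCO loading code.
MIN_CAPTION_LENGTH = 10  # drop any image whose caption is shorter than this

samples = [
    {"image": "roco_0001.jpg", "caption": "Chest X-ray showing cardiomegaly."},
    {"image": "roco_0002.jpg", "caption": "CT"},  # shorter than 10 chars: dropped
]

kept = [s for s in samples if len(s["caption"]) >= MIN_CAPTION_LENGTH]
print([s["image"] for s in kept])
```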

- Training set: 57,780 images with their captions
- Validation set: 7,200
- Test set: 7,650

- [ ] Give an example
## Installation 💽
This repo depends on the master branch of the [Hugging Face Transformers library](https://github.com/huggingface/transformers). First clone the transformers repository, then install it locally (preferably inside a virtual environment) with `pip install -e ".[flax]"`.
## The Model ⚙️
You can load the pretrained model from the Hugging Face Hub with
```
from medclip.modeling_hybrid_clip import FlaxHybridCLIP
```
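For context, loading the published checkpoint would look roughly like the sketch below. The checkpoint id is taken from the model card linked above; the broad exception guard is only there so the snippet degrades gracefully when the `medclip` package or network access is unavailable:

```
# Sketch of loading the pretrained checkpoint (id from the model card above).
# Requires this repo on the Python path plus a Flax-enabled transformers install.
try:
    from medclip.modeling_hybrid_clip import FlaxHybridCLIP

    model = FlaxHybridCLIP.from_pretrained("flax-community/medclip-roco")
    status = "loaded"
except Exception as exc:  # medclip missing, no network access, etc.
    status = f"unavailable ({type(exc).__name__})"
print(status)
```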

You can fine-tune a CLIP model implemented in Flax by simply running `sh run_medclip.sh`.
This is the validation loss curve we observed when we trained the model using the `run_medclip.sh` script.

*(figure: validation loss curve)*
## Limitations 🚨
The current model can identify whether a given radiology image is a PET scan or an ultrasound scan. However, it fails to distinguish a brain scan from a lung scan. ❗️This model **should not** be used in a medical setting without further evaluation❗️.
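The scan-type identification above is standard CLIP-style zero-shot classification: embed the image, embed one prompt per candidate label, and pick the label whose text embedding is most similar to the image embedding. A self-contained sketch with placeholder embeddings — random unit vectors stand in for the real encoder outputs, with the image embedding deliberately built near the first label's:

```
import numpy as np

rng = np.random.default_rng(0)
labels = ["a PET scan", "an ultrasound scan"]

# Placeholder embeddings: in the real pipeline these come from the model's
# text and image encoders. Random unit vectors stand in here, and the image
# embedding is deliberately constructed close to the first label's embedding.
text_emb = rng.normal(size=(len(labels), 512))
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

image_emb = text_emb[0] + 0.01 * rng.normal(size=512)
image_emb /= np.linalg.norm(image_emb)

# With unit-norm embeddings, cosine similarity reduces to a dot product.
scores = text_emb @ image_emb
predicted = labels[int(np.argmax(scores))]
print(predicted)
```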
## Acknowledgements
Huge thanks to the Hugging Face 🤗 team and the Google JAX/Flax team for organizing the community week and letting us use cloud compute for two weeks. We especially thank [@patil-suraj](https://github.com/patil-suraj) & [@patrickvonplaten](https://github.com/patrickvonplaten) for the continued support on Slack and the detailed feedback.
## TODO
- [ ] Evaluation on downstream tasks