## MR-PLIP Description
MR-PLIP is a vision-language foundation model trained on 34 million multi-resolution images curated from the TCGA dataset. It can perform various vision-language processing (VLP) tasks such as image classification, detection, and segmentation.
## Uses
As per the original CLIP model card, this model is intended as a research output for research communities. We hope it will enable researchers to better understand and explore zero-shot, arbitrary image classification, and that it can support interdisciplinary studies of the potential impact of such models.
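Zero-shot classification with a CLIP-style model works by embedding an image and a set of text prompts (one per candidate class) into a shared space, then scoring each class by cosine similarity. The sketch below illustrates only that scoring step; the random vectors stand in for the image and text encoder outputs, since this example does not load MR-PLIP's actual weights, and the temperature value is an illustrative assumption.

```python
import numpy as np

def zero_shot_scores(image_emb: np.ndarray, text_embs: np.ndarray,
                     temperature: float = 0.01) -> np.ndarray:
    """Score candidate classes CLIP-style.

    image_emb: (d,) embedding of one image.
    text_embs: (k, d) embeddings of k class prompts,
               e.g. "an H&E image of <class>".
    Returns a (k,) probability distribution over the classes.
    """
    # L2-normalize so the dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Scale similarities by the temperature, then softmax over classes
    logits = text_embs @ image_emb / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy usage: random placeholder embeddings in place of encoder outputs
rng = np.random.default_rng(0)
image_emb = rng.standard_normal(512)
text_embs = rng.standard_normal((3, 512))  # three hypothetical classes
probs = zero_shot_scores(image_emb, text_embs)
print(probs)
```

The predicted class is simply the argmax of the returned distribution; in a real pipeline the embeddings would come from the model's image and text encoders.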