AnabSohail committed · Commit 31770ee · verified · 1 Parent(s): 536852b

Update README.md

Files changed (1): README.md (+1 −1)
@@ -1,6 +1,6 @@
 MR-PLIP Description
 MR-PLIP is a vision-language foundation model trained on 34 million multiresolution images curated from the TCGA dataset. It can perform various vision-language processing (VLP) tasks such as image classification, detection, and segmentation.
-Uses
+## Uses
 As per the original CLIP model card, this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
 Direct Use
 Zero-shot image classification, object detection, and segmentation.
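The zero-shot classification described in the README works, CLIP-style, by comparing an image embedding against text embeddings of candidate labels. A minimal sketch of that scoring step follows; the dummy 4-d embeddings and the label set are illustrative stand-ins, not actual MR-PLIP outputs or its real API.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """CLIP-style zero-shot scoring: cosine similarity + softmax over labels."""
    # Normalize so dot products become cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = text_embs @ image_emb                  # one similarity per label
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax over labels
    return labels[int(np.argmax(probs))], probs

# Hypothetical label prompts and dummy embeddings (not real MR-PLIP weights).
labels = ["tumor", "stroma", "necrosis"]
text_embs = np.eye(3, 4)                  # three orthogonal "text" vectors
image_emb = np.array([0.9, 0.1, 0.0, 0.1])  # closest to the first label
best, probs = zero_shot_classify(image_emb, text_embs, labels)
# best → "tumor"
```

In practice the two embeddings would come from the model's image and text encoders; the classifier itself is just this similarity-plus-softmax step, which is what makes arbitrary label sets possible at inference time.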