---
license: cc-by-nc-nd-4.0
language:
- en
tags:
- histology
- pathology
- vision
- pytorch
- self-supervised
- vit
extra_gated_prompt: >-
  This model and associated code are released under the CC-BY-NC-ND 4.0 license
  and may only be used for non-commercial, academic research purposes with
  proper attribution. Any commercial use, sale, or other monetization of the
  UNI model and its derivatives, which include models trained on outputs from
  the UNI model or datasets created from the UNI model, is prohibited and
  requires prior approval. Downloading the model requires prior registration on
  Hugging Face and agreeing to the terms of use. By downloading this model, you
  agree not to distribute, publish or reproduce a copy of the model. If another
  user within your organization wishes to use the UNI model, they must register
  as an individual user and agree to comply with the terms of use. Users may
  not attempt to re-identify the deidentified data used to develop the
  underlying model. If you are a commercial entity, please contact the
  corresponding author or the Mass General Brigham Innovation Office.
extra_gated_fields:
  Full name: text
  Affiliation: text
  Type of affiliation:
    type: select
    options:
      - Academia
      - Industry
      - label: Other
        value: other
  Official email: text
  Please explain your intended research use: text
  I agree to all terms outlined above: checkbox
  I agree to use this model for non-commercial, academic purposes only: checkbox
  I agree not to distribute the model; if another user within your organization wishes to use the UNI model, they must register as an individual user: checkbox
metrics:
- accuracy
pipeline_tag: image-feature-extraction
library_name: timm
---

# Model Card for UNI

\[[Paper](https://www.nature.com/articles/s41591-024-02857-3)\] \[[GitHub](https://github.com/mahmoodlab/uni)\]

## What is UNI?

UNI is the largest pretrained vision encoder for histopathology (100M images, 100K WSIs) _**developed on internal neoplastic, infectious, inflammatory and normal tissue and also made publicly available**_. We show state-of-the-art performance across 34 clinical tasks, with strong performance gains on rare and underrepresented cancer types.
- _**Why use UNI?**_: UNI's pretraining excludes the open datasets and large public histology slide collections (TCGA, CPTAC, PAIP, CAMELYON, PANDA, and others in TCIA) that are routinely used for benchmark development in computational pathology. We therefore make UNI available to the research community for building and evaluating pathology AI models without risk of data contamination on public benchmarks or private histopathology slide collections.

![image/png](uni.jpg)

## Model Description

- **Developed by:** Mahmood Lab AI for Pathology @ Harvard/BWH
- **Model type:** Pretrained vision backbone (ViT-L/16 via DINOv2) for multi-purpose evaluation on histopathology images
- **Pretraining dataset:** Mass-100K, sourced from private histology collections (BWH / MGH), in addition to slides from the public GTEx consortium
- **Repository:** https://github.com/mahmoodlab/UNI
- **Paper:** https://www.nature.com/articles/s41591-024-02857-3
- **License:** CC-BY-NC-ND-4.0

### How To Use (Feature Extraction)

Following authentication (using `huggingface_hub`), the ViT-L/16 model architecture with pretrained weights and image transforms for UNI can be directly loaded using the [timm](https://huggingface.co/docs/hub/en/timm) library. This method automatically downloads the model weights to the [huggingface_hub cache](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache) in your home directory (`~/.cache/huggingface/hub/models--MahmoodLab--UNI`), which `timm` will automatically find when using the commands below:

```python
import timm
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
from huggingface_hub import login

login()  # login with your User Access Token, found at https://huggingface.co/settings/tokens

# pretrained=True is needed to load the UNI weights (and to download them on first use)
# init_values must be passed in to successfully load the LayerScale parameters (e.g. block.0.ls1.gamma)
model = timm.create_model("hf-hub:MahmoodLab/uni", pretrained=True, init_values=1e-5, dynamic_img_size=True)
transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))
model.eval()
```

You can also download the model weights to a specified checkpoint location in your local directory. The `timm` library is still used to define the ViT-L/16 model architecture, while the pretrained weights and image transforms for UNI are loaded and defined manually:

```python
import os
import torch
from torchvision import transforms
import timm
from huggingface_hub import login, hf_hub_download

login()  # login with your User Access Token, found at https://huggingface.co/settings/tokens

local_dir = "../assets/ckpts/vit_large_patch16_224.dinov2.uni_mass100k/"
os.makedirs(local_dir, exist_ok=True)  # create directory if it does not exist
hf_hub_download("MahmoodLab/UNI", filename="pytorch_model.bin", local_dir=local_dir, force_download=True)
model = timm.create_model(
    "vit_large_patch16_224", img_size=224, patch_size=16, init_values=1e-5, num_classes=0, dynamic_img_size=True
)
model.load_state_dict(torch.load(os.path.join(local_dir, "pytorch_model.bin"), map_location="cpu"), strict=True)
transform = transforms.Compose(
    [
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ]
)
model.eval()
```

With your model set to evaluation mode, you can use the UNI pretrained encoder to extract features from histopathology images, as follows:

```python
import torch
from PIL import Image

image = Image.open("uni.jpg")
image = transform(image).unsqueeze(dim=0)  # torch.Tensor with shape [1, 3, 224, 224], after resizing and normalization (ImageNet parameters)
with torch.inference_mode():
    feature_emb = model(image)  # extracted features (torch.Tensor) with shape [1, 1024]
```

These pre-extracted features can then be used for ROI classification (via linear probing), slide classification (via multiple instance learning), and other machine learning settings.
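
As a concrete illustration of linear probing on frozen features, here is a minimal sketch that fits a logistic regression classifier with scikit-learn (our choice of library, not prescribed by this card). The random vectors stand in for UNI class-token embeddings; in practice you would stack the `feature_emb` outputs from above and pair them with your ROI labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for UNI embeddings: one 1024-d vector per ROI,
# with a binary label. Replace with real extracted features.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 1024)).astype(np.float32)
labels = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels
)

# Linear probe: a logistic regression head on frozen features, no fine-tuning.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

With real UNI features the probe typically needs no tuning beyond the regularization strength `C`.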

### Direct Use (with Pre-Extracted and Frozen Features)

The model can be used without fine-tuning to obtain competitive results on:
- ROI classification, with logistic regression classifiers applied on the class token
- ROI classification, with k-nearest neighbors (k-NN) classifiers applied on the class token
- ROI classification, with nearest-centroid classifiers (SimpleShot) applied on the class token
- ROI retrieval, using nearest-neighbors classifiers
- Slide classification, with multiple instance learning (MIL) classifiers applied on a bag of class tokens extracted from the WSI
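
To illustrate the nearest-centroid (SimpleShot-style) setting, here is a minimal NumPy sketch under the usual SimpleShot recipe (subtract the training mean, L2-normalize, assign to the nearest class centroid). The random features and four-class setup are stand-ins for real UNI class-token embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 1024))  # stand-ins for UNI class tokens
train_labels = np.repeat(np.arange(4), 25)  # 4 hypothetical classes, 25 ROIs each
queries = rng.normal(size=(5, 1024))

# SimpleShot recipe: center on the training mean, L2-normalize,
# then classify each query by distance to the class centroids.
mean = train_feats.mean(axis=0)

def normalize(x):
    x = x - mean
    return x / np.linalg.norm(x, axis=1, keepdims=True)

train_n, query_n = normalize(train_feats), normalize(queries)
centroids = np.stack([train_n[train_labels == c].mean(axis=0) for c in range(4)])
dists = np.linalg.norm(query_n[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)  # predicted class index per query
print(preds)
```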

### Downstream Use (Finetuning)

The model can also be fine-tuned, which is recommended for competitive performance on segmentation tasks. We recommend fine-tuning with frameworks specialized for adapting ViTs to dense prediction tasks, such as ViTDet or ViT-Adapter (which depends on Mask2Former).

## Training Details

- **Training data:** Mass-100K, a pretraining dataset (sourced from MGH, BWH, and GTEx) composed of 75,832,905 [256×256] and 24,297,995 [512×512] histology images at 20× resolution, sampled from 100,402 H&E WSIs (100,130,900 images in total)
- **Training regime:** fp16, using PyTorch FSDP mixed precision
- **Training objective:** DINOv2 SSL recipe with the following losses:
  - DINO self-distillation loss with multi-crop
  - iBOT masked-image modeling loss
  - KoLeo regularization on [CLS] tokens
- **Training length:** 125,000 iterations with a batch size of 3,072
- **Model architecture:** ViT-Large (0.3B params): patch size 16, embedding dimension 1024, 16 heads, MLP FFN
- **Hardware used:** 4×8 Nvidia A100 80GB
- **Hours trained:** approx. 1,024 GPU hours (32 wall-clock hours)
- **Cloud provider:** MGB ERIS Research Computing Core
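
For reference, the three loss terms above take the following schematic forms (notation ours, not from this card): $p_t, p_s$ are teacher/student softmax outputs, $u, v$ are different crops of one image, and $d_{n,i}$ is the distance from embedding $x_i$ to its nearest neighbor in the batch.

```latex
\mathcal{L}_{\mathrm{DINO}}  = -\, p_t(u)^{\top} \log p_s(v)
\qquad
\mathcal{L}_{\mathrm{iBOT}}  = -\sum_{i \in \mathrm{masked}} p_t^{(i)\top} \log p_s^{(i)}
\qquad
\mathcal{L}_{\mathrm{KoLeo}} = -\frac{1}{n} \sum_{i=1}^{n} \log d_{n,i},
\quad d_{n,i} = \min_{j \neq i} \lVert x_i - x_j \rVert
```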

## Software Dependencies

**Python packages**
- `torch>=2.0`: https://pytorch.org
- `xformers>=0.0.18`: https://github.com/facebookresearch/xformers
- `timm>=0.9.8`: https://github.com/huggingface/pytorch-image-models

**Repositories**
- DINOv2 (self-supervised learning): https://github.com/facebookresearch/dinov2
- CLAM (slide classification): https://github.com/mahmoodlab/CLAM
- Mask2Former (cell and tissue segmentation): https://github.com/facebookresearch/Mask2Former
- ViT-Adapter (cell and tissue segmentation): https://github.com/czczup/ViT-Adapter
- LGSSL (linear probe and few-shot evaluation): https://github.com/mbanani/lgssl

## License

This model and associated code are released under the CC-BY-NC-ND 4.0 license and may only be used for non-commercial, academic research purposes with proper attribution. Any commercial use, sale, or other monetization of the UNI model and its derivatives, which include models trained on outputs from the UNI model or datasets created from the UNI model, is prohibited and requires prior approval. Downloading the model requires prior registration on Hugging Face and agreeing to the terms of use. By downloading this model, you agree not to distribute, publish or reproduce a copy of the model. If another user within your organization wishes to use the UNI model, they must register as an individual user and agree to comply with the terms of use. Users may not attempt to re-identify the deidentified data used to develop the underlying model. If you are a commercial entity, please contact the corresponding author or the Mass General Brigham Innovation Office.

## Contact

For any additional questions or comments, contact Faisal Mahmood (`faisalmahmood@bwh.harvard.edu`),
Richard J. Chen (`richardchen@g.harvard.edu`),
Tong Ding (`tong_ding@g.harvard.edu`),
or Ming Y. Lu (`mlu16@bwh.harvard.edu`).

## Acknowledgements

The project was built on top of amazing repositories such as [ViT](https://github.com/google-research/big_vision), [DINOv2](https://github.com/facebookresearch/dinov2), [LGSSL](https://github.com/mbanani/lgssl), and [Timm](https://github.com/huggingface/pytorch-image-models/) (ViT model implementation). We thank the authors and developers for their contributions.

## BibTeX

If you found our work useful in your research, please consider citing:
```bibtex
@article{chen2024towards,
  title={Towards a General-Purpose Foundation Model for Computational Pathology},
  author={Chen, Richard J and Ding, Tong and Lu, Ming Y and Williamson, Drew FK and Jaume, Guillaume and Song, Andrew H and Chen, Bowen and Zhang, Andrew and Shao, Daniel and Shaban, Muhammad and others},
  journal={Nature Medicine},
  year={2024},
  publisher={Nature Publishing Group US New York},
  doi={10.1038/s41591-024-02857-3}
}
```
Works that use UNI should also attribute ViT and DINOv2.