|
|
--- |
|
|
license: apache-2.0 |
|
|
tags: |
|
|
- pathology |
|
|
- foundation-model |
|
|
- medical-imaging |
|
|
- computational-pathology |
|
|
- histopathology |
|
|
- vision-transformer |
|
|
- dinov2 |
|
|
- vision |
|
|
extra_gated_prompt: > |
|
|
|
|
|
  The OpenMidnight model weights and associated code are released under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at:

  http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

  Please note that the primary email used to sign up for your Hugging Face account must match your institutional email to receive approval. By downloading the OpenMidnight model weights, you attest that all information (affiliation, research use) is correct and up-to-date. Downloading the model requires prior registration on Hugging Face and agreeing to the terms of use.

  By using the OpenMidnight model, you acknowledge that you have read and understood these terms.
|
|
|
|
|
extra_gated_fields: |
|
|
  First and Last Name: text
  Institutional Email (must match your primary HuggingFace email): text
  I agree to the license and terms of use described above: checkbox
|
|
--- |
|
|
|
|
|
# OpenMidnight |
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
|
|
|
 |
|
|
|
|
|
<p><em>Overview of the OpenMidnight pathology foundation model</em></p> |
|
|
|
|
|
*State-of-the-art pathology foundation model trained on 12K slides* |
|
|
|
|
|
**Developed by [Sophont](https://sophont.med)** |
|
|
|
|
|
[Blog Post](https://sophont.med/blog/openmidnight) | [GitHub](https://github.com/MedARC-AI/OpenMidnight) | [Demo](https://huggingface.co/spaces/SophontAI/OpenMidnightDemo) |
|
|
|
|
|
</div> |
|
|
|
|
|
--- |
|
|
|
|
|
## What is OpenMidnight? |
|
|
|
|
|
OpenMidnight is our open replication of Kaiko.AI's Midnight, a 1.1 billion parameter Vision Transformer foundation model for computational pathology. OpenMidnight achieves state-of-the-art performance despite being trained on significantly less data than Kaiko.AI's Midnight or comparable models. |
|
|
|
|
|
**Key advantages:** |
|
|
|
|
|
- 🏆 **State-of-the-art performance**: Achieves 0.775 average score across 14 benchmarks |
|
|
- ⚡ **Efficient training**: Trained in ~83 hours on 8× H100 GPUs for only **$1,600 USD** (estimated) |
|
|
- 📊 **Minimal data requirements**: Uses only **12K slides from TCGA for training** |
|
|
- 🔓 **Fully open source**: Complete model weights, training code, and pipeline publicly available |
|
|
|
|
|
OpenMidnight is intended for computational pathology tasks including:
|
|
- Tumor detection and classification |
|
|
- Histological grading |
|
|
- Tissue segmentation |
|
|
- Margin assessment |
|
|
- Clinical outcome prediction |
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
## Model Description |
|
|
|
|
|
- **Developed by**: [Sophont](https://sophont.med) |
|
|
- **Model type**: Finetuned DINOv2 ViT-G for H&E pathology images |
|
|
- **Training data**: TCGA, 12K H&E whole slide images (WSIs)
|
|
- **Training repository**: https://github.com/MedARC-AI/OpenMidnight/tree/main |
|
|
|
|
|
## Usage |
|
|
|
|
|
### Requirements |
|
|
|
|
|
```bash |
|
|
pip install torch torchvision huggingface_hub |
|
|
``` |
|
|
|
|
|
**Recommended**: Run on GPU with mixed precision for optimal performance. |
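The mixed-precision pattern can be sketched as follows. A small `nn.Linear` stands in for the loaded OpenMidnight model (purely illustrative, not part of the release), since the inference pattern is identical:

```python
import torch

# Stand-in module for illustration only; substitute the loaded OpenMidnight model.
model = torch.nn.Linear(3 * 224 * 224, 1536).eval()
x = torch.randn(1, 3 * 224 * 224)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device), x.to(device)

# autocast runs matmul-heavy ops in half precision on GPU; CPU autocast uses bfloat16.
dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.no_grad(), torch.autocast(device_type=device, dtype=dtype):
    out = model(x)

print(out.shape)  # torch.Size([1, 1536])
```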
|
|
|
|
|
### Quick Start: Loading the Model |
|
|
|
|
|
```python |
|
|
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint to the Hugging Face cache
download_location = hf_hub_download(
    repo_id="SophontAI/OpenMidnight",
    filename="teacher_checkpoint_load.pt",
)
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitg14_reg", weights=None)

# Load the OpenMidnight weights
checkpoint = torch.load(download_location, map_location="cpu")

# Required because the DINOv2 hub model is baseline 392 resolution,
# while OpenMidnight is baseline 224 resolution
pos_embed = checkpoint["pos_embed"]
model.pos_embed = torch.nn.Parameter(pos_embed)
model.load_state_dict(checkpoint)
model.eval()

print(f"Model loaded with {sum(p.numel() for p in model.parameters()):,} parameters")
|
|
``` |
|
|
|
|
|
### Extracting Embeddings from Tissue Patches |
|
|
|
|
|
```python |
|
|
import torch
import torchvision.transforms as transforms
from PIL import Image

# Standard preprocessing for pathology images
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],  # ImageNet normalization
        std=[0.229, 0.224, 0.225],
    ),
])

# Load and preprocess an H&E tissue patch
image = Image.open("path/to/tissue_patch.jpg").convert("RGB")
input_tensor = transform(image).unsqueeze(0)  # Shape: [1, 3, 224, 224]

# Extract embeddings
with torch.no_grad():
    embeddings = model(input_tensor)  # Shape: [1, 1536]

print(f"Embedding shape: {embeddings.shape}")
print(f"Embedding norm: {embeddings.norm().item():.4f}")
|
|
``` |
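Patch embeddings are typically used downstream for retrieval, clustering, or linear probing. A minimal retrieval sketch, using random tensors as stand-ins for real patch embeddings:

```python
import torch
import torch.nn.functional as F

# Stand-in embeddings; in practice these come from the model as shown above.
torch.manual_seed(0)
query = torch.randn(1, 1536)   # embedding of one query patch
bank = torch.randn(100, 1536)  # embeddings of 100 reference patches

# Cosine similarity between the query patch and every reference patch
sims = F.cosine_similarity(query, bank)  # Shape: [100]
top_vals, top_idx = sims.topk(5)         # indices of the 5 most similar patches

print(top_idx.tolist())
```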
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
## Model Performance |
|
|
|
|
|
OpenMidnight achieves **competitive or superior performance** compared to models trained on 8-30× more data: |
|
|
|
|
|
### Benchmark Comparison |
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
|
|
|
 |
|
|
|
|
|
<p><em>Average performance of top pathology foundation models and baselines across computational pathology benchmarks</em></p> |
|
|
</div> |
|
|
|
|
|
### Detailed Benchmark Results |
|
|
|
|
|
<table class="performance-table" id="performance-table"> |
|
|
<thead> |
|
|
<tr> |
|
|
<th>Model</th> |
|
|
<th class="wsi-count">#WSIs</th> |
|
|
<th>PCam (10 shots)</th> |
|
|
<th>BACH</th> |
|
|
<th>BRACS</th> |
|
|
<th>BreakHis</th> |
|
|
<th>CRC-100K</th> |
|
|
<th>Gleason</th> |
|
|
<th>MHIST</th> |
|
|
<th>PCam</th> |
|
|
<th>Cam16 (small)</th> |
|
|
<th>Panda (small)</th> |
|
|
<th>CoNSeP</th> |
|
|
<th>MoNuSAC</th> |
|
|
<th>HEST</th> |
|
|
<th>Average</th> |
|
|
</tr> |
|
|
</thead> |
|
|
<tbody> |
|
|
<tr> |
|
|
<td class="model-name">OpenMidnight (Ours)</td><td class="wsi-count">12K</td><td>0.790</td><td>0.916</td><td>0.661</td><td><strong>0.873</strong></td><td>0.961</td><td>0.817</td><td>0.844</td><td>0.938</td><td><strong>0.946</strong></td><td>0.652</td><td>0.631</td><td>0.655</td><td>0.390</td><td><strong>0.775</strong></td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Midnight</td><td class="wsi-count">92K</td><td><strong>0.900</strong></td><td>0.906</td><td>0.642</td><td>0.850</td><td>0.964</td><td>0.809</td><td>0.825</td><td>0.951</td><td>0.831</td><td>0.633</td><td><strong>0.663</strong></td><td><strong>0.707</strong></td><td>0.384</td><td>0.774</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">UNI-2</td><td class="wsi-count">350K</td><td>0.887</td><td>0.914</td><td>0.661</td><td>0.860</td><td>0.965</td><td>0.778</td><td>0.823</td><td>0.949</td><td>0.868</td><td>0.659</td><td>0.628</td><td>0.644</td><td>0.414</td><td>0.773</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">UNI-2/392</td><td class="wsi-count">350K</td><td>0.821</td><td><strong>0.917</strong></td><td><strong>0.663</strong></td><td>0.829</td><td>0.965</td><td>0.791</td><td>0.849</td><td>0.927</td><td>0.858</td><td>0.653</td><td>0.629</td><td>0.659</td><td>0.407</td><td>0.767</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Virchow2</td><td class="wsi-count">3.1M</td><td>0.851</td><td>0.884</td><td>0.624</td><td>0.823</td><td>0.966</td><td>0.778</td><td><strong>0.861</strong></td><td>0.936</td><td>0.865</td><td>0.656</td><td>0.639</td><td>0.676</td><td>0.398</td><td>0.766</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Midnight 92k</td><td class="wsi-count">92K</td><td>0.876</td><td>0.896</td><td>0.616</td><td>0.789</td><td>0.966</td><td><strong>0.820</strong></td><td>0.811</td><td>0.950</td><td>0.861</td><td>0.625</td><td>0.629</td><td>0.656</td><td>0.392</td><td>0.761</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Midnight 12k</td><td class="wsi-count">12K</td><td>0.791</td><td>0.904</td><td>0.644</td><td>0.841</td><td>0.966</td><td>0.801</td><td>0.807</td><td>0.930</td><td>0.850</td><td>0.663</td><td>0.626</td><td>0.663</td><td>0.395</td><td>0.760</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">H-Optimus-0</td><td class="wsi-count">500K</td><td>0.824</td><td>0.757</td><td>0.615</td><td>0.808</td><td>0.956</td><td>0.771</td><td>0.842</td><td>0.942</td><td>0.838</td><td><strong>0.670</strong></td><td>0.644</td><td>0.685</td><td><strong>0.415</strong></td><td>0.751</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Kaiko-B8</td><td class="wsi-count">29K</td><td>0.786</td><td>0.872</td><td>0.617</td><td>0.825</td><td>0.957</td><td>0.748</td><td>0.828</td><td>0.917</td><td>0.831</td><td>0.642</td><td>0.643</td><td>0.686</td><td>0.373</td><td>0.748</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">TCGA-100M</td><td class="wsi-count">12K</td><td>0.774</td><td>0.864</td><td>0.615</td><td>0.779</td><td><strong>0.967</strong></td><td>0.799</td><td>0.792</td><td>0.927</td><td>0.852</td><td>0.667</td><td>0.622</td><td>0.656</td><td>0.396</td><td>0.747</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Prov-GigaPath</td><td class="wsi-count">171K</td><td>0.852</td><td>0.766</td><td>0.616</td><td>0.821</td><td>0.951</td><td>0.720</td><td>0.831</td><td>0.942</td><td>0.791</td><td>0.660</td><td>0.626</td><td>0.687</td><td>0.393</td><td>0.743</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Hibou-L</td><td class="wsi-count">1.1M</td><td>0.804</td><td>0.811</td><td>0.637</td><td>0.740</td><td>0.933</td><td>0.763</td><td>0.839</td><td><strong>0.952</strong></td><td>0.823</td><td>0.634</td><td>0.645</td><td>0.668</td><td>0.388</td><td>0.740</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">UNI</td><td class="wsi-count">100K</td><td>0.815</td><td>0.791</td><td>0.593</td><td>0.789</td><td>0.948</td><td>0.757</td><td>0.840</td><td>0.938</td><td>0.822</td><td>0.655</td><td>0.627</td><td>0.659</td><td>0.386</td><td>0.740</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">UNI/512</td><td class="wsi-count">100K</td><td>0.737</td><td>0.877</td><td>0.612</td><td>0.732</td><td>0.950</td><td>0.754</td><td>0.814</td><td>0.883</td><td>0.814</td><td>0.654</td><td>0.621</td><td>0.658</td><td>0.364</td><td>0.728</td> |
|
|
</tr> |
|
|
<tr> |
|
|
<td class="model-name">Phikon</td><td class="wsi-count">12K</td><td>0.820</td><td>0.735</td><td>0.568</td><td>0.713</td><td>0.942</td><td>0.729</td><td>0.804</td><td>0.923</td><td>0.809</td><td>0.644</td><td>0.623</td><td>0.644</td><td>0.367</td><td>0.717</td> |
|
|
</tr> |
|
|
<tr class="dotted-divider"> |
|
|
<td class="model-name">Phikon v2</td><td class="wsi-count">60K</td><td>0.741</td><td>0.734</td><td>0.600</td><td>0.716</td><td>0.939</td><td>0.755</td><td>0.784</td><td>0.893</td><td>0.803</td><td>0.631</td><td>0.626</td><td>0.645</td><td>0.375</td><td>0.711</td> |
|
|
</tr> |
|
|
<tr id="random-baseline-row"> |
|
|
<td class="model-name">DINOv2-giant (pretrained)</td><td class="wsi-count">0</td><td>0.719</td><td>0.725</td><td>0.583</td><td>0.832</td><td>0.935</td><td>0.744</td><td><strong>0.862</strong></td><td>0.874</td><td>0.507</td><td>0.382</td><td>0.564</td><td>0.614</td><td>0.342</td><td>0.668</td> |
|
|
</tr> |
|
|
<tr id="random-baseline-row"> |
|
|
<td class="model-name">DINOv2-giant (random)</td><td class="wsi-count">0</td><td>0.649</td><td>0.473</td><td>0.411</td><td>0.427</td><td>0.748</td><td>0.464</td><td>0.569</td><td>0.755</td><td>0.566</td><td>0.308</td><td>0.461</td><td>0.428</td><td>0.172</td><td>0.495</td> |
|
|
</tr> |
|
|
</tbody> |
|
|
</table> |
|
|
<p><em>Performance comparison of OpenMidnight to existing pathology foundation models on the eva+HEST benchmarks. Scores for existing models are taken from the Midnight paper. We report balanced accuracy for the classification tasks, Dice score for semantic segmentation (CoNSeP and MoNuSAC), and the average Pearson correlation for the nine HEST regression tasks. Only performance with the [CLS] token is reported. The best score per dataset is bolded.</em></p>
|
|
|
|
|
--- |
|
|
|
|
|
## Model Details |
|
|
|
|
|
### Architecture |
|
|
|
|
|
| Parameter | Value | |
|
|
|-----------|-------| |
|
|
| **Base Architecture** | ViT-G/14 | |
|
|
| **Parameters** | 1.1 billion | |
|
|
| **Patch Size** | 14×14 pixels | |
|
|
| **Input Resolution** | 224×224 pixels | |
|
|
| **Embedding Dimension** | 1536 | |
|
|
| **Number of Layers** | 40 | |
|
|
| **Number of Heads** | 16 | |
|
|
| **Initialization** | Meta's DINOv2 pre-trained weights | |
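The table above can be cross-checked with a little arithmetic. The 4 register tokens below are an assumption based on the `dinov2_vitg14_reg` variant used in the Quick Start:

```python
# Token count for a ViT-G/14 at 224x224 input; the "_reg" variant is assumed
# to add 4 register tokens on top of the [CLS] token.
patch_size = 14
image_size = 224
grid = image_size // patch_size          # 16 patches per side
patch_tokens = grid * grid               # 256 patch tokens
total_tokens = patch_tokens + 1 + 4      # + [CLS] + 4 register tokens

print(grid, patch_tokens, total_tokens)  # 16 256 261
```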
|
|
|
|
|
### Training Data |
|
|
|
|
|
- **Dataset**: TCGA (The Cancer Genome Atlas) |
|
|
- **Slides**: 12K FFPE H&E-stained whole slide images
|
|
- **Cancer Types**: 32 different cancer types |
|
|
- **Total Patches**: 96 million |
|
|
- **Unique Patches**: 29 million |
|
|
- **Stain Type**: Hematoxylin and Eosin (H&E) |
|
|
- **Preprocessing**: Non-informative patch filtering |
|
|
|
|
|
### Training Configuration |
|
|
|
|
|
- **Hardware**: 8× NVIDIA H100 GPUs (80GB each) |
|
|
- **Batch Size**: 48 per GPU (384 global batch size) |
|
|
- **Training Steps**: 250,000 |
|
|
- **Optimizer**: AdamW |
|
|
- **Learning Rate**: 2.0e-4 |
|
|
- **Regularization**: KDE regularizer for training stability |
|
|
- **Augmentation**: Hematoxylin-Eosin-DAB colorspace transformations |
|
|
- **Training Time**: ~83 hours wall-clock time (667 GPU-hrs) |
|
|
- **Training Cost**: ~$1,600 USD (at $2.50/H100/hour) |
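As a sanity check, the configuration figures above are internally consistent (the small gap between 83 h × 8 GPUs = 664 and the reported 667 GPU-hrs is presumably rounding):

```python
# Back-of-the-envelope check of the training configuration figures.
steps = 250_000
per_gpu_batch, n_gpus = 48, 8
global_batch = per_gpu_batch * n_gpus  # 384, as stated above
samples_seen = steps * global_batch    # 96,000,000 = the 96M training patches
gpu_hours = 83 * n_gpus                # ~664 GPU-hours of wall-clock compute
cost = 667 * 2.50                      # $1,667.50 at $2.50/H100/hour, i.e. ~$1,600

print(samples_seen, gpu_hours, cost)   # 96000000 664 1667.5
```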
|
|
|
|
|
--- |
|
|
|
|
|
## Blog Post |
|
|
|
|
|
For an in-depth discussion of OpenMidnight, **[read the full blog post](https://sophont.med/blog/openmidnight)**. |
|
|
|
|
|
--- |
|
|
|
|
|
## Contact |
|
|
|
|
|
For questions, feedback, or collaboration opportunities: |
|
|
|
|
|
- **📧 Email**: [contact@sophont.med](mailto:contact@sophont.med) |
|
|
- **🌐 Website**: [sophont.med](https://sophont.med) |
|
|
- **🐦 Twitter/X**: [@SophontAI](https://twitter.com/SophontAI) |
|
|
- **💬 GitHub Issues**: [github.com/MedARC-AI/OpenMidnight/issues](https://github.com/MedARC-AI/OpenMidnight/issues) |
|
|
|
|
|
We welcome: |
|
|
- Bug reports and feature requests |
|
|
- Contributions to the training code |
|
|
- Benchmark results on new datasets |
|
|
- Applications of OpenMidnight to novel tasks |
|
|
|
|
|
--- |
|
|
|
|
|
## Acknowledgments |
|
|
|
|
|
We thank Mikhail Karasikov for answering questions about Midnight. We thank Nicolas Känzig for answering questions about eva. We thank the members of MedARC and the broader research community for their feedback and support. We are very grateful to FAL AI for granting compute to support this open-source research. |
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use OpenMidnight in your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@article{kaplan2025openmidnight, |
|
|
author = {Kaplan, Daniel and Grandhi, Ratna Sagari and Lane, Connor and Warner, Benjamin and Abraham, Tanishq Mathew and Scotti, Paul S.}, |
|
|
title = {How to Train a State-of-the-Art Pathology Foundation Model with \$1.6k}, |
|
|
year = {2025}, |
|
|
url = {https://sophont.med/blog/openmidnight}, |
|
|
} |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## License |
|
|
|
|
|
This model is released under the **Apache 2.0 License**. |
|
|
|
|
|
--- |
|
|
|
|
|
## Terms of Use |
|
|
|
|
|
**Research Use**: This model is primarily intended for research purposes in computational pathology, medical imaging, and related fields. |
|
|
|
|
|
**Clinical Use**: This model is not intended for use in medical diagnosis, treatment, or prevention of disease in real patients. It should not be used as a substitute for professional medical advice.
|
|
|
|
|
**Responsible Use**: Users should: |
|
|
- Validate model performance on their specific use cases |
|
|
- Be aware of potential biases in the training data (TCGA) |
|
|
- Consider demographic and geographic limitations |
|
|
- Respect privacy rights and comply with applicable data protection laws |
|
|
- Follow applicable regulations and ethical guidelines |
|
|
|
|
|
--- |
|
|
|