---
tags:
- image-feature-extraction
- cell representation
- histology
- medical imaging
- self-supervised learning
- vision transformer
- foundation model
license: mit
---
# Model card for LEMON
`LEMON` is an open-source foundation model for single-cell histology images. The model is a Vision Transformer (ViT-S/8) trained with self-supervised learning on 10 million histology cell images sampled from 10,000 TCGA slides.
It is described in detail in its [OpenReview paper](https://openreview.net/pdf?id=JAalsmy7bZ).
`LEMON` can be used to extract robust features from single-cell histology images for various downstream applications, such as gene expression prediction or cell type classification.
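As a toy illustration of one such downstream task, the sketch below fits a linear probe (a scikit-learn logistic regression) on LEMON features for cell type classification. This is a minimal sketch, not part of the LEMON API: the `features` and `labels` arrays here are synthetic placeholders standing in for embeddings extracted as shown in the next section.
```python
# Minimal linear-probe sketch on LEMON features.
# Assumption: `features` is an (N, 384) array of LEMON embeddings and
# `labels` holds per-cell class indices; both are synthetic stand-ins here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 384))  # placeholder LEMON embeddings
labels = rng.integers(0, 4, size=1000)   # placeholder cell type labels

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"linear probe accuracy: {probe.score(X_test, y_test):.3f}")
```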
## How to use it to extract features.
The code below runs inference on a single cell image. `LEMON` expects 40x40 pixel images extracted at 0.25 microns per pixel (40X magnification).
```python
import torch
from pathlib import Path
from torchvision.transforms import ToPILImage

from model import prepare_transform, get_vit_feature_extractor

device = "cpu"
model_name = "vits8"
target_cell_size = 40
weight_path = Path("lemon.pth.tar")
stats_path = Path("mean_std.json")

# Model: build the normalization transform and load the pretrained ViT-S/8 backbone.
transform = prepare_transform(stats_path, size=target_cell_size)
model = get_vit_feature_extractor(weight_path, model_name, img_size=target_cell_size)
model.eval()
model.to(device)

# Data: a random image standing in for a 40x40 cell crop.
image = ToPILImage()(torch.rand(3, target_cell_size, target_cell_size))

# Inference: float16 autocast on CUDA; bfloat16 is the standard autocast dtype on CPU.
autocast_dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.autocast(device_type=device, dtype=autocast_dtype):
    with torch.inference_mode():
        features = model(transform(image).unsqueeze(0).to(device))
assert features.shape == (1, 384)  # ViT-S embedding size
```
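For more than a handful of cells, batching crops through the model is usually faster. The sketch below is one way to do it, reusing the `model`, `transform`, and `device` objects from the snippet above; the `cells/` folder of 40x40 PNG crops is a hypothetical example layout, not part of the repository.
```python
# Batched feature extraction sketch; assumes `model`, `transform`, and
# `device` from the snippet above. The `cells/` folder of 40x40 PNG
# crops is a hypothetical example layout.
from pathlib import Path
from PIL import Image
import torch

paths = sorted(Path("cells").glob("*.png"))
batch_size = 64
all_features = []

with torch.inference_mode():
    for start in range(0, len(paths), batch_size):
        chunk = paths[start:start + batch_size]
        batch = torch.stack(
            [transform(Image.open(p).convert("RGB")) for p in chunk]
        ).to(device)
        all_features.append(model(batch).cpu())

features = torch.cat(all_features)  # shape: (num_cells, 384)
```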
## BibTeX entry and citation info.
If you find this repository useful, please consider citing our work:
```bibtex
@inproceedings{anonymous2025lemon,
  title={{LEMON} - a foundation model for single-cell nuclear morphologies for digital pathology},
  author={Anonymous},
  booktitle={Submitted to The Fourteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=JAalsmy7bZ},
  note={under review}
}
```