---
extra_gated_heading: Access Terms for Athena-0
extra_gated_button_content: Submit and Accept Terms
extra_gated_fields:
  Full Name:
    type: text
  Institution / Organization:
    type: text
  Role / Position:
    type: select
    options:
      - Pathologist
      - Researcher
      - AI Engineer
      - Clinician
      - Student
      - Industry Partner
      - Other
  Country:
    type: country
  Interested in collaboration?:
    type: select
    options:
      - 'No'
      - Yes — data sharing, validation or co-commercialization
  Newsletter preference:
    type: select
    options:
      - Subscribe me to newsletter
      - Not interested
  I confirm that I will use this model strictly for research or educational purposes, in compliance with the license (CC BY-NC 4.0) and without direct clinical application.:
    type: checkbox
  I consent to the storage and processing of my personal information in accordance with GDPR for the purpose of controlled model access and collaboration follow-up.:
    type: checkbox
---
# Model Card: Athena-0 (Histopathology Foundation Model)

Athena-0 is a ViT-G/14 foundation model trained on only ~115 million patches drawn from a diverse set of ~282,500 H&E slides. This strategy emphasizes slide diversity over raw patch volume, and achieves near state-of-the-art performance on both tile- and slide-level downstream tasks.
## Model Details
- Model type: ViT-G/14
- Params: ~1.1B
- Input: RGB patches 224×224
- Output: 1536-dim feature vector (CLS token)
## Training Data
- Slides: ~282,500 H&E WSIs
- Patches: ~115 million
- Diversity: 25 countries, multiple institutions, 8 scanner models, broad organ coverage
## Benchmark
| Model | BACH | BreakHis | CRC | Gleason | MHIST | PCam | PCam/test |
|---|---|---|---|---|---|---|---|
| Athena-0 | 0.865 | 0.789 | 0.970 | 0.740 | 0.852 | 0.944 | 0.951 |
| Virchow2 | 0.883 | 0.821 | 0.967 | 0.783 | 0.861 | 0.933 | 0.938 |
| UNI2 | 0.915 | 0.859 | 0.965 | 0.775 | 0.824 | 0.944 | 0.950 |
| H-optimus-0 | 0.759 | 0.801 | 0.955 | 0.770 | 0.843 | 0.932 | 0.943 |
| Midnight-12 | 0.904 | 0.819 | 0.966 | 0.800 | 0.804 | 0.929 | 0.929 |
| Prov-GigaPath | 0.759 | 0.827 | 0.951 | 0.724 | 0.829 | 0.935 | 0.945 |
| UNI | 0.785 | 0.785 | 0.944 | 0.750 | 0.843 | 0.936 | 0.937 |
| hibou-L | 0.810 | 0.735 | 0.932 | 0.764 | 0.839 | 0.939 | 0.955 |
| Phikon-v2 | 0.732 | 0.713 | 0.939 | 0.757 | 0.777 | 0.918 | 0.894 |
| Lunit | 0.783 | 0.742 | 0.940 | 0.750 | 0.781 | 0.894 | 0.897 |
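Tile-level results like those above are typically obtained by fitting a lightweight classifier (a linear probe) on frozen encoder features. The exact evaluation protocol is not specified in this card, so the following is only a minimal sketch of that idea, using synthetic stand-ins for the 1536-dim Athena-0 embeddings and binary tile labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen embeddings and tile labels;
# in practice X comes from the encoder, y from the benchmark dataset.
X = rng.normal(size=(400, 1536)).astype(np.float32)
y = rng.integers(0, 2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Linear probe: logistic regression on frozen features, encoder untouched.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"probe accuracy: {acc:.3f}")
```

Because the encoder stays frozen, probe quality directly reflects how linearly separable the benchmark classes are in the feature space.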
## Paper

## Install

This model requires the official DINOv2 repository. Clone it and add it to your `PYTHONPATH`:

```bash
git clone https://github.com/facebookresearch/dinov2.git
export PYTHONPATH="$PYTHONPATH:$(pwd)/dinov2"
```
## Usage

```python
import sys

import torch
from huggingface_hub import snapshot_download
from torchvision import transforms

# Download the model code and weights from the Hugging Face Hub.
local_dir = snapshot_download(
    "PAICON-GmbH/Athena-0",
    revision="main",
    allow_patterns=["athena0.py", "model_39999.safetensors", "config.json"],
)

# Make the downloaded athena0.py importable.
sys.path.insert(0, local_dir)
from athena0 import Athena0

model, transform = Athena0.from_pretrained(
    weights_path=f"{local_dir}/model_39999.safetensors", device="cuda"
)
model.eval()

# Embed a single random 224x224 RGB patch.
inp = torch.rand(3, 224, 224)
inp = transforms.ToPILImage()(inp)
inp = transform(inp).unsqueeze(0).to("cuda")

with torch.inference_mode():
    features = model(inp)  # CLS features, shape (1, 1536)

assert features.shape == (1, 1536)
```
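In practice you will embed many patches per slide, and the single-patch call above extends directly to batches. A minimal sketch of the batched pattern, using a hypothetical stand-in encoder with the same interface (replace `DummyEncoder` with the real model loaded as shown above):

```python
import torch
from torch import nn


class DummyEncoder(nn.Module):
    """Stand-in with Athena-0's interface: batch of patches in,
    1536-dim features out. Not the real model."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.zeros(x.shape[0], 1536)


model = DummyEncoder().eval()

# Stack several transformed 224x224 patches into one batch for throughput.
patches = torch.rand(8, 3, 224, 224)

with torch.inference_mode():
    feats = model(patches)

assert feats.shape == (8, 1536)
```

For whole-slide inference, the same loop is usually wrapped in a `torch.utils.data.DataLoader` so patch extraction and GPU embedding overlap.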