Use of the raw data without the consent of Xiangya Hospital of Central South University or its Pathology Department is prohibited. Should any data be used for commercial purposes, we will hold the user legally accountable. We support the following 21 pre-trained foundation models for extracting feature representations of WSIs. Please contact us by email before using. (Strongly recommended!!)
🔨 1. Installation:
- Create an environment: `conda create -n "trident" python=3.10`, then activate it: `conda activate trident`.
- Clone the repository: `git clone https://github.com/mahmoodlab/trident.git && cd trident`.
- Install locally: `pip install -e .`.
Additional packages may be required to load some pretrained models. Follow error messages for instructions.
🔨 2. Running Trident:
Already familiar with WSI processing? Perform segmentation, patching, and UNI feature extraction from a directory of WSIs with:
python run_batch_of_slides.py --task all --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
Feeling cautious?
Run this command to perform all processing steps for a single slide:
python run_single_slide.py --slide_path ./wsis/xxxx.svs --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
Or follow step-by-step instructions:
Step 1: Tissue Segmentation: Segments tissue vs. background from a dir of WSIs
- Command:
python run_batch_of_slides.py --task seg --wsi_dir ./wsis --job_dir ./trident_processed --gpu 0 --segmenter hest
- `--task seg`: Specifies that you want to do tissue segmentation.
- `--wsi_dir ./wsis`: Path to the dir with your WSIs.
- `--job_dir ./trident_processed`: Output dir for processed results.
- `--gpu 0`: Uses GPU with index 0.
- `--segmenter`: Segmentation model. Defaults to `hest`. Switch to `grandqc` for fast H&E segmentation. Add the option `--remove_artifacts` for additional artifact cleanup.
- Outputs:
  - WSI thumbnails in `./trident_processed/thumbnails`.
  - WSI thumbnails with tissue contours in `./trident_processed/contours`.
  - GeoJSON files containing tissue contours in `./trident_processed/contours_geojson`. These can be opened in QuPath for editing/quality control, if necessary.
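The GeoJSON contours can also be inspected programmatically. A minimal sketch using only the standard library, with an illustrative in-memory FeatureCollection standing in for a real file from the contours output dir (the `objectType` property and exact schema are assumptions; real files may differ):

```python
import json

# Illustrative GeoJSON FeatureCollection mimicking a tissue-contour file;
# a real file would be loaded from the contours_geojson output directory.
geojson_text = json.dumps({
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"objectType": "tissue"},  # property name is an assumption
        "geometry": {
            "type": "Polygon",
            "coordinates": [[[100, 200], [1500, 200], [1500, 1800],
                             [100, 1800], [100, 200]]],
        },
    }],
})

contours = json.loads(geojson_text)
# First (and only) polygon ring: vertices in level-0 pixel coordinates.
ring = contours["features"][0]["geometry"]["coordinates"][0]
xs, ys = zip(*ring)
bbox = (min(xs), min(ys), max(xs), max(ys))
print(bbox)  # bounding box of the tissue contour
```

The same parsing works on files edited in QuPath, since QuPath reads and writes standard GeoJSON.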
Step 2: Tissue Patching: Extracts patches from segmented tissue regions at a specific magnification.
- Command:
python run_batch_of_slides.py --task coords --wsi_dir ./wsis --job_dir ./trident_processed --mag 20 --patch_size 256 --overlap 0
- `--task coords`: Specifies that you want to do patching.
- `--wsi_dir ./wsis`: Path to the dir with your WSIs.
- `--job_dir ./trident_processed`: Output dir for processed results.
- `--mag 20`: Extracts patches at 20x magnification.
- `--patch_size 256`: Each patch is 256x256 pixels.
- `--overlap 0`: Patches overlap by 0 pixels (always an absolute number in pixels, e.g., `--overlap 128` for 50% overlap with 256x256 patches).
- Outputs:
  - Patch coordinates as h5 files in `./trident_processed/20x_256px/patches`.
  - WSI thumbnails annotated with patch borders in `./trident_processed/20x_256px/visualization`.
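The patch-coordinate h5 files can be read with h5py. A minimal self-contained sketch that writes and reads a small synthetic file; the dataset key `coords` and the `(x, y)` top-left convention are assumptions about the file layout, not confirmed by this card:

```python
import h5py
import numpy as np

# Synthetic stand-in for a Trident patch-coordinate file; the "coords" key
# is an assumption about the on-disk layout.
path = "demo_patches.h5"
with h5py.File(path, "w") as f:
    f.create_dataset(
        "coords",
        data=np.array([[0, 0], [256, 0], [0, 256]], dtype=np.int64),
    )

with h5py.File(path, "r") as f:
    coords = f["coords"][:]

# One row per patch: assumed top-left (x, y) of each 256x256 patch.
print(coords.shape)
```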
Step 3a: Patch Feature Extraction: Extracts features from tissue patches using a specified encoder
- Command:
python run_batch_of_slides.py --task feat --wsi_dir ./wsis --job_dir ./trident_processed --patch_encoder uni_v1 --mag 20 --patch_size 256
- `--task feat`: Specifies that you want to do feature extraction.
- `--wsi_dir ./wsis`: Path to the dir with your WSIs.
- `--job_dir ./trident_processed`: Output dir for processed results.
- `--patch_encoder uni_v1`: Uses the `UNI` patch encoder. See below for the list of supported models.
- `--mag 20`: Features are extracted from patches at 20x magnification.
- `--patch_size 256`: Patches are 256x256 pixels in size.
- Outputs:
  - Features saved as h5 files in `./trident_processed/20x_256px/features_uni_v1` (shape: `(n_patches, feature_dim)`).
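Loading a patch-feature file follows the same pattern. A sketch with synthetic data, assuming the features live under a `features` key (an assumption about the file layout) and using UNI's 1024-dim embeddings from the table below:

```python
import h5py
import numpy as np

# Synthetic stand-in for a patch-feature file produced by --task feat;
# the "features" key is an assumption about the on-disk layout.
n_patches, feature_dim = 4, 1024  # uni_v1 embeds each patch into 1024 dims
with h5py.File("demo_features.h5", "w") as f:
    f.create_dataset(
        "features",
        data=np.random.rand(n_patches, feature_dim).astype(np.float32),
    )

with h5py.File("demo_features.h5", "r") as f:
    feats = f["features"][:]

print(feats.shape)  # (n_patches, feature_dim)
```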
Trident supports 21 patch encoders, loaded via a patch-level encoder_factory. Models requiring specific installations will return error messages with additional instructions. Gated models on HuggingFace require access requests.
| Patch Encoder | Embedding Dim | Args | Link |
|---|---|---|---|
| UNI | 1024 | --patch_encoder uni_v1 --patch_size 256 --mag 20 | MahmoodLab/UNI |
| UNI2-h | 1536 | --patch_encoder uni_v2 --patch_size 256 --mag 20 | MahmoodLab/UNI2-h |
| CONCH | 512 | --patch_encoder conch_v1 --patch_size 512 --mag 20 | MahmoodLab/CONCH |
| CONCHv1.5 | 768 | --patch_encoder conch_v15 --patch_size 512 --mag 20 | MahmoodLab/conchv1_5 |
| Virchow | 2560 | --patch_encoder virchow --patch_size 224 --mag 20 | paige-ai/Virchow |
| Virchow2 | 2560 | --patch_encoder virchow2 --patch_size 224 --mag 20 | paige-ai/Virchow2 |
| Phikon | 768 | --patch_encoder phikon --patch_size 224 --mag 20 | owkin/phikon |
| Phikon-v2 | 1024 | --patch_encoder phikon_v2 --patch_size 224 --mag 20 | owkin/phikon-v2 |
| Prov-Gigapath | 1536 | --patch_encoder gigapath --patch_size 256 --mag 20 | prov-gigapath |
| H-Optimus-0 | 1536 | --patch_encoder hoptimus0 --patch_size 224 --mag 20 | bioptimus/H-optimus-0 |
| H-Optimus-1 | 1536 | --patch_encoder hoptimus1 --patch_size 224 --mag 20 | bioptimus/H-optimus-1 |
| MUSK | 1024 | --patch_encoder musk --patch_size 384 --mag 20 | xiangjx/musk |
| Midnight-12k | 3072 | --patch_encoder midnight12k --patch_size 224 --mag 20 | kaiko-ai/midnight |
| Kaiko | 384/768/1024 | --patch_encoder {kaiko-vits8, kaiko-vits16, kaiko-vitb8, kaiko-vitb16, kaiko-vitl14} --patch_size 256 --mag 20 | 1aurent/kaikoai-models-66636c99d8e1e34bc6dcf795 |
| Lunit | 384 | --patch_encoder lunit-vits8 --patch_size 224 --mag 20 | 1aurent/vit_small_patch8_224.lunit_dino |
| Hibou | 1024 | --patch_encoder hibou_l --patch_size 224 --mag 20 | histai/hibou-L |
| CTransPath-CHIEF | 768 | --patch_encoder ctranspath --patch_size 256 --mag 10 | — |
| ResNet50 | 1024 | --patch_encoder resnet50 --patch_size 256 --mag 20 | — |
Step 3b: Slide Feature Extraction: Extracts slide embeddings using a slide encoder. Will also automatically extract the right patch embeddings.
- Command:
python run_batch_of_slides.py --task feat --wsi_dir ./wsis --job_dir ./trident_processed --slide_encoder titan --mag 20 --patch_size 512
- `--task feat`: Specifies that you want to do feature extraction.
- `--wsi_dir ./wsis`: Path to the dir containing WSIs.
- `--job_dir ./trident_processed`: Output dir for processed results.
- `--slide_encoder titan`: Uses the `Titan` slide encoder. See below for supported models.
- `--mag 20`: Features are extracted from patches at 20x magnification.
- `--patch_size 512`: Patches are 512x512 pixels in size.
- Outputs:
  - Slide features saved as h5 files in `./trident_processed/20x_512px/slide_features_titan` (shape: `(feature_dim,)`).
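Unlike patch features, a slide-feature file holds a single 1-D embedding per WSI. A sketch with synthetic data (the `features` key and the 768-dim size are assumptions used for illustration):

```python
import h5py
import numpy as np

# Synthetic stand-in for a slide-feature file; the "features" key and the
# embedding dimension are assumptions for illustration.
with h5py.File("demo_slide_features.h5", "w") as f:
    f.create_dataset("features", data=np.random.rand(768).astype(np.float32))

with h5py.File("demo_slide_features.h5", "r") as f:
    slide_emb = f["features"][:]

print(slide_emb.shape)  # (feature_dim,) -- one vector per slide
```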
Trident supports 6 slide encoders, loaded via a slide-level encoder_factory. Models requiring specific installations will return error messages with additional instructions. Gated models on HuggingFace require access requests.
| Slide Encoder | Patch Encoder | Args | Link |
|---|---|---|---|
| Threads | conch_v15 | --slide_encoder threads --patch_size 512 --mag 20 | (Coming Soon!) |
| Titan | conch_v15 | --slide_encoder titan --patch_size 512 --mag 20 | MahmoodLab/TITAN |
| PRISM | virchow | --slide_encoder prism --patch_size 224 --mag 20 | paige-ai/Prism |
| CHIEF | ctranspath | --slide_encoder chief --patch_size 256 --mag 10 | CHIEF |
| GigaPath | gigapath | --slide_encoder gigapath --patch_size 256 --mag 20 | prov-gigapath |
| Madeleine | conch_v1 | --slide_encoder madeleine --patch_size 256 --mag 10 | MahmoodLab/madeleine |
If your task includes multiple slides per patient, you can generate patient-level embeddings by: (1) processing each slide independently and taking their average slide embedding (late fusion) or (2) pooling all patches together and processing that as a single "pseudo-slide" (early fusion). For an implementation of both fusion strategies, please check out our sister repository Patho-Bench.
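The two fusion strategies above can be sketched in a few lines of numpy (embedding dimensions and patch counts here are arbitrary stand-ins; real embeddings would come from the h5 outputs):

```python
import numpy as np

# Late fusion: average the per-slide embeddings of one patient.
slide_embs = [np.random.rand(768) for _ in range(3)]  # e.g. three slide embeddings
patient_late = np.mean(slide_embs, axis=0)            # one patient-level vector

# Early fusion: pool all patch features into a single "pseudo-slide",
# which would then be passed through the slide encoder as one slide.
patch_feats = [np.random.rand(50, 768), np.random.rand(80, 768)]
pseudo_slide = np.concatenate(patch_feats, axis=0)

print(patient_late.shape, pseudo_slide.shape)
```

Late fusion keeps each slide's context separate until the very end; early fusion lets the slide encoder attend across patches from all slides at once.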
Here, we provide the feature representations of WSIs extracted by CTransPath. Link: https://pan.baidu.com/s/1zpt7D_XNgqZpLnUyOmtkgA?pwd=8yn6 (password: 8yn6).
If you require features extracted by other pre-trained models, please contact panlr@hnu.edu.cn (strongly recommended).
PathGene Datasets
PathGene-CSU
Overview
PathGene-CSU comprises whole-slide images and matched genomic labels from 1,576 lung cancer patients, including adenocarcinoma and squamous cell carcinoma subtypes.
Genomic Annotation
All samples underwent NGS profiling. For each driver gene (TP53, EGFR, KRAS, ALK), the dataset provides:
- Mutation status: presence/absence of any mutation.
- Mutation subtype (TP53 only): wild-type, nonsense, missense.
- Mutational exon:
- TP53: EX5, EX6, EX7, EX8, other
- EGFR: EX19, EX20, EX21
- KRAS: EX2, other (EX3+ merged)
- ALK: EML4–ALK fusion, other (non-fusion)
- Tumor Mutation Burden (TMB): Low vs. High (9 mut/Mb cutoff)
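The TMB binarization is a simple threshold. A minimal sketch using the 9 mut/Mb cutoff stated above; whether the boundary value itself counts as High is an assumption not specified by this card:

```python
# Binarize TMB at the PathGene-CSU cutoff (9 mut/Mb); PathGene-TCGA-LUAD
# uses 10 mut/Mb instead. Treating the exact cutoff value as "High" is an
# assumption about the boundary convention.
def tmb_label(tmb_mut_per_mb: float, cutoff: float = 9.0) -> str:
    return "High" if tmb_mut_per_mb >= cutoff else "Low"

print(tmb_label(4.2))   # Low
print(tmb_label(12.0))  # High
```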
Prediction Tasks
- Driver gene mutation status (early genetic screening)
- Driver gene subtype & exon (precision genetic profiling)
- TMB status (immune-therapy response prediction)
- Future extensions: microenvironmental biomarkers
PathGene-TCGA-LUAD
Overview
PathGene-TCGA-LUAD contains 510 histopathology slides from 448 TCGA lung adenocarcinoma patients. Slides from the same tumor share identical genomic labels.
Genomic Annotation
Using cBioPortal data:
- Mutation status (TP53, EGFR, KRAS, ALK): 0 = wild-type, 1 = mutant
- TMB: Low vs. High (10 mut/Mb cutoff)
- TP53 subtype: wild-type, nonsense, missense
- Exon labels: Not provided (low per-exon counts)
Prediction Tasks
- Driver gene mutation status
- TMB status
## Interpretability Analysis
Figure 1. Interpretability analysis of WSIs predicted by TransMIL in patients without target-gene mutations, in high and low TMB states. For WSIs with high and low TMB, the regions that drew pathologists' attention were first visualized and analyzed; the attention prediction heatmap at 20x was then visualized using TransMIL; and finally the TME density map was visualized. Given that there was no statistical difference between the high-TMB and low-TMB groups, key biomarkers associated with high and low TMB could not be calculated. a: Interpretability analysis of WSIs predicted by TransMIL in the high TMB state. b: Interpretability analysis of WSIs predicted by TransMIL in the low TMB state.
License Licensed under the MIT License.