---
license: apache-2.0
tags:
  - medical
  - pathology
  - histopathology
language:
  - en
pretty_name: >-
  A Protocol for Evaluating Robustness to H&E Staining Variation in
  Computational Pathology Models
---

# A Protocol for Evaluating Robustness to H&E Staining Variation in Computational Pathology Models

This repository provides the stain references, pretrained models, and experimental results required to:

1. **Define custom staining references using our PLISM reference library**
2. **Reproduce our published controlled staining robustness experiments**

👉 **Code repository:** https://github.com/lely475/staining-robustness-evaluation/tree/main

👉 **Associated publication:** [Paper](https://arxiv.org/abs/2603.12886)

---

## Overview: How This Project Is Structured

The project is split into a [code repository on GitHub](https://github.com/lely475/staining-robustness-evaluation/tree/main) and this repository, which provides the precomputed results. You can reproduce everything or selectively reuse precomputed artifacts:

| Work Package | Code (GitHub) | Precomputed Results (This Repo) |
|---------------|---------------------|---------------------|
| PLISM stain characterization | `stain_vector_concentration_extraction/compute_stats.py`, `stain_vector_concentration_extraction/unmix_tiles.py` | `plism-wsi_stain_references` |
| SurGen stain characterization | `stain_vector_concentration_extraction/unmix_wsi_v1.py` | `surgen_stain_properties` |
| Sample ABMIL training hyperparameters | `controlled_staining_simulations/simulation_settings.ipynb` | `MSI_classification_models/fixed_splits_n=300`, `MSI_classification_models/fixed_simulation_hps_n=300.csv` |
| ABMIL training (n=300 models) | `controlled_staining_simulations/unmix_wsi_v1.py` | `MSI_classification_models/trained_models` |
| Extract features under simulated reference staining conditions | `controlled_staining_simulations/extract_features.py` | Not provided; follow the steps on GitHub. |
| Apply models to extracted features | `controlled_staining_simulations/apply_simulated_models.py`, `controlled_staining_simulations/apply_public_models.py` | `exp_results` |
| Evaluate results | `controlled_staining_simulations/evaluate_results.ipynb` | See the paper for results. |

**Quick Navigation**:

- Want to define new stain simulations? → [Define Custom Staining References](#1-define-custom-staining-references)
- Want to reproduce our results? → [Reproduce Published Results](#2-reproduce-our-published-results)
- Looking for pretrained models? → [Trained ABMIL Models](#c-trained-abmil-models)
- Looking for experiment results? → [Experiment Results](#d-controlled-staining-simulation-results)

---

## Repository Structure

### a) Reference Stain Library (PLISM)

`plism-wsi_stain_references/`: PLISM staining references.

#### Contents

- `img_stats/` – Tile-level quality metrics
- `intensities/` – Extracted H&E intensities
- `stain_vectors/` – Extracted H&E stain vectors

Each `.npz` file in `stain_vectors/` contains:

- `stainMatrix` (3×3):
  - `[:,0]` – Hematoxylin vector
  - `[:,1]` – Eosin vector
  - `[:,2]` – Residual component

---

### b) Characterized Test Set (SurGen)

`surgen_stain_properties/`: Slide-level stain properties extracted from SurGen WSIs.
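The `stainMatrix` layout described above applies to the `.npz` files in both `plism-wsi_stain_references/stain_vectors/` and `surgen_stain_properties/stain_vectors/`. A minimal sketch of reading one (here written to an in-memory buffer with illustrative numeric values that are *not* taken from the library; in practice you would `np.load` a file from a `stain_vectors/` folder):

```python
import io
import numpy as np

# Illustrative 3x3 stain matrix in optical-density space
# (values loosely follow common H&E reference vectors; not from PLISM).
buf = io.BytesIO()
np.savez(buf, stainMatrix=np.array([[0.65, 0.07, 0.27],
                                    [0.70, 0.99, 0.57],
                                    [0.29, 0.11, 0.78]]))
buf.seek(0)

# In practice: data = np.load("plism-wsi_stain_references/stain_vectors/<condition>.npz")
data = np.load(buf)
M = data["stainMatrix"]
hematoxylin = M[:, 0]  # column 0: Hematoxylin stain vector
eosin = M[:, 1]        # column 1: Eosin stain vector
residual = M[:, 2]     # column 2: residual component
```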
#### Contents

- `intensities/` – Extracted H&E intensities
- `stain_vectors/` – Extracted H&E stain vectors

---

### c) Trained ABMIL Models

`MSI_classification_models/`: Provides 306 pretrained MSI classification models and the files required to reproduce the 300 ABMIL models.

#### Contents

Pretrained MSI classification backbones:

- `NIEHEUS2023/`: Pretrained model from [Paper](https://www.sciencedirect.com/science/article/pii/S2666379123000861?via%3Dihub), [Original Repo](https://github.com/KatherLab/crc-models-2022/tree/main/Quasar_models/Wang%2BattMIL/isMSIH)
- `WAGNER2023/`: Pretrained model from [Paper](https://www.sciencedirect.com/science/article/pii/S1535610823002787?via%3Dihub), [Original Repo](https://github.com/peng-lab/HistoBistro/tree/main/CancerCellCRCTransformer/trained_models)

Note: We provide these pretrained models for faster access; they are also available in their original repositories, and all credit and ownership goes to the model creators.

Simulated ABMIL models (n = 300):

- `fixed_splits_n=300/` – Fixed train/val splits
- `fixed_simulation_hps_n=300.csv` – Sampled hyperparameters
- `trained_models/` – 300 trained ABMIL checkpoints

---

### d) Controlled Staining Simulation Results

`exp_results/`: Slide-wise MSI classification predictions for **306 models** across five staining conditions.

#### Contents

Folders contain detailed per-model, slide-level MSI classification results for each staining condition:

- `reference/`: Original dataset
- `intensity=GV_AT2_stain=None/`: High intensity condition
- `intensity=KRH_GT450_stain=None/`: Low intensity condition
- `intensity=None_stain=GV_GT450/`: High H&E color similarity condition
- `intensity=None_stain=HRH_S60/`: Low H&E color similarity condition

AUC across the whole dataset, per model and condition:

- `performance_auc_reference.csv`: Original dataset
- `performance_auc_intensity=GV_AT2_stain=None.csv`: High intensity condition
- `performance_auc_intensity=KRH_GT450_stain=None.csv`: Low intensity condition
- `performance_auc_intensity=None_stain=GV_GT450.csv`: High H&E color similarity condition
- `performance_auc_intensity=None_stain=HRH_S60.csv`: Low H&E color similarity condition

Robustness metric results for each model, computed as the min-max AUC range across the five staining conditions:

- `robustness_auc_minmax_range.csv`

---

# 1. Define Custom Staining References

The PLISM library contains stain properties for multiple **staining protocol × scanner device** combinations (e.g., `GV_GT450`, `HRH_S60`). Each combination represents a distinct real-world H&E appearance captured across controlled staining and digitization settings. The figure below shows the staining properties of each condition, enabling custom staining reference selection. For more details, please refer to our [publication](https://arxiv.org/abs/2603.12886).

PLISM stain properties

Figure: Staining characteristics of the PLISM reference library (for each unique staining condition-device combination). a) Intensity of Hematoxylin and Eosin, b) Angle between the H&E stain vectors in OD space, c) Distribution of H&E hues, measured as hue h° in CIELab space; left violin: Hematoxylin, right violin: Eosin. Marker colors correspond to RGB stain colors. The reference conditions selected in our paper (low and high intensity; low and high H&E color similarity) are circled in red and green respectively and highlighted with a black frame.

You can:

- Reuse the published reference conditions
- Select alternative PLISM stain × device combinations
- Define new intensity or stain vector targets for custom simulations

To run controlled staining simulations based on your selected staining references, please refer to our [GitHub repository](https://github.com/lely475/staining-robustness-evaluation/tree/main).

---

# 2. Reproduce Our Published Results

To reproduce the full pipeline:

1. Download this repository.
2. Clone the GitHub repository.
3. Re-run or verify each work package as needed.
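The first two steps could look like the following (a sketch, one way of fetching the repositories; the `huggingface-cli` invocation and the local directory name are assumptions, adjust them to your setup):

```shell
# Step 1: download this dataset repository (precomputed artifacts);
# the target directory name is arbitrary.
huggingface-cli download CTPLab-DBE-UniBas/staining-robustness-evaluation \
  --repo-type dataset --local-dir staining-robustness-artifacts

# Step 2: clone the code repository
git clone https://github.com/lely475/staining-robustness-evaluation.git
```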
You can reproduce everything or selectively reuse precomputed artifacts; please refer to the [Overview Table](#overview-how-this-project-is-structured) for navigating the different project components.

---

## Citation

If you use this repository, please cite: [A protocol for evaluating robustness to H&E staining variation in computational pathology models](https://arxiv.org/abs/2603.12886)

```
@misc{schönpflug2026protocolevaluatingrobustnesshe,
      title={A protocol for evaluating robustness to H&E staining variation in computational pathology models},
      author={Lydia A. Schönpflug and Nikki van den Berg and Sonali Andani and Nanda Horeweg and Jurriaan Barkey Wolf and Tjalling Bosse and Viktor H. Koelzer and Maxime W. Lafarge},
      year={2026},
      eprint={2603.12886},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.12886},
}
```

## References

This repository utilizes and builds on:

**Datasets:**

* PLISM dataset: Ochi, M., Komura, D., Onoyama, T. et al. Registered multi-device/staining histology image dataset for domain-agnostic machine learning models. Sci Data 11, 330 (2024). [Link](https://doi.org/10.1038/s41597-024-03122-5)
* SurGen dataset: Myles, C., Um, I.H., Marshall, C. et al. 1020 H&E-stained whole-slide images with survival and genetic markers. GigaScience, Volume 14 (2025). [Link](https://academic.oup.com/gigascience/article/doi/10.1093/gigascience/giaf086/8277208?login=true)
* TCGA COADREAD: WSIs: [GDC Portal](https://portal.gdc.cancer.gov/); MSI status from cBioPortal: [TCGA COADREAD Pan-Cancer Atlas (2018)](https://www.cbioportal.org/study/summary?id=coadread_tcga_pan_can_atlas_2018), [TCGA COADREAD Nature (2012)](https://www.cbioportal.org/study/summary?id=coadread_tcga_pub). The results shown here are in whole or part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga.

**Foundation models:**

- UNI2-h: Chen, R.J., Ding, T., Lu, M.Y., Williamson, D.F.K. et al. Towards a general-purpose foundation model for computational pathology. Nat Med (2024). [Paper](https://doi.org/10.1038/s41591-024-02857-3), [HuggingFace](https://huggingface.co/MahmoodLab/UNI2-h)
- H-optimus-1: [HuggingFace](https://huggingface.co/bioptimus/H-optimus-1)
- Virchow2: Zimmermann, E., Vorontsov, E., Viret et al. Virchow2: Scaling self-supervised mixed magnification models in pathology (2024). [Paper](https://arxiv.org/abs/2408.00738), [HuggingFace](https://huggingface.co/paige-ai/Virchow2)
- CTransPath: Wang, X., Yang, S., Zhang et al. Transformer-based unsupervised contrastive learning for histopathological image classification. Medical Image Analysis, 81, 102559 (2022). [Paper](https://www.sciencedirect.com/science/article/pii/S1361841522002043), [GitHub](https://github.com/Xiyue-Wang/TransPath?tab=readme-ov-file)
- RetCCL: Wang, X., Du, Y., Yang, S. et al. RetCCL: Clustering-guided contrastive learning for whole-slide image retrieval. Medical Image Analysis, 83, 102645 (2023). [Paper](https://www.sciencedirect.com/science/article/pii/S1361841522002730), [GitHub](https://github.com/Xiyue-Wang/RetCCL)

**Public MSI models:**

- Niehues, J.M., Quirke, P., West, N.P. et al. Generalizable biomarker prediction from cancer pathology slides with self-supervised deep learning: A retrospective multi-centric study. Cell Reports Medicine, 4(4) (2023). [Paper](https://www.sciencedirect.com/science/article/pii/S2666379123000861?via%3Dihub), [HuggingFace](https://huggingface.co/datasets/CTPLab-DBE-UniBas/staining-robustness-evaluation/tree/main/MSI_classification_models/NIEHEUS2023), [Original Repo](https://github.com/KatherLab/crc-models-2022/tree/main/Quasar_models/Wang%2BattMIL/isMSIH)
- Wagner, S.J., Reisenbüchler, D., West, N.P. et al. Transformer-based biomarker prediction from colorectal cancer histology: A large-scale multicentric study. Cancer Cell, 41(9), 1650-1661 (2023). [Paper](https://www.sciencedirect.com/science/article/pii/S1535610823002787?via%3Dihub), [HuggingFace](https://huggingface.co/datasets/CTPLab-DBE-UniBas/staining-robustness-evaluation/tree/main/MSI_classification_models/WAGNER2023), [Original Repo](https://github.com/peng-lab/HistoBistro/tree/main/CancerCellCRCTransformer/trained_models)