---
language: en
library_name: pytorch
license: mit
pipeline_tag: image-classification
tags:
- medical-imaging
- chest-x-ray
- explainable-ai
- efficientnet
- MedicalPatchNet
---

# MedicalPatchNet: Model Weights
This repository hosts the pre-trained model weights for **MedicalPatchNet** and the baseline **EfficientNetV2-S** model, as described in the paper:

**[MedicalPatchNet: A Patch-Based Self-Explainable AI Architecture for Chest X-ray Classification](https://www.nature.com/articles/s41598-026-40358-0)** (Nature Scientific Reports, 2026).
Preprint available on [arXiv:2509.07477](https://arxiv.org/abs/2509.07477).

For the complete source code, documentation, and instructions on how to train and evaluate the models, please visit our main GitHub repository:

**[https://github.com/TruhnLab/MedicalPatchNet](https://github.com/TruhnLab/MedicalPatchNet)**
---

## Overview

MedicalPatchNet is a self-explainable deep learning architecture for chest X-ray classification that provides transparent, interpretable predictions without relying on post-hoc explanation methods. Unlike traditional black-box models, which need external tools such as Grad-CAM for interpretability, MedicalPatchNet builds explainability directly into its architectural design.

The architecture divides each image into non-overlapping patches, classifies every patch independently with an EfficientNetV2-S backbone, and aggregates the patch-level predictions by averaging. This makes each patch's contribution to the diagnosis directly visualizable.
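The patch-split, per-patch classification, and averaging pipeline can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the patch size, class count, and the toy stand-in for the EfficientNetV2-S backbone are all assumptions.

```python
import torch
import torch.nn as nn


class PatchAveragingClassifier(nn.Module):
    """Illustrative MedicalPatchNet-style wrapper: split the image into
    non-overlapping patches, classify each patch independently, then
    average the per-patch logits into an image-level prediction."""

    def __init__(self, backbone: nn.Module, patch_size: int = 64):
        super().__init__()
        self.backbone = backbone      # any per-patch classifier
        self.patch_size = patch_size

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape
        p = self.patch_size
        # (B, C, H, W) -> (B * n_patches, C, p, p), non-overlapping tiles
        patches = (
            x.unfold(2, p, p).unfold(3, p, p)   # (B, C, nH, nW, p, p)
             .permute(0, 2, 3, 1, 4, 5)
             .reshape(-1, c, p, p)
        )
        patch_logits = self.backbone(patches)   # (B * n_patches, n_classes)
        patch_logits = patch_logits.view(b, -1, patch_logits.shape[-1])
        # Averaging the patch logits yields the image-level prediction,
        # so each patch's contribution is directly readable.
        return patch_logits.mean(dim=1), patch_logits


# Toy per-patch classifier standing in for EfficientNetV2-S (assumption)
toy_backbone = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 14)
)
model = PatchAveragingClassifier(toy_backbone, patch_size=64)
image_logits, patch_logits = model(torch.randn(2, 3, 256, 256))
# image_logits: (2, 14); patch_logits: (2, 16, 14) for a 4x4 patch grid
```

Because the image-level score is a plain average of patch scores, reshaping `patch_logits` back onto the patch grid gives the per-patch evidence map directly, with no post-hoc attribution step.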
### Key Features

- **Self-explainable by design**: no need for external interpretation methods such as Grad-CAM.
- **Competitive performance**: matches the classification performance of EfficientNetV2-S (AUROC 0.907 vs. 0.908).
- **Superior localization**: significantly outperforms Grad-CAM variants at pathology localization on the CheXlocalize dataset (mean hit-rate 0.485 vs. 0.376).
- **Faithful explanations**: saliency maps directly reflect the model's true reasoning, mitigating risks associated with shortcut learning.

---
|
## How to Use These Weights

The weights provided here are intended to be used with the code from our [GitHub repository](https://github.com/TruhnLab/MedicalPatchNet), which includes scripts for data preprocessing, training, and evaluation.
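The checkpoints follow the standard PyTorch state-dict pattern; the snippet below shows that generic save/load round trip on a toy module. The file path and the toy model are placeholders: the actual checkpoint file names and model classes are defined in the GitHub repository.

```python
import tempfile
import os
import torch
import torch.nn as nn

# Toy module standing in for the real model class (assumption); the
# loading pattern itself is the standard PyTorch one.
model = nn.Linear(4, 2)

ckpt_path = os.path.join(tempfile.mkdtemp(), "example_weights.pt")
torch.save(model.state_dict(), ckpt_path)

# Restore into a freshly constructed module of the same architecture.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
restored.eval()  # inference mode: disables dropout/batch-norm updates
```

For the released weights you would construct the model class from the repository instead of `nn.Linear` and point `torch.load` at the downloaded checkpoint file.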
## Models Included

- **MedicalPatchNet**: the main patch-based, self-explainable model.
- **EfficientNetV2-S**: the baseline model used for comparison with post-hoc methods (Grad-CAM, Grad-CAM++, and Eigen-CAM).

---
|
## Citation

If you use MedicalPatchNet or these model weights in your research, please cite our work:

```bibtex
@article{wienholt2026medicalpatchnet,
  title={MedicalPatchNet: a patch-based self-explainable AI architecture for chest X-ray classification},
  author={Wienholt, Patrick and Kuhl, Christiane and Kather, Jakob Nikolas and Nebelung, Sven and Truhn, Daniel},
  journal={Scientific Reports},
  year={2026},
  publisher={Nature Publishing Group UK London}
}
```