SensiNet-Mammography: Bayesian Dual-Stream Architecture for Interpretable Mammographic Malignancy Classification
Model Description
SensiNet is a computer vision model engineered specifically for the detection of breast cancer in mammograms. Designed to operate as a high-fidelity diagnostic support system, SensiNet categorizes structural abnormalities as either Malignant or Benign.
Unlike standard "black box" deep learning classifiers, SensiNet prioritizes clinical transparency and epistemic uncertainty quantification. It utilizes a Dual-Stream Vision architecture built upon an ImageNet-pretrained Xception backbone, heavily optimized using Focal Loss to prioritize Sensitivity (Recall).
Furthermore, SensiNet is designed to be paired with a frontend interpreting environment capable of performing Bayesian Monte Carlo (MC) Dropout Inference. This allows the model to output not just a standard probability vector, but also the variance across stochastic forward passes, an estimate of the model's epistemic uncertainty (its decision confidence) that serves as a robust computational "second opinion."
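MC Dropout inference of this kind can be sketched as follows. This is a minimal, generic illustration (the function name `mc_dropout_predict` and the layer-toggling approach are illustrative, not the repository's actual implementation): dropout layers are kept stochastic at inference time, and the mean and variance of the softmax outputs over N passes are returned.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 10):
    """Run n_passes stochastic forward passes with dropout active.

    Returns the per-class mean probability and its variance; the variance
    is the uncertainty estimate described above.
    """
    model.eval()  # keep batch-norm statistics frozen
    # Re-enable only the dropout layers so each pass samples a new mask
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_passes)]
        )
    return probs.mean(dim=0), probs.var(dim=0)
```

A low variance indicates the stochastic passes agree (high decision confidence); a high variance flags cases that may warrant closer radiologist review.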
Intended Uses & Limitations
Intended Clinical Use Cases
- Computer-Aided Detection (CADx): SensiNet is intended to assist radiologists in screening environments by highlighting suspicious Regions of Interest (ROI) via Grad-CAM mapping.
- Triage / Workflow Prioritization: In environments with severe radiologist shortages, SensiNet can be utilized to flag high-probability malignant scans, pushing them to the top of the unassigned reading queue.
- Educational Tooling: As a reference tool for identifying subtle architectural distortions and microcalcifications in digital mammography.
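The Grad-CAM ROI highlighting mentioned above can be sketched with forward/backward hooks. This is a generic hook-based Grad-CAM illustration, not the repository's actual code; the choice of `target_layer` (typically the last convolutional block of the Xception backbone) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model: nn.Module, x: torch.Tensor, target_layer: nn.Module, class_idx: int):
    """Return a [0, 1]-normalized Grad-CAM heatmap over target_layer."""
    acts, grads = {}, {}

    def fwd_hook(_, __, output):
        acts["value"] = output           # feature maps from the forward pass

    def bwd_hook(_, grad_in, grad_out):
        grads["value"] = grad_out[0]     # gradients w.r.t. those feature maps

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(x)
        model.zero_grad()
        logits[:, class_idx].sum().backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights: global-average-pooled gradients (Grad-CAM, Selvaraju et al.)
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["value"]).sum(dim=1))
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
```

The resulting heatmap is upsampled to the input resolution and overlaid on the mammogram to mark the suspicious ROI.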
Limitations & Domain Exclusions
- Domain Shift Vulnerability: SensiNet was exclusively tuned on the digitized film signatures of the CBIS-DDSM dataset. Applying these weights to modern 3D Digital Breast Tomosynthesis (DBT) scans, or to Full-Field Digital Mammography (FFDM) arrays taken directly off proprietary hardware (e.g., modern Hologic scanners), without domain-adaptation retraining will likely degrade AUC performance due to differing pixel-noise profiles.
- Not an Autonomous Diagnostician: SensiNet is a support tool and does not render a medical diagnosis. A board-certified physician must contextualize its output alongside patient history.
- Processing Requirements: The 10-pass Bayesian MC Dropout inference incurs significantly higher latency (roughly 10x) than a single deterministic forward pass.
Training Data
SensiNet was fine-tuned on a pre-processed and heavily augmented derivative of the CBIS-DDSM (Curated Breast Imaging Subset of DDSM) dataset.
- Modalities: CC (Craniocaudal) and MLO (Mediolateral Oblique) views were processed.
- Class Imbalance Mitigation: The dataset exhibits the class imbalance typical of oncological data. This was mitigated during training via Focal Loss, forcing the optimizer to heavily penalize False Negatives (missed malignancies) while down-weighting the gradient influence of easily classified benign examples.
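The Focal Loss mechanism described above can be sketched as follows. This is a standard formulation (Lin et al., 2017) with a scalar `alpha`, shown for illustration; the repository's exact hyperparameters and any per-class alpha weighting are not specified here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Focal loss: scales cross-entropy by (1 - p_t)**gamma so that
    easy, confidently-correct examples contribute almost no gradient,
    while hard examples (e.g., missed malignancies) dominate training."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)  # model's probability for the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()
```

With `gamma=2`, an example classified correctly with 95% confidence is down-weighted by a factor of 400 relative to plain cross-entropy, which is what shifts the optimizer's attention toward the rare malignant cases.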
Performance Metrics
SensiNet was evaluated on a sequestered testing subset of the CBIS-DDSM dataset, using bootstrapped confidence intervals to quantify metric variability.
- Area Under the Curve (AUC-ROC): 0.926
- Peak Sensitivity (Recall): 0.892
- Specificity: 0.841
- F1-Score: 0.865
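Bootstrapped confidence intervals of the kind used above can be computed as sketched below. Both functions (`auc_score`, `bootstrap_auc_ci`) are illustrative implementations, not the evaluation code actually used; the resample count and percentile method are assumptions.

```python
import numpy as np

def auc_score(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """AUC-ROC via pairwise comparisons (Mann-Whitney U); ties count half."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC: resample cases with replacement,
    recompute AUC on each resample, take the alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    n, stats = len(y_true), []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample must contain both classes for AUC to be defined
        stats.append(auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Reporting the interval alongside the point estimate makes clear how much of the 0.926 AUC is attributable to sampling noise in the test split.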
Expected Inputs & Usage
To load the advanced_model_best.pth weights file locally for inference, first instantiate an identical PyTorch architecture. The model expects 3-channel tensors normalized with the ImageNet statistics (single-channel mammograms should be replicated across channels).
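The loading step can be sketched as below. The helper name `load_checkpoint` is illustrative; the actual architecture class must come from the SensiNet repository, since `load_state_dict` only succeeds when every layer name and tensor shape matches the checkpoint exactly.

```python
import torch
import torch.nn as nn

def load_checkpoint(model: nn.Module, path: str) -> nn.Module:
    """Load a .pth state_dict into a matching architecture.

    map_location="cpu" makes checkpoints saved on CUDA devices
    loadable on any machine; move the model to GPU afterwards if needed.
    """
    state_dict = torch.load(path, map_location="cpu")
    model.load_state_dict(state_dict)  # raises if names/shapes differ
    model.eval()                       # disable dropout/batch-norm updates
    return model
```

Usage would look like `model = load_checkpoint(SensiNetDualStream(), "advanced_model_best.pth")`, where `SensiNetDualStream` is a placeholder for the architecture class defined in the repository.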
Expected Preprocessing
- Resize input image: (299, 299)
- Normalize: mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- Format: torch.Tensor of shape [Batch, 3, 299, 299]
Usage Example
This repository provides the .pth weights. A full graphical Clinical Web Dashboard, built on FastAPI and Alpine.js and explicitly designed to ingest this Hugging Face model file, is available at the SensiNet GitHub Repository.