Anirudh Balaraman committed on
Commit
070a2e9
·
unverified ·
1 Parent(s): 63647f2

Revise README for improved clarity and structure


Updated README to enhance clarity and organization, including changes to usage instructions and section headings.

Files changed (1)
  1. README.md +22 -37
README.md CHANGED
@@ -29,7 +29,7 @@ Deep learning methods used in medical AI—particularly for csPCa prediction and
  - ⚡ **Automatic Attention Heatmaps** — Weak attention heatmaps generated automatically from the DWI and ADC sequences.
  - 🧠 **Weakly-Supervised Attention** — Heatmap-guided patch sampling and a cosine-similarity attention loss replace the need for voxel-level labels.
  - 🧩 **3D Multiple Instance Learning** — Extracts volumetric patches from bpMRI scans and aggregates them via transformer + attention pooling.
- - 👁️ **Two-stage pipeline** — Visualise salient patches highlighting probable tumour regions.
  - 🧹 **Preprocessing** — Preprocessing to minimize inter-center MRI acquisition variability.
  - 🏥 **End-to-end Pipeline** — Open-source, clinically viable complete pipeline.
 
@@ -52,32 +52,31 @@ curl -L -o models/file3.pth https://huggingface.co/anirudh0410/WSAttention-Prost
  ```
 
  ## 🚀 Usage
- ### Inference
  ```bash
  python run_inference.py --config config/config_preprocess.yaml
  ```
 
  Run run_inference.py to execute the full pipeline, from preprocessing to model predictions.
- - 📝 **Input arguments:**
- - 📂 *t2_dir, dwi_dir, adc_dir*: Paths to the T2W, DWI and ADC sequences, respectively.
- - 📂 *output_dir*: Path to store preprocessed files and results.
 
 
  ⚠️ ***NOTE: For each scan, all sequences should share the same filename, and the input files must be in NRRD format.***
 
  - 📊 **Outputs:**
  The following are stored for each scan:
- - 🩺 Risk of csPCa.
- - 🔢 PI-RADS score.
- - 📍 Coordinates of the top 5 salient patches.
- The results are stored in results.json in output_dir, along with the intermediary files from preprocessing, including the prostate segmentation mask. The patches can be visualised using visualisation.ipynb.
 
 
 
 
-
- ### Preprocessing
-
- Execute preprocess_main.py to preprocess your MRI files. Each sequence (T2W, DWI, and ADC) must be placed in a separate folder, with the paths specified in config_preprocess.yaml.
  ```bash
  python preprocess_main.py \
  --steps register_and_crop get_segmentation_mask histogram_match get_heatmap \
@@ -85,23 +84,22 @@ python preprocess_main.py \
  ```
 
 
- ### PI-RADS Training
 
  ```bash
  python run_pirads.py --mode train --config config/config_pirads_train.yaml
  ```
 
- ### csPCa Training
 
  ```bash
  python run_cspca.py --mode train --config config/config_cspca_train.yaml
  ```
- ### Testing
 
  ```bash
  python run_pirads.py --mode test --config config/config_pirads_test.yaml
  python run_cspca.py --mode test --config config/config_cspca_test.yaml
- python run_inference.py --config config/config_preprocess.yaml
  ```
 
  See the [full documentation](https://anirudhbalaraman.github.io/WSAttention-Prostate/) for detailed configuration options and data format requirements.
@@ -117,35 +115,22 @@ WSAttention-Prostate/
  ├── config/ # YAML configuration files
  ├── src/
  │ ├── model/
- │ │ ├── MIL.py # MILModel_3D — core MIL architecture
- │ │ └── csPCa_model.py # csPCa_Model + SimpleNN head
  │ ├── data/
  │ │ ├── data_loader.py # MONAI data pipeline
- │ │ └── custom_transforms.py
  │ ├── train/
  │ │ ├── train_pirads.py # PI-RADS training loop
  │ │ └── train_cspca.py # csPCa training loop
- │ ├── preprocessing/ # Registration, segmentation, heatmaps
- │ └── utils.py # Shared utilities and step validation
  ├── tests/
  ├── dataset/ # Reference images for histogram matching
  └── models/ # Downloaded checkpoints (not in repo)
  ```
 
- ## Architecture
-
- Input MRI patches are processed independently through a 3D ResNet18 backbone, then aggregated via a transformer encoder and attention pooling:
-
- ```mermaid
- flowchart TD
- A["Input [B, N, C, D, H, W]"] --> B["Reshape to [B*N, C, D, H, W]"]
- B --> C[ResNet18-3D Backbone]
- C --> D["Reshape to [B, N, 512]"]
- D --> E[Transformer Encoder\n4 layers, 8 heads]
- E --> F[Attention Pooling\n512 → 2048 → 1]
- F --> G["Weighted Sum [B, 512]"]
- G --> H["FC Head [B, num_classes]"]
- ```
 
- For csPCa prediction, the backbone is frozen and a 3-layer MLP (`512 → 256 → 128 → 1`) replaces the classification head.
 
 
  - ⚡ **Automatic Attention Heatmaps** — Weak attention heatmaps generated automatically from the DWI and ADC sequences.
  - 🧠 **Weakly-Supervised Attention** — Heatmap-guided patch sampling and a cosine-similarity attention loss replace the need for voxel-level labels.
  - 🧩 **3D Multiple Instance Learning** — Extracts volumetric patches from bpMRI scans and aggregates them via transformer + attention pooling.
+ - 👁️ **Explainable** — Visualise salient patches highlighting probable tumour regions.
  - 🧹 **Preprocessing** — Preprocessing to minimize inter-center MRI acquisition variability.
  - 🏥 **End-to-end Pipeline** — Open-source, clinically viable complete pipeline.
 
 
  ```
 
  ## 🚀 Usage
+ ### 🩺 Inference
  ```bash
  python run_inference.py --config config/config_preprocess.yaml
  ```
 
  Run run_inference.py to execute the full pipeline, from preprocessing to model predictions.
+ - 📂 **Input arguments:**
+ - *t2_dir, dwi_dir, adc_dir*: Paths to the T2W, DWI and ADC sequences, respectively.
+ - *output_dir*: Path to store preprocessed files and results.
 
 
  ⚠️ ***NOTE: For each scan, all sequences should share the same filename, and the input files must be in NRRD format.***
 
  - 📊 **Outputs:**
  The following are stored for each scan:
+ - Risk of csPCa.
+ - PI-RADS score.
+ - Coordinates of the top 5 salient patches.
+ The results are stored in `results.json` in output_dir, along with the intermediary files from preprocessing, including the prostate segmentation mask. The patches can be visualised using `visualisation.ipynb`.
 
 
+ ### 🧹 Preprocessing
 
+ Execute preprocess_main.py to preprocess your MRI files.
+ ⚠️ ***NOTE: For each scan, all sequences should share the same filename, and the input files must be in NRRD format.***
 
  ```bash
  python preprocess_main.py \
  --steps register_and_crop get_segmentation_mask histogram_match get_heatmap \
  ```
 
 
+ ### ⚙️ PI-RADS Training
 
  ```bash
  python run_pirads.py --mode train --config config/config_pirads_train.yaml
  ```
 
+ ### ⚙️ csPCa Training
 
  ```bash
  python run_cspca.py --mode train --config config/config_cspca_train.yaml
  ```
+ ### 📊 Testing
 
  ```bash
  python run_pirads.py --mode test --config config/config_pirads_test.yaml
  python run_cspca.py --mode test --config config/config_cspca_test.yaml
  ```
 
  See the [full documentation](https://anirudhbalaraman.github.io/WSAttention-Prostate/) for detailed configuration options and data format requirements.
 
  ├── config/ # YAML configuration files
  ├── src/
  │ ├── model/
+ │ │ ├── MIL.py # MILModel_3D — core MIL architecture, PI-RADS model
+ │ │ └── csPCa_model.py # csPCa_Model
  │ ├── data/
  │ │ ├── data_loader.py # MONAI data pipeline
+ │ │ └── custom_transforms.py # Custom MONAI transforms
  │ ├── train/
  │ │ ├── train_pirads.py # PI-RADS training loop
  │ │ └── train_cspca.py # csPCa training loop
+ │ ├── preprocessing/ # Registration, segmentation, histogram matching, heatmaps
+ │ └── utils.py # Shared utilities
  ├── tests/
  ├── dataset/ # Reference images for histogram matching
  └── models/ # Downloaded checkpoints (not in repo)
  ```
 
+ ## 🙏 Acknowledgement
+ This work was in large part funded by the Wilhelm Sander Foundation. Funded by the European Union. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Health and Digital Executive Agency (HADEA). Neither the European Union nor the granting authority can be held responsible for them.
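The Architecture section removed by this commit described patch features flowing through a 3D ResNet18 backbone, a 4-layer, 8-head transformer encoder, an attention-pooling MLP (512 → 2048 → 1), a weighted sum, and an FC head. A minimal PyTorch sketch of that aggregation stage, assuming the shapes from the removed mermaid chart; the class and variable names are illustrative, not the repository's actual code:

```python
# Illustrative sketch of the MIL aggregation described in the removed
# Architecture section. A linear identity on precomputed [B, N, 512]
# features stands in for the ResNet18-3D backbone.
import torch
import torch.nn as nn

class MILAggregator(nn.Module):
    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        # Transformer encoder: 4 layers, 8 heads (per the removed chart).
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Attention pooling: per-patch scoring MLP, 512 -> 2048 -> 1.
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, 2048), nn.Tanh(), nn.Linear(2048, 1))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):            # feats: [B, N, 512] patch features
        x = self.transformer(feats)      # contextualise patches
        w = torch.softmax(self.attn(x), dim=1)  # [B, N, 1] attention weights
        pooled = (w * x).sum(dim=1)      # weighted sum -> [B, 512]
        return self.head(pooled)         # [B, num_classes]

logits = MILAggregator()(torch.randn(2, 12, 512))
print(tuple(logits.shape))  # (2, 2)
```

The softmax over the patch axis is what makes the pooling interpretable: the same weights `w` can be ranked to surface the most salient patches.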
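Per the revised usage section, run_inference.py stores each scan's csPCa risk, PI-RADS score, and top-5 patch coordinates in results.json under output_dir. A hypothetical loader sketch; the field names in the demo record are invented for illustration and are not the pipeline's documented schema:

```python
# Hypothetical reader for the results.json written by run_inference.py.
import json
from pathlib import Path

def load_results(output_dir):
    """Read the results.json that the inference pipeline writes to output_dir."""
    with open(Path(output_dir) / "results.json", encoding="utf-8") as f:
        return json.load(f)

# Demo with an invented record; the real key names may differ.
demo = {"scan_001": {"cspca_risk": 0.73, "pirads": 4,
                     "top_patches": [[32, 48, 12]]}}
Path("demo_out").mkdir(exist_ok=True)
(Path("demo_out") / "results.json").write_text(json.dumps(demo))

results = load_results("demo_out")
print(results["scan_001"]["pirads"])  # prints 4
```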