Update README.md
README.md CHANGED
@@ -14,11 +14,11 @@ metrics:
library_name: tensorflow
---

+# PCOS Detection with Explainable AI

A deep learning model for **Polycystic Ovary Syndrome (PCOS)** detection from ultrasound images with **Grad-CAM** visualization for clinical interpretability.

+## Model Overview

- **Architecture**: Dual-path CNN with multi-head attention
- **Input**: 224×224 RGB ultrasound images

@@ -65,7 +65,7 @@ CLASS_NAMES = ["infected", "noninfected"]
# Download model from HF
# ============================================================
MODEL_PATH = hf_hub_download(repo_id=HF_MODEL_REPO, filename=MODEL_FILENAME)
+print(f"Model downloaded to: {MODEL_PATH}")

# ============================================================
# Custom Lambda Functions

@@ -124,7 +124,7 @@ output = tf.keras.layers.Activation('softmax', name='softmax')(logits)

model = Model(input_layer, output)
model.load_weights(MODEL_PATH)
+print("Weights loaded successfully")

# ============================================================
# Load & Preprocess Image
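The `# Load & Preprocess Image` step itself falls outside these hunks. As a minimal sketch of what that step plausibly does — matching the `np.expand_dims(img, axis=0)` context line and the model's fixed 224×224 RGB input — the following could work; the `preprocess` name and the [0, 1] pixel scaling are assumptions, not the repo's actual code:

```python
import numpy as np
from PIL import Image

def preprocess(path):
    # Enforce the model's fixed input contract: 224x224, 3-channel RGB.
    img = Image.open(path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # [0, 1] scaling is an assumption
    return np.expand_dims(arr, axis=0)               # add batch axis -> (1, 224, 224, 3)
```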

@@ -144,8 +144,8 @@ pred = model.predict(img_array, verbose=0)[0]
pred_class = np.argmax(pred)
confidence = pred[pred_class]

+print(f"\nPrediction: {CLASS_NAMES[pred_class]}")
+print(f"Confidence: {confidence:.2%}")

# ============================================================
# Grad-CAM
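The Grad-CAM code itself lies outside these hunks. Since the README states the heatmaps are computed from the pre-softmax `logits` (avoiding saturated softmax gradients), here is a generic sketch of that technique; the function name, the `conv_layer_name` argument, and the choice of target layer are assumptions, not the repo's actual implementation:

```python
import numpy as np
import tensorflow as tf

def grad_cam_from_logits(model, img_array, conv_layer_name, class_idx):
    # `model` must end in raw logits (no softmax), so the class-score
    # gradient cannot saturate the way a softmax probability can.
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, logits = grad_model(img_array)
        score = logits[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # GAP over spatial dims
    cam = tf.einsum("bijc,bc->bij", conv_out, weights)  # channel-weighted sum
    cam = tf.nn.relu(cam)                               # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]
```

The normalized map can then be resized to 224×224 and overlaid on the input image.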

@@ -230,7 +230,7 @@ img_array = np.expand_dims(img, axis=0)
- **Blue/Cool regions**: Low influence on decision
- **Dual visualization**: Separate heatmaps for upper and lower ovarian regions

+## Model Architecture

Input (224×224×3)
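The README's ASCII architecture diagram is truncated in this diff. Purely as an illustration of the dual-path-CNN-plus-multi-head-attention design named above, a Keras sketch follows; every layer size, depth, and the exact fusion wiring are assumptions — only the `logits` → `softmax` tail follows the README's own naming:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_path(x, prefix):
    # One plain downsampling conv stack per path; the real paths may differ.
    for i, filters in enumerate([32, 64, 64]):
        x = layers.Conv2D(filters, 3, strides=2, padding="same",
                          activation="relu", name=f"{prefix}_conv{i}")(x)
    return x  # (batch, 28, 28, 64) for a 224x224 input

inp = layers.Input((224, 224, 3))
a = conv_path(inp, "path_a")
b = conv_path(inp, "path_b")

# Flatten each spatial grid into a token sequence, then fuse the two paths
# with multi-head attention (path A as queries, path B as keys/values).
seq_a = layers.Reshape((-1, 64))(a)
seq_b = layers.Reshape((-1, 64))(b)
fused = layers.MultiHeadAttention(num_heads=4, key_dim=16)(seq_a, seq_b)

x = layers.GlobalAveragePooling1D()(fused)
logits = layers.Dense(2, name="logits")(x)
output = layers.Activation("softmax", name="softmax")(logits)
model = Model(inp, output)
```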

@@ -247,7 +247,7 @@ Input (224×224×3)
- Multi-head attention for feature fusion
- Logits-based Grad-CAM (fixes saturated softmax gradients)

+## Dataset

- **Total**: 11,784 ultrasound images
- **PCOS-positive**: 6,784 images (57.5%)

@@ -255,34 +255,34 @@ Input (224×224×3)
- **Source**: 3 clinics (2018-2022), expert-annotated
- **Dataset**: [PCOS XAI Ultrasound](https://www.kaggle.com/datasets/ibadeus/pcos-xai-ultrasound-dataset)

+## Important Notes

**Clinical Use:**
+- Research purposes only - NOT FDA approved
+- Not a diagnostic tool - requires professional validation
+- Must be validated on local datasets before clinical deployment

**Technical:**
- Fixed 224×224 input size required
- RGB images only
- Model performance may vary across different ultrasound machines

+## Citation

```bibtex
@misc{pcos_xai_2024,
  title={PCOS Detection with Explainable AI},
  author={Dehsahk-AI},
+  year={2025},
  url={https://huggingface.co/Dehsahk-AI/Pcos-Detect}
}
```

+## License

MIT License - See LICENSE file for details.

+## Acknowledgments

- Grad-CAM: Selvaraju et al. (ICCV 2017)
- Multi-head Attention: Vaswani et al. (NeurIPS 2017)