Delete src
- src/DEPLOYMENT_GUIDE.md +0 -366
- src/QUICKSTART.md +0 -220
- src/README.md +0 -291
- src/TODO.md +0 -181
- src/check_mask.py +0 -53
- src/dice_calculator_app.py +0 -692
- src/example_usage.md +0 -389
- src/preprocessing_consolidation.py +0 -156
- src/quickstart.sh +0 -88
- src/requirements.txt +0 -31
- src/sam_integration.py +0 -395
- src/streamlit_app.py +0 -1207
- src/test_image_loading.py +0 -121
- src/uploaded_images/7035909_20240326.jpg +0 -3
- src/uploaded_images/7043276_20240403.jpg +0 -3
src/DEPLOYMENT_GUIDE.md
DELETED
@@ -1,366 +0,0 @@
# 🚀 Deployment Guide: Pneumonia Annotation Tool

This guide explains how to deploy the annotation app to **Hugging Face Spaces** (recommended, free) or **Streamlit Community Cloud**.

---

## 📋 Prerequisites

1. GitHub account
2. Hugging Face account (free) → [huggingface.co](https://huggingface.co)
3. Your annotation app files

---

## 📁 Step 1: Prepare Your Project Structure

Create a new folder with the following structure:

```
pneumonia-annotation-tool/
├── app.py               # Your main Streamlit app (renamed)
├── requirements.txt     # Python dependencies
├── README.md            # Project description
├── .gitignore           # Files to ignore
├── packages.txt         # System dependencies (optional)
└── sample_images/       # Sample X-ray images for demo (optional)
    └── sample_xray.jpg
```

### 1.1 Copy and rename your app

```bash
cp annotation_app.py app.py
```

### 1.2 Create `requirements.txt`

```txt
streamlit>=1.28.0
streamlit-drawable-canvas>=0.9.3
opencv-python-headless>=4.8.0
numpy>=1.24.0
pandas>=2.0.0
Pillow>=10.0.0
```

> ⚠️ Use `opencv-python-headless` instead of `opencv-python` for cloud deployment!
|
| 48 |
-
|
| 49 |
-
### 1.3 Create `.gitignore`
|
| 50 |
-
|
| 51 |
-
```gitignore
|
| 52 |
-
# Python
|
| 53 |
-
__pycache__/
|
| 54 |
-
*.py[cod]
|
| 55 |
-
*.so
|
| 56 |
-
.Python
|
| 57 |
-
env/
|
| 58 |
-
venv/
|
| 59 |
-
.venv/
|
| 60 |
-
|
| 61 |
-
# IDE
|
| 62 |
-
.vscode/
|
| 63 |
-
.idea/
|
| 64 |
-
|
| 65 |
-
# Data (don't upload patient data!)
|
| 66 |
-
data/Pacientes/
|
| 67 |
-
*.png
|
| 68 |
-
*.jpg
|
| 69 |
-
!sample_images/*.jpg
|
| 70 |
-
|
| 71 |
-
# OS
|
| 72 |
-
.DS_Store
|
| 73 |
-
Thumbs.db
|
| 74 |
-
```
|
| 75 |
-
|
| 76 |
-
### 1.4 Create `README.md`
|
| 77 |
-
|
| 78 |
-
```markdown
|
| 79 |
-
---
|
| 80 |
-
title: Pneumonia Consolidation Annotation Tool
|
| 81 |
-
emoji: π«
|
| 82 |
-
colorFrom: blue
|
| 83 |
-
colorTo: green
|
| 84 |
-
sdk: streamlit
|
| 85 |
-
sdk_version: 1.28.0
|
| 86 |
-
app_file: app.py
|
| 87 |
-
pinned: false
|
| 88 |
-
---
|
| 89 |
-
|
| 90 |
-
# π« Pneumonia Consolidation Annotation Tool
|
| 91 |
-
|
| 92 |
-
A Streamlit application for radiologists to annotate pneumonia consolidations
|
| 93 |
-
on chest X-rays, supporting multilobar annotations with different colors.
|
| 94 |
-
|
| 95 |
-
## Features
|
| 96 |
-
- π¨ Draw directly on X-ray images
|
| 97 |
-
- π« Multiple consolidation sites with unique colors
|
| 98 |
-
- π Zoom & pan for detailed annotation
|
| 99 |
-
- πΎ Save masks and metadata
|
| 100 |
-
- π Inter-rater agreement (Dice, IoU)
|
| 101 |
-
```
---

## 🤗 Option A: Deploy to Hugging Face Spaces (Recommended)

### Step 1: Create a Hugging Face Account
Go to [huggingface.co](https://huggingface.co) and sign up (free).

### Step 2: Create a New Space

1. Click your profile → **New Space**
2. Fill in:
   - **Space name**: `pneumonia-annotation-tool`
   - **License**: Choose one (e.g., MIT)
   - **SDK**: Select **Streamlit**
   - **Visibility**: Public or Private
3. Click **Create Space**

### Step 3: Upload Files

**Option A: Via Web Interface**
1. Go to your Space → **Files** tab
2. Click **Add file** → **Upload files**
3. Upload: `app.py`, `requirements.txt`, `README.md`

**Option B: Via Git (recommended)**

```bash
# Clone your Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/pneumonia-annotation-tool
cd pneumonia-annotation-tool

# Copy your files
cp /path/to/annotation_app.py app.py
cp /path/to/requirements.txt .
cp /path/to/README.md .

# Add and push
git add .
git commit -m "Initial deployment"
git push
```

### Step 4: Modify App for Cloud

Edit `app.py` to change the default patients path:

```python
# Change this line in the sidebar:
patients_path = st.sidebar.text_input(
    "Patients Folder Path",
    value="./uploaded_images",  # Changed from ../data/Pacientes
    help="Upload images or specify folder path",
)
```

### Step 5: Add Image Upload Feature

Add this code after the `patients_path` input to allow users to upload their own images:

```python
# Add file uploader for cloud deployment
st.sidebar.header("📤 Upload Images")
uploaded_files = st.sidebar.file_uploader(
    "Upload X-ray Images",
    type=["jpg", "jpeg", "png"],
    accept_multiple_files=True,
)

if uploaded_files:
    upload_dir = Path("./uploaded_images/patient_upload")
    upload_dir.mkdir(parents=True, exist_ok=True)
    for uf in uploaded_files:
        with open(upload_dir / uf.name, "wb") as f:
            f.write(uf.getbuffer())
    st.sidebar.success(f"✅ Uploaded {len(uploaded_files)} images!")
```
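One caveat with the snippet above: it writes `uf.name` straight to disk. A hedged sketch of a small sanitizing helper you may want before saving (the `safe_name` function is my own suggestion, not part of the app; Streamlit does not sanitize filenames for you):

```python
from pathlib import Path

def safe_name(uploaded_name: str) -> str:
    """Reduce an uploaded filename to its base name so a crafted name
    like '../../etc/passwd' cannot escape the upload directory."""
    name = Path(uploaded_name).name  # strips any directory components
    return name.replace("\x00", "") or "unnamed"

# A hostile upload name is flattened to a plain file name
print(safe_name("../../etc/passwd"))  # -> passwd
```

You would then write to `upload_dir / safe_name(uf.name)` instead of `upload_dir / uf.name`.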
### Step 6: Wait for Build

Hugging Face will automatically build and deploy. Check the **Logs** tab if there are errors.

🎉 **Your app will be live at**: `https://huggingface.co/spaces/YOUR_USERNAME/pneumonia-annotation-tool`

---

## ☁️ Option B: Deploy to Streamlit Community Cloud

### Step 1: Push to GitHub

```bash
# Create a new GitHub repository
# Then push your code:
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin https://github.com/YOUR_USERNAME/pneumonia-annotation-tool.git
git push -u origin main
```

### Step 2: Deploy on Streamlit Cloud

1. Go to [share.streamlit.io](https://share.streamlit.io)
2. Sign in with GitHub
3. Click **New app**
4. Select:
   - Repository: `YOUR_USERNAME/pneumonia-annotation-tool`
   - Branch: `main`
   - Main file path: `app.py`
5. Click **Deploy**

🎉 **Your app will be live at**: `https://your-app-name.streamlit.app`

---

## 🔒 Handling Patient Images

### For Privacy/HIPAA Compliance:

> ⚠️ **NEVER upload real patient data to public cloud services!**

### Options for Images:

| Method | Best For |
|--------|----------|
| **File Upload Widget** | Users upload their own images at runtime |
| **Private Space** | Hugging Face private Spaces (requires Pro) |
| **Self-hosted** | Run on your own server with patient data |
| **Sample Images** | Include anonymized/synthetic samples for demo |

### Adding Sample Images for Demo

1. Create a `sample_images/` folder in your repo
2. Add anonymized or synthetic X-ray images
3. Update the default path:

```python
patients_path = st.sidebar.text_input(
    "Patients Folder Path",
    value="./sample_images",
)
```

---

## 🔧 Complete Modified `app.py` for Cloud

Here are the key modifications needed for cloud deployment:

### 1. Add imports at the top:

```python
import os
from pathlib import Path
```

### 2. Replace the patients path section with:

```python
# ── Sidebar: patients path ─────────────────────────────────────────
st.sidebar.header("📂 Patient Data")

# File uploader for cloud deployment
st.sidebar.subheader("📤 Upload X-rays")
uploaded_files = st.sidebar.file_uploader(
    "Upload chest X-ray images",
    type=["jpg", "jpeg", "png"],
    accept_multiple_files=True,
    help="Upload JPG/PNG chest X-ray images to annotate",
)

# Create upload directory
upload_dir = Path("./uploaded_images/uploads")
upload_dir.mkdir(parents=True, exist_ok=True)

if uploaded_files:
    for uf in uploaded_files:
        file_path = upload_dir / uf.name
        with open(file_path, "wb") as f:
            f.write(uf.getbuffer())
    st.sidebar.success(f"✅ {len(uploaded_files)} image(s) ready!")

patients_path = st.sidebar.text_input(
    "Images Folder Path",
    value="./uploaded_images",
    help="Folder containing images (or use uploader above)",
)
```

### 3. Update `get_all_patient_images` function:

```python
def get_all_patient_images(base_path):
    """Scan folders and collect all JPG/PNG images with annotation status."""
    base = Path(base_path)
    patient_images = []
    if not base.exists():
        return patient_images

    # Get all subdirectories (including 'uploads')
    folders = [base] + [f for f in base.iterdir() if f.is_dir()]

    for folder in folders:
        img_files = sorted(
            list(folder.glob("*.jpg")) +
            list(folder.glob("*.JPG")) +
            list(folder.glob("*.jpeg")) +
            list(folder.glob("*.png"))
        )
        for img in img_files:
            if "_mask" in img.name:  # Skip mask files
                continue
            mask_path = img.parent / f"{img.stem}_mask.png"
            meta_path = img.parent / f"{img.stem}_annotation.json"
            patient_images.append({
                "patient_id": folder.name,
                "image_path": img,
                "image_name": img.name,
                "mask_path": mask_path,
                "metadata_path": meta_path,
                "annotated": mask_path.exists(),
            })
    return patient_images
```
---

## 📝 Quick Deployment Checklist

- [ ] Rename `annotation_app.py` → `app.py`
- [ ] Create `requirements.txt` with `opencv-python-headless`
- [ ] Create `README.md` with Hugging Face metadata
- [ ] Add file upload widget for cloud users
- [ ] Update default paths for cloud environment
- [ ] Remove/anonymize any patient data
- [ ] Test locally with `streamlit run app.py`
- [ ] Push to GitHub or Hugging Face
- [ ] Verify deployment and check logs

---

## 🐛 Troubleshooting

### "No module named cv2"
→ Use `opencv-python-headless` in requirements.txt

### "Permission denied" errors
→ Use relative paths (`./uploaded_images`), not absolute paths

### App crashes on large images
→ Add memory limit handling or resize images on upload

### Slow startup
→ Reduce dependencies, use `@st.cache_data` for heavy computations
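For the "crashes on large images" item, one cheap mitigation is downscaling on upload. A minimal sketch of the size calculation (the `fit_within` helper is my own; you could pass its result to `PIL.Image.resize` before saving):

```python
def fit_within(width: int, height: int, max_side: int = 2048):
    """Return (w, h) scaled so the longer side is at most max_side,
    preserving aspect ratio. Images already small enough are unchanged."""
    scale = min(1.0, max_side / max(width, height))
    return max(1, round(width * scale)), max(1, round(height * scale))

print(fit_within(4000, 3000))  # -> (2048, 1536)
print(fit_within(800, 600))    # -> (800, 600), already fits
```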
---

## 📞 Support

- **Hugging Face Docs**: [huggingface.co/docs/hub/spaces](https://huggingface.co/docs/hub/spaces)
- **Streamlit Docs**: [docs.streamlit.io](https://docs.streamlit.io)
- **Streamlit Community**: [discuss.streamlit.io](https://discuss.streamlit.io)

---

*Created for the Pneumonia Consolidation Annotation Tool - Hospital Alma Máter*
src/QUICKSTART.md
DELETED
@@ -1,220 +0,0 @@
# Quick Start Guide - Get Started in 5 Minutes! ⚡

## For Immediate Testing (No Installation)

If you just want to see what the Dice calculator does:

### Option 1: Test with Sample Data
```bash
cd dice/

# Create a simple test mask
python3 -c "
import cv2
import numpy as np

# Create sample image and masks
img = np.random.randint(0, 255, (512, 512), dtype=np.uint8)
mask1 = np.zeros((512, 512), dtype=np.uint8)
mask2 = np.zeros((512, 512), dtype=np.uint8)

# Draw circles as 'consolidations'
cv2.circle(mask1, (250, 250), 80, 255, -1)
cv2.circle(mask2, (260, 260), 75, 255, -1)  # Slightly different

# Save
cv2.imwrite('test_image.jpg', img)
cv2.imwrite('test_gt.png', mask1)
cv2.imwrite('test_pred.png', mask2)

print('Test files created!')
"
```

### Option 2: Use Your Own Data
If you already have:
- Chest X-ray image (JPG/PNG)
- Ground truth mask (JPG/PNG)
- Predicted mask (JPG/PNG)

Skip to the "Launch App" section below.

---

## Full Setup (Recommended)

### 1. Install Dependencies (2 minutes)
```bash
cd dice/
pip install -r requirements.txt
```

### 2. Launch the Dice Calculator App (30 seconds)
```bash
streamlit run dice_calculator_app.py
```

The app will open in your browser at `http://localhost:8501`

### 3. Upload Your Images
1. **Original Image**: Upload your chest X-ray
2. **Ground Truth Mask**: Upload the expert annotation
3. **Predicted Mask**: Upload the model/algorithm output

You'll instantly see:
- ✅ Dice coefficient
- ✅ IoU score
- ✅ Precision & Recall
- ✅ Visual overlays (green=GT, red=prediction, yellow=overlap)

---

## Common Use Cases

### Use Case 1: "I want to enhance my X-rays first"
```bash
# Single image
python preprocessing_consolidation.py \
    --input ../data/Pacientes/7035909/7035909_20240326.jpg \
    --output enhanced_xray.jpg

# All images in a folder
python preprocessing_consolidation.py \
    --input ../data/Pacientes/ \
    --output ./enhanced_images/ \
    --batch
```

### Use Case 2: "I want to compare two annotators"
```bash
# Launch app
streamlit run dice_calculator_app.py

# In the app:
# 1. Upload X-ray as "Original Image"
# 2. Upload Annotator 1's mask as "Ground Truth"
# 3. Upload Annotator 2's mask as "Prediction"
#
# Dice > 0.80 = Good agreement
```

### Use Case 3: "I want automatic segmentation with SAM"
```bash
# First, download SAM checkpoint:
# wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

# Interactive mode (click on consolidation)
python sam_integration.py \
    --checkpoint sam_vit_h_4b8939.pth \
    --image chest_xray.jpg \
    --mode interactive

# Automatic batch processing
python sam_integration.py \
    --checkpoint sam_vit_h_4b8939.pth \
    --input_dir ../data/Pacientes/ \
    --output_dir ./sam_masks/ \
    --mode auto
```

### Use Case 4: "I want to calculate Dice for many images"
```bash
# Use the Streamlit app's "Batch Processing" tab
# Upload two ZIP files:
# 1. images.zip (contains all X-rays)
# 2. masks.zip (contains corresponding masks)
#
# The app will process all pairs and generate a CSV report
```
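The guide doesn't spell out how the app pairs images with masks, but pairing by shared file stem is the usual convention. A minimal stdlib sketch of that idea (the `pair_by_stem` helper is hypothetical, not the app's code):

```python
from pathlib import PurePath

def pair_by_stem(image_names, mask_names):
    """Match each image to the mask sharing its file stem
    (e.g. '7035909.jpg' <-> '7035909.png').
    Returns (matched pairs, images with no mask)."""
    masks = {PurePath(m).stem: m for m in mask_names}
    pairs, unmatched = [], []
    for img in image_names:
        stem = PurePath(img).stem
        if stem in masks:
            pairs.append((img, masks[stem]))
        else:
            unmatched.append(img)
    return pairs, unmatched

pairs, missing = pair_by_stem(["a.jpg", "b.jpg"], ["a.png"])
print(pairs, missing)  # -> [('a.jpg', 'a.png')] ['b.jpg']
```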
---

## Interpreting Results

### Dice Coefficient
| Score | Interpretation |
|-------|---------------|
| > 0.85 | **Excellent** - Ready for publication |
| 0.70 - 0.85 | **Good** - Acceptable for fuzzy boundaries |
| 0.50 - 0.70 | **Fair** - Needs review, may need re-annotation |
| < 0.50 | **Poor** - Significant disagreement |

### Visual Overlay Colors
- 🟢 **Green**: Ground truth only (missed by prediction)
- 🔴 **Red**: Prediction only (false positive)
- 🟡 **Yellow**: Overlap (correct prediction) ✅

**Goal**: Maximize yellow, minimize green and red
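The color scheme above can be sketched in a few lines of NumPy (this is an illustration of the scheme, not the app's exact rendering code):

```python
import numpy as np

def overlay_colors(gt: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Build an RGB overlay from two boolean masks:
    green = GT only, red = prediction only, yellow = overlap."""
    h, w = gt.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[gt & ~pred] = (0, 255, 0)    # green: missed by prediction
    rgb[pred & ~gt] = (255, 0, 0)    # red: false positive
    rgb[gt & pred] = (255, 255, 0)   # yellow: correct overlap
    return rgb

gt = np.zeros((4, 4), bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), bool); pred[1:3, 2:4] = True
out = overlay_colors(gt, pred)  # pixel (1, 2) lies in both masks -> yellow
```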
---

## Troubleshooting

### "ModuleNotFoundError: No module named 'streamlit'"
```bash
pip install streamlit
```

### "App won't start"
```bash
# Check Python version (need 3.8+)
python --version

# Reinstall dependencies
pip install --upgrade -r requirements.txt

# Try with full path
python -m streamlit run dice_calculator_app.py
```

### "Images are different sizes"
Don't worry! The app automatically resizes masks to match the image.

### "SAM is too slow"
- Use a smaller model: `--model_type vit_b` instead of `vit_h`
- Use a GPU if available (automatically detected)
- Process images in smaller batches

---

## Next Steps After Quick Start

1. ✅ **You've tested the app** → Read [example_usage.md](example_usage.md)
2. ✅ **You've calculated some Dice scores** → Check [TODO.md](TODO.md) for the project roadmap
3. ✅ **You're ready to annotate** → See the annotation guidelines in [README.md](README.md)
4. ✅ **You want to train ML** → Use annotations as training data

---

## One-Line Commands for Copy-Paste

```bash
# Install everything
cd dice && pip install -r requirements.txt

# Launch app
streamlit run dice_calculator_app.py

# Enhance single image
python preprocessing_consolidation.py --input xray.jpg --output enhanced.jpg

# Batch enhance
python preprocessing_consolidation.py --input ../data/Pacientes/ --output ./enhanced/ --batch

# Test with sample data
python3 -c "import cv2; import numpy as np; img=np.random.randint(0,255,(512,512),dtype=np.uint8); m1=np.zeros((512,512),dtype=np.uint8); m2=np.zeros((512,512),dtype=np.uint8); cv2.circle(m1,(250,250),80,255,-1); cv2.circle(m2,(260,260),75,255,-1); cv2.imwrite('test_image.jpg',img); cv2.imwrite('test_gt.png',m1); cv2.imwrite('test_pred.png',m2); print('Test files created!')"
```

---

## Support

- 📖 **Full Documentation**: [README.md](README.md)
- 📚 **Examples**: [example_usage.md](example_usage.md)
- ✅ **Project Plan**: [TODO.md](TODO.md)
- 💬 **Questions**: Open an issue or contact the project lead

**Ready to start? Run this now:**
```bash
streamlit run dice_calculator_app.py
```
src/README.md
DELETED
@@ -1,291 +0,0 @@
# Pneumonia Consolidation Segmentation Tools 🫁

A comprehensive toolkit for segmenting pneumonia consolidation in chest X-rays using machine learning, with tools for preprocessing, annotation, and validation using Dice coefficient metrics.

## 🚀 Features

### 1. **Dice Score Calculator (Streamlit App)**
- Interactive web interface for calculating segmentation metrics
- Compare ground truth vs predicted masks
- Metrics: Dice coefficient, IoU, Precision, Recall, F1, Hausdorff distance
- Visual overlays with color-coded comparisons
- Batch processing support
- Built-in annotation guidelines

### 2. **Image Preprocessing**
- CLAHE enhancement for local contrast
- Sharpening filters to reveal air bronchograms
- Edge enhancement for consolidation boundaries
- Batch processing capabilities

### 3. **SAM Integration**
- Automatic mask generation using the Segment Anything Model
- Interactive point-based segmentation
- Bounding box prompts
- Batch processing support

## 🏁 Quick Start

### Installation

```bash
# Clone or download this repository
cd dice/

# Install dependencies
pip install -r requirements.txt

# Optional: For SAM integration
# pip install segment-anything torch torchvision
# Download SAM checkpoint from: https://github.com/facebookresearch/segment-anything
```

### Running the Dice Calculator App

```bash
streamlit run dice_calculator_app.py
```

The app will open in your browser at `http://localhost:8501`

## 📖 Usage Guide

### 1. Preprocessing Images

Enhance chest X-rays to better visualize consolidations:

```bash
# Single image
python preprocessing_consolidation.py \
    --input /path/to/image.jpg \
    --output /path/to/enhanced.jpg

# Batch processing
python preprocessing_consolidation.py \
    --input /path/to/images/ \
    --output /path/to/enhanced/ \
    --batch \
    --extension .jpg
```

### 2. Calculating Dice Scores

#### Using the Streamlit App:
1. Start the app: `streamlit run dice_calculator_app.py`
2. Upload your chest X-ray, ground truth mask, and predicted mask
3. View metrics and visualizations instantly
4. Download results as CSV or images

#### Programmatic Usage:

```python
import cv2
from dice_calculator_app import calculate_dice_coefficient

# Load masks
ground_truth = cv2.imread('ground_truth_mask.png', cv2.IMREAD_GRAYSCALE)
prediction = cv2.imread('predicted_mask.png', cv2.IMREAD_GRAYSCALE)

# Calculate Dice
dice = calculate_dice_coefficient(ground_truth, prediction)
print(f"Dice Coefficient: {dice:.4f}")
```

### 3. Using SAM for Automatic Segmentation

First, download a SAM checkpoint:
- [ViT-H (Huge)](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth) - Most accurate
- [ViT-L (Large)](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) - Balanced
- [ViT-B (Base)](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth) - Fastest

```bash
# Interactive mode (click points to guide segmentation)
python sam_integration.py \
    --checkpoint sam_vit_h_4b8939.pth \
    --image chest_xray.jpg \
    --mode interactive

# Automatic batch processing
python sam_integration.py \
    --checkpoint sam_vit_h_4b8939.pth \
    --input_dir /path/to/images/ \
    --output_dir /path/to/masks/ \
    --mode auto
```

## 📊 Understanding the Metrics

### Dice Coefficient (Main Metric)
- **Range**: 0 (no overlap) to 1 (perfect overlap)
- **Formula**: `2 × |A ∩ B| / (|A| + |B|)`
- **Interpretation**:
  - **> 0.85**: Excellent segmentation
  - **0.70-0.85**: Good (acceptable for fuzzy consolidation borders)
  - **< 0.70**: Needs review

### IoU (Jaccard Index)
- **Range**: 0 to 1
- **Formula**: `|A ∩ B| / |A ∪ B|`
- More strict than the Dice coefficient

### Precision & Recall
- **Precision**: How many predicted pixels are correct
- **Recall**: How many actual consolidation pixels were found

### Hausdorff Distance
- Measures the maximum distance between mask boundaries
- Lower is better (masks are closer)
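The overlap metrics above reduce to a few NumPy operations. A reference sketch, assuming binary masks (this is not `dice_calculator_app.py`'s implementation, just the formulas written out):

```python
import numpy as np

def seg_metrics(gt: np.ndarray, pred: np.ndarray, eps: float = 1e-7) -> dict:
    """Dice, IoU, precision and recall for two binary masks,
    following the formulas above. eps guards against empty masks."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()            # |A ∩ B|
    dice = 2 * tp / (gt.sum() + pred.sum() + eps)  # 2|A∩B| / (|A|+|B|)
    iou = tp / (np.logical_or(gt, pred).sum() + eps)
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return {"dice": dice, "iou": iou, "precision": precision, "recall": recall}

a = np.zeros((4, 4)); a[:2, :] = 1   # 8 pixels
b = np.zeros((4, 4)); b[1:3, :] = 1  # 8 pixels, 4 overlapping
m = seg_metrics(a, b)
print(round(m["dice"], 3), round(m["iou"], 3))  # 0.5 0.333
```

Note that Dice and IoU are monotonically related, `IoU = Dice / (2 − Dice)`, which is why IoU always reads stricter for the same pair of masks.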
## 🎯 Annotation Guidelines

### Key Radiologic Signs

#### 1. **Air Bronchograms** ⭐
- Dark, branching tubes inside white consolidation
- **100% diagnostic** for pneumonia
- Include the entire surrounding region in the mask

#### 2. **Silhouette Sign**
- Heart or diaphragm border "disappears" into the white area
- Include the boundary in the segmentation

#### 3. **Border Characteristics**
- Fuzzy, poorly defined edges
- Blend into surrounding tissue
- Use enhanced preprocessing to see them better

### Best Practices

✅ **DO:**
- Trace through ribs mentally
- Include full air bronchogram regions
- Use preprocessing to see subtle borders
- Label different types: solid, ground-glass, air bronchograms

❌ **DON'T:**
- Include ribs in masks
- Over-segment into normal lung
- Miss subtle ground-glass opacities

## 📁 Project Structure

```
dice/
├── dice_calculator_app.py          # Main Streamlit application
├── preprocessing_consolidation.py  # Image enhancement tools
├── sam_integration.py              # SAM integration
├── requirements.txt                # Python dependencies
├── README.md                       # This file
├── annotations/                    # Store annotation masks (create)
│   ├── ground_truth/
│   └── predictions/
├── enhanced_images/                # Preprocessed images (create)
└── results/                        # Dice scores and reports (create)
```
|
| 185 |
-
|
| 186 |
-
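The folders marked `(create)` in the tree are not tracked in the repo; one way to create them from inside `dice/`:

```shell
# Create the working folders the project structure marks "(create)"
mkdir -p annotations/ground_truth annotations/predictions enhanced_images results
ls -d annotations/ground_truth annotations/predictions enhanced_images results
```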
## π§ Advanced Configuration

### Streamlit App Settings

In the sidebar:
- **Overlay Transparency**: Adjust visualization opacity
- **Calculate Hausdorff Distance**: Enable for boundary distance metrics (slower)

### Preprocessing Parameters

Edit `preprocessing_consolidation.py` to adjust:
```python
clahe = cv2.createCLAHE(
    clipLimit=3.0,         # Increase for more contrast
    tileGridSize=(8, 8)    # Smaller tiles = more local enhancement
)
```

### SAM Parameters

In `sam_integration.py`, adjust:
```python
# Confidence threshold for automatic detection
if scores[0] > 0.8:  # Lower = more permissive

# Grid density for automatic sampling
grid_size = 5  # Increase for finer sampling
```
## π¬ Workflow Recommendations

### For Manual Annotation:
1. **Preprocess** images with `preprocessing_consolidation.py`
2. **Annotate** in CVAT or Label Studio
3. **Validate** with the Dice calculator app
4. **Iterate** until Dice > 0.80

### For ML Training:
1. **Generate initial masks** with SAM
2. **Refine manually** in the annotation tool
3. **Calculate metrics** to ensure quality
4. **Use as training data** for your model

### For Validation Study:
1. Have multiple annotators segment the same images
2. Compare annotations using the Dice calculator
3. Calculate inter-rater agreement
4. Establish ground truth consensus
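For the validation-study workflow, a simple inter-rater agreement score is the mean pairwise Dice over all annotator pairs. A minimal sketch with toy masks and illustrative annotator names:

```python
import itertools
import numpy as np

def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 when both masks are empty."""
    a, b = (a > 0), (b > 0)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

masks = {  # one binary mask per annotator (toy 1-D examples)
    "rad1": np.array([0, 1, 1, 1, 0]),
    "rad2": np.array([0, 1, 1, 0, 0]),
    "rad3": np.array([0, 0, 1, 1, 0]),
}

# Dice for every unordered pair of annotators
pairs = {(a, b): dice(masks[a], masks[b])
         for a, b in itertools.combinations(masks, 2)}
agreement = float(np.mean(list(pairs.values())))
print(pairs)
print(round(agreement, 3))  # 0.7
```

Reporting the full pairwise table alongside the mean makes it easy to spot the annotator who disagrees most with the others.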
## π References

### Tools & Models
- [CVAT](https://github.com/opencv/cvat) - Computer Vision Annotation Tool
- [SAM](https://github.com/facebookresearch/segment-anything) - Segment Anything Model
- [Streamlit](https://streamlit.io/) - Web app framework

### Medical Context
- Air bronchograms: air-filled bronchi visible against consolidated lung
- Silhouette sign: loss of normal boundaries due to an adjacent opacity
- Consolidation: filling of the air spaces with fluid/exudate in pneumonia
## π Troubleshooting

### App won't start
```bash
# Check Streamlit installation
pip install --upgrade streamlit

# Run with verbose logging
streamlit run dice_calculator_app.py --logger.level=debug
```

### SAM errors
```bash
# Ensure PyTorch is installed
pip install torch torchvision

# Download the correct checkpoint for the model type:
# vit_h, vit_l, or vit_b must match the checkpoint
```

### Image size issues
Images are automatically resized to match. For best results:
- Use the same resolution for all images in a study
- Minimum 512x512 recommended
- Maximum 2048x2048 for performance
## π License

This project is for research and educational purposes related to pneumonia diagnosis and medical image segmentation.

## π€ Contributing

Contributions are welcome! Areas for improvement:
- Additional metrics (Surface Dice, Boundary IoU)
- 3D visualization support
- Integration with DICOM files
- Multi-class segmentation support

## π§ Contact

For questions or issues, please open an issue in the repository.

---

**Note**: This tool is for research purposes. Always validate with clinical experts and follow appropriate medical imaging guidelines.
src/TODO.md
DELETED

@@ -1,181 +0,0 @@

# π TODO List: Pneumonia Consolidation Segmentation Project

## ✅ Completed

- [x] Analyze project structure and patient data format
- [x] Create preprocessing script for consolidation enhancement
- [x] Build Streamlit app for Dice score calculation
- [x] Implement SAM integration for automatic segmentation
- [x] Create requirements.txt and documentation
- [x] Set up folder structure for annotations and results

## π Next Steps (In Order)

### Phase 1: Setup & Data Preparation (Week 1)

1. **Install Dependencies**
   - [ ] Run `pip install -r requirements.txt`
   - [ ] Test the Streamlit app: `streamlit run dice_calculator_app.py`
   - [ ] (Optional) Download the SAM checkpoint for automatic segmentation

2. **Preprocess Patient Images**
   - [ ] Enhance all chest X-rays in the `data/Pacientes/` folder
   - [ ] Save enhanced images to `dice/enhanced_images/`
   - [ ] Review enhanced images for quality
   - [ ] Document any images with poor quality

3. **Set Up Annotation Tool**
   - [ ] Install CVAT (recommended) or Label Studio
   - [ ] Import enhanced images into the annotation tool
   - [ ] Create annotation classes: "consolidation", "ground_glass", "air_bronchogram"
   - [ ] Set up an annotation guidelines document for the team

### Phase 2: Annotation (Weeks 2-4)

4. **Create Ground Truth Annotations**
   - [ ] Have 2-3 radiologists independently annotate the same 20 images (pilot)
   - [ ] Calculate inter-rater agreement using Dice scores
   - [ ] Resolve disagreements through a consensus meeting
   - [ ] Annotate the remaining images (aim for 100+ cases)
   - [ ] Save masks to `dice/annotations/ground_truth/`

5. **Quality Control**
   - [ ] Use the Dice calculator app to validate annotation consistency
   - [ ] Flag cases with unclear consolidation boundaries
   - [ ] Re-annotate cases with Dice < 0.70 between annotators
   - [ ] Document difficult cases and edge cases

### Phase 3: SAM Integration (Week 5)

6. **Test SAM for Automatic Segmentation**
   - [ ] Download the SAM checkpoint (ViT-H recommended)
   - [ ] Test SAM on 10 sample images
   - [ ] Compare SAM predictions vs. ground truth
   - [ ] Adjust SAM parameters for best results
   - [ ] Document SAM performance metrics

7. **Generate Initial Predictions**
   - [ ] Use SAM to generate masks for all images
   - [ ] Save them to `dice/annotations/predictions/`
   - [ ] Calculate Dice scores against the ground truth
   - [ ] Identify patterns in SAM failures

### Phase 4: Analysis & Validation (Week 6)

8. **Calculate Comprehensive Metrics**
   - [ ] Run a batch Dice calculation on all mask pairs
   - [ ] Generate statistical reports (mean, std, distribution)
   - [ ] Create visualizations (overlays, comparison grids)
   - [ ] Save results to `dice/results/`
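The aggregation in step 8 reduces to collecting one Dice score per case and summarizing the distribution; `batch_dice` below is an illustrative helper, not part of the repo:

```python
import numpy as np

def batch_dice(scores):
    """Summary statistics for a list of per-case Dice scores."""
    s = np.asarray(scores, dtype=float)
    return {"n": int(s.size), "mean": float(s.mean()),
            "std": float(s.std()), "min": float(s.min()), "max": float(s.max())}

# Toy per-case scores standing in for the real batch run
print(batch_dice([0.91, 0.84, 0.77, 0.62]))
```

The same dictionary can be dumped to `dice/results/` as JSON or a CSV row per batch.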
9. **Quality Assessment**
   - [ ] Categorize segmentations: Excellent (>0.85), Good (0.70-0.85), Needs Review (<0.70)
   - [ ] Calculate additional metrics: IoU, precision, recall, Hausdorff distance
   - [ ] Generate a quality control report
   - [ ] Document failure modes and edge cases

### Phase 5: ML Model Development (Weeks 7-10)

10. **Train Segmentation Model**
    - [ ] Split data: 70% train, 15% validation, 15% test
    - [ ] Choose an architecture: U-Net, Attention U-Net, or nnU-Net
    - [ ] Implement a data augmentation pipeline
    - [ ] Train the model on the ground truth annotations
    - [ ] Monitor validation Dice during training
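The 70/15/15 split in step 10 is best done per patient rather than per image, so no patient leaks across splits. A minimal sketch with illustrative patient IDs:

```python
import random

patients = [f"p{i:03d}" for i in range(100)]  # illustrative patient IDs
rng = random.Random(42)                       # fixed seed for reproducibility
rng.shuffle(patients)

n = len(patients)
train = patients[: int(0.70 * n)]
val = patients[int(0.70 * n): int(0.85 * n)]
test = patients[int(0.85 * n):]
print(len(train), len(val), len(test))  # 70 15 15
```

All images belonging to a patient then follow that patient's split.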
11. **Model Evaluation**
    - [ ] Test on the held-out test set
    - [ ] Calculate Dice, IoU, and clinical metrics
    - [ ] Compare to the SAM baseline
    - [ ] Generate prediction visualizations
    - [ ] Save model checkpoints

### Phase 6: Clinical Validation (Weeks 11-12)

12. **Expert Review**
    - [ ] Have radiologists review model predictions
    - [ ] Collect feedback on clinically acceptable performance
    - [ ] Test on an external validation set (if available)
    - [ ] Document cases where the model fails

13. **Final Report**
    - [ ] Compile all metrics and visualizations
    - [ ] Write a methods section describing the workflow
    - [ ] Create supplemental figures
    - [ ] Prepare a manuscript or technical report

## π§ Technical Debt & Improvements

### High Priority
- [ ] Add DICOM file support (many medical images are DICOM)
- [ ] Implement multi-class segmentation (consolidation types)
- [ ] Add data versioning (DVC or similar)
- [ ] Create an automated testing suite

### Medium Priority
- [ ] Add boundary-based metrics (Surface Dice, Normalized Surface Distance)
- [ ] Implement an active learning workflow
- [ ] Add export to COCO format for model training
- [ ] Create a Docker container for reproducibility

### Low Priority
- [ ] Add 3D visualization support
- [ ] Implement a web-based annotation tool
- [ ] Add integration with PACS systems
- [ ] Create a mobile app for review

## π Success Metrics

### Annotation Phase
- **Target**: 100+ annotated cases
- **Quality**: Mean inter-rater Dice > 0.80
- **Efficiency**: < 5 minutes per case

### ML Model Phase
- **Performance**: Mean Dice > 0.75 on the test set
- **Comparison**: Better than the SAM baseline
- **Clinical**: 90% of predictions acceptable to radiologists

### Publication
- **Timeline**: Submit manuscript within 6 months
- **Target**: Radiology, European Radiology, or similar
- **Impact**: Tool shared publicly for research use

## π Known Issues

- [ ] Large images (>2048x2048) may cause memory issues in the Streamlit app
- [ ] SAM requires significant GPU memory (12 GB+ recommended)
- [ ] Batch processing doesn't support progress resumption
- [ ] Hausdorff distance calculation is slow for large masks
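One mitigation for the slow Hausdorff computation flagged above is to subsample mask coordinates before calling SciPy's `directed_hausdorff`. The result is an approximation whose accuracy is controlled by `max_points` (an assumed parameter name for this sketch):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def approx_hausdorff(mask1, mask2, max_points=1000, seed=0):
    """Approximate symmetric Hausdorff distance on subsampled mask pixels."""
    rng = np.random.default_rng(seed)
    pts = []
    for m in (mask1, mask2):
        c = np.argwhere(m > 0)
        if len(c) > max_points:  # subsample only large masks
            c = c[rng.choice(len(c), max_points, replace=False)]
        pts.append(c)
    a, b = pts
    if len(a) == 0 or len(b) == 0:
        return float("inf")
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

m1 = np.zeros((64, 64)); m1[10:20, 10:20] = 1
m2 = np.zeros((64, 64)); m2[12:22, 12:22] = 1
print(approx_hausdorff(m1, m2))  # 2.828... (exact here: both masks < 1000 px)
```

Small masks are left intact, so the approximation only kicks in where the exact computation would be slow.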
## π Learning Resources Needed

- [ ] CVAT tutorial videos for the team
- [ ] Radiologic signs of pneumonia refresher
- [ ] SAM usage best practices
- [ ] Medical image segmentation literature review
- [ ] Dice coefficient vs. IoU interpretation

## π€ Team Assignments

- **Radiologist 1**: Lead annotator, quality control
- **Radiologist 2**: Second annotator, validation
- **ML Engineer**: Preprocessing, model development
- **Data Manager**: File organization, data versioning
- **Project Lead**: Coordination, reporting

## π Timeline Summary

- **Week 1**: Setup and preprocessing
- **Weeks 2-4**: Ground truth annotation
- **Week 5**: SAM integration and testing
- **Week 6**: Metrics and analysis
- **Weeks 7-10**: ML model development
- **Weeks 11-12**: Clinical validation
- **Months 4-6**: Manuscript preparation

---

**Last Updated**: February 6, 2026
**Project Status**: Phase 1 - Setup Complete
**Next Action**: Install dependencies and test the Streamlit app
src/check_mask.py
DELETED

@@ -1,53 +0,0 @@

#!/usr/bin/env python3
"""Analyze a saved annotation mask to verify it is suitable for area calculation."""

import cv2
import numpy as np
from pathlib import Path

mask_path = Path("/Users/alejo/Library/CloudStorage/OneDrive-HospitalAlmaMáter/Validación Humath/data/Pacientes/7035909/7035909_20240326_mask.png")
orig_path = Path("/Users/alejo/Library/CloudStorage/OneDrive-HospitalAlmaMáter/Validación Humath/data/Pacientes/7035909/7035909_20240326.jpg")

# Load mask and original image
mask = cv2.imread(str(mask_path), cv2.IMREAD_UNCHANGED)
orig = cv2.imread(str(orig_path))

print("=" * 50)
print("MASK ANALYSIS")
print("=" * 50)
print(f"Mask shape: {mask.shape}")
print(f"Mask dtype: {mask.dtype}")
print(f"Mask min: {mask.min()}, max: {mask.max()}")
print(f"Unique values (first 10): {np.unique(mask)[:10]}")

print("\n" + "=" * 50)
print("ORIGINAL IMAGE")
print("=" * 50)
print(f"Original shape: {orig.shape}")
print(f"Dimensions match: {mask.shape[:2] == orig.shape[:2]}")

print("\n" + "=" * 50)
print("AREA STATISTICS")
print("=" * 50)
total_pixels = mask.shape[0] * mask.shape[1]
annotated_pixels = int(np.sum(mask > 0))
area_percent = (annotated_pixels / total_pixels) * 100

print(f"Image dimensions: {mask.shape[1]} x {mask.shape[0]} px")
print(f"Total pixels: {total_pixels:,}")
print(f"Annotated pixels: {annotated_pixels:,}")
print(f"Annotated area: {area_percent:.2f}%")

print("\n" + "=" * 50)
print("VERDICT")
print("=" * 50)
if mask.shape[:2] == orig.shape[:2] and annotated_pixels > 0:
    print("✅ Mask is CORRECT for area calculation!")
    print("   - Resized to original image dimensions")
    print("   - Contains valid annotation data")
else:
    print("❌ Issues found:")
    if mask.shape[:2] != orig.shape[:2]:
        print("   - Mask dimensions don't match the original")
    if annotated_pixels == 0:
        print("   - No annotated pixels found")
src/dice_calculator_app.py
DELETED

@@ -1,692 +0,0 @@

"""
Streamlit App for Pneumonia Consolidation Ground Truth Annotation

This app allows radiologists to:
1. Load and view chest X-ray images with enhancement
2. Create segmentation masks using drawing tools
3. Save ground truth annotations
4. Compare annotations between radiologists (inter-rater agreement)
5. Export annotations for ML training
"""

import streamlit as st
import cv2
import numpy as np
from PIL import Image
import pandas as pd
from pathlib import Path
import io
import zipfile
import json
from datetime import datetime
from streamlit_drawable_canvas import st_canvas
def calculate_dice_coefficient(mask1, mask2):
    """
    Calculate the Dice coefficient between two binary masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|)

    Args:
        mask1: First binary mask (numpy array)
        mask2: Second binary mask (numpy array)

    Returns:
        Dice coefficient (float between 0 and 1)
    """
    # Ensure masks are binary
    mask1 = (mask1 > 0).astype(np.uint8)
    mask2 = (mask2 > 0).astype(np.uint8)

    # Calculate intersection and per-mask sums
    intersection = np.sum(mask1 * mask2)
    mask1_sum = np.sum(mask1)
    mask2_sum = np.sum(mask2)

    # Handle the edge case where both masks are empty
    if mask1_sum + mask2_sum == 0:
        return 1.0

    dice = (2.0 * intersection) / (mask1_sum + mask2_sum)
    return dice


def calculate_iou(mask1, mask2):
    """
    Calculate Intersection over Union (IoU / Jaccard index).

    IoU = |A ∩ B| / |A ∪ B|
    """
    mask1 = (mask1 > 0).astype(np.uint8)
    mask2 = (mask2 > 0).astype(np.uint8)

    intersection = np.sum(mask1 * mask2)
    union = np.sum(mask1) + np.sum(mask2) - intersection

    if union == 0:
        return 1.0

    return intersection / union


def calculate_precision_recall(ground_truth, prediction):
    """
    Calculate precision and recall for segmentation.

    Precision = TP / (TP + FP)
    Recall    = TP / (TP + FN)
    """
    gt = (ground_truth > 0).astype(np.uint8)
    pred = (prediction > 0).astype(np.uint8)

    tp = np.sum(gt * pred)
    fp = np.sum((1 - gt) * pred)
    fn = np.sum(gt * (1 - pred))

    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0

    return precision, recall
def calculate_hausdorff_distance(mask1, mask2):
    """
    Calculate the Hausdorff distance between two masks.
    Measures the maximum distance from a point in one set to the closest point in the other set.
    """
    from scipy.spatial.distance import directed_hausdorff

    # Get coordinates of mask pixels
    coords1 = np.argwhere(mask1 > 0)
    coords2 = np.argwhere(mask2 > 0)

    if len(coords1) == 0 or len(coords2) == 0:
        return float('inf')

    # Calculate both directed Hausdorff distances
    d1 = directed_hausdorff(coords1, coords2)[0]
    d2 = directed_hausdorff(coords2, coords1)[0]

    # The symmetric Hausdorff distance is the maximum of the two
    return max(d1, d2)
def create_overlay_visualization(image, ground_truth, prediction, alpha=0.5):
    """
    Create a visualization with ground truth (green) and prediction (red) overlaid.
    Overlap areas appear in yellow.
    """
    # Convert to RGB if grayscale
    if len(image.shape) == 2:
        image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)

    # Normalize the image to the 0-255 range
    if image.dtype != np.uint8:
        image = ((image - image.min()) / (image.max() - image.min()) * 255).astype(np.uint8)

    # Create colored masks
    overlay = image.copy()

    # Ground truth in green
    gt_mask = (ground_truth > 0)
    overlay[gt_mask] = overlay[gt_mask] * (1 - alpha) + np.array([0, 255, 0]) * alpha

    # Prediction in red
    pred_mask = (prediction > 0)
    overlay[pred_mask] = overlay[pred_mask] * (1 - alpha) + np.array([255, 0, 0]) * alpha

    # Overlap in yellow (green + red)
    overlap = gt_mask & pred_mask
    overlay[overlap] = overlay[overlap] * (1 - alpha) + np.array([255, 255, 0]) * alpha

    return overlay.astype(np.uint8)


def create_comparison_grid(image, ground_truth, prediction, overlay):
    """
    Create a 2x2 grid showing all four visualizations.
    """
    # Ensure all images are the same size and RGB
    h, w = image.shape[:2]

    # Convert masks to RGB
    gt_rgb = cv2.cvtColor((ground_truth * 255).astype(np.uint8), cv2.COLOR_GRAY2RGB)
    pred_rgb = cv2.cvtColor((prediction * 255).astype(np.uint8), cv2.COLOR_GRAY2RGB)

    if len(image.shape) == 2:
        image_rgb = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
    else:
        image_rgb = image

    # Assemble the grid
    top_row = np.hstack([image_rgb, gt_rgb])
    bottom_row = np.hstack([pred_rgb, overlay])
    grid = np.vstack([top_row, bottom_row])

    # Add labels
    font = cv2.FONT_HERSHEY_SIMPLEX
    font_scale = 0.7
    thickness = 2
    color = (255, 255, 255)

    cv2.putText(grid, 'Original', (10, 30), font, font_scale, color, thickness)
    cv2.putText(grid, 'Ground Truth', (w + 10, 30), font, font_scale, color, thickness)
    cv2.putText(grid, 'Prediction', (10, h + 30), font, font_scale, color, thickness)
    cv2.putText(grid, 'Overlay', (w + 10, h + 30), font, font_scale, color, thickness)

    return grid
def load_image(uploaded_file):
    """Load and convert an uploaded image to a numpy array."""
    image = Image.open(uploaded_file)
    return np.array(image)


def load_and_enhance_image(uploaded_file, enhance=True):
    """Load an image and optionally apply enhancement for better visualization."""
    image = Image.open(uploaded_file)
    img_array = np.array(image)

    # Convert to grayscale if needed
    if len(img_array.shape) == 3:
        img_gray = cv2.cvtColor(img_array, cv2.COLOR_RGB2GRAY)
    else:
        img_gray = img_array

    if enhance:
        # Apply CLAHE enhancement
        clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(img_gray)

        # Sharpening
        kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
        enhanced = cv2.filter2D(enhanced, -1, kernel)

        return enhanced

    return img_gray
def save_annotation(image_name, mask, annotator_name, notes=""):
    """Save an annotation together with its metadata."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

    # Create the annotations directory
    annotations_dir = Path("annotations/ground_truth")
    annotations_dir.mkdir(parents=True, exist_ok=True)

    # Save the mask
    mask_filename = f"{Path(image_name).stem}_{annotator_name}_{timestamp}.png"
    mask_path = annotations_dir / mask_filename
    cv2.imwrite(str(mask_path), mask)

    # Save the metadata
    metadata = {
        "image_name": image_name,
        "annotator": annotator_name,
        "timestamp": timestamp,
        "notes": notes,
        "mask_file": mask_filename
    }

    metadata_path = annotations_dir / f"{Path(image_name).stem}_{annotator_name}_{timestamp}.json"
    with open(metadata_path, 'w') as f:
        json.dump(metadata, f, indent=2)

    return mask_path, metadata_path


def load_existing_annotations(image_name):
    """Load all existing annotations for an image."""
    annotations_dir = Path("annotations/ground_truth")
    if not annotations_dir.exists():
        return []

    image_stem = Path(image_name).stem
    annotations = []

    for mask_file in annotations_dir.glob(f"{image_stem}_*.png"):
        # Find the corresponding metadata
        json_file = mask_file.with_suffix('.json')
        if json_file.exists():
            with open(json_file, 'r') as f:
                metadata = json.load(f)
            mask = cv2.imread(str(mask_file), cv2.IMREAD_GRAYSCALE)
            annotations.append({
                'mask': mask,
                'metadata': metadata,
                'file': mask_file
            })

    return annotations
def main():
    st.set_page_config(
        page_title="Ground Truth Annotation Tool - Pneumonia",
        page_icon="π«",
        layout="wide"
    )

    st.title("π« Pneumonia Consolidation Ground Truth Annotation Tool")
    st.markdown("""
    ### For Radiologists - Create Expert Annotations

    This tool helps you create high-quality ground truth segmentation masks for pneumonia consolidation.

    **Workflow:**
    1. π Load a chest X-ray image
    2. π¨ Draw consolidation masks using the annotation tools
    3. πΎ Save your annotation with notes
    4. π Compare with other radiologists' annotations (optional)
       - π’ Green: Ground Truth
       - π΄ Red: Prediction
       - π‘ Yellow: Overlap (correct prediction)
    """)

    # Sidebar for settings
    st.sidebar.header("π€ Annotator Information")
    annotator_name = st.sidebar.text_input("Your Name/ID", value="Radiologist1",
                                           help="Enter your name or ID for tracking")

    st.sidebar.header("π¨ Annotation Settings")
    stroke_width = st.sidebar.slider("Brush Size", 1, 50, 10)
    stroke_color = st.sidebar.color_picker("Drawing Color", "#FFFF00")

    st.sidebar.header("πΌοΈ Display Settings")
    # NOTE: this block was truncated in the original diff; the two checkboxes
    # below are reconstructed so that the `enhance_image` and `show_guidelines`
    # flags used further down are defined.
    enhance_image = st.sidebar.checkbox("Enhance image (CLAHE + sharpen)", value=True)
    show_guidelines = st.sidebar.checkbox("Show annotation guidelines", value=True)

    st.header("π Create New Annotation")

    # File upload
    uploaded_file = st.file_uploader(
        "Upload Chest X-ray Image (JPG/PNG)",
        type=['jpg', 'jpeg', 'png'],
        key='xray_upload'
    )

    # NOTE: the comparison view's first two uploaders (original image and
    # ground-truth mask) were lost in the truncated diff; the columns and
    # uploaders below are reconstructed so `col3` and the guard that follows
    # are defined.
    col1, col2, col3 = st.columns(3)
    with col1:
        st.subheader("Original Image")
        original_file = st.file_uploader("Upload Original X-ray",
                                         type=['jpg', 'jpeg', 'png'], key='original')
    with col2:
        st.subheader("Ground Truth Mask")
        ground_truth_file = st.file_uploader("Upload Ground Truth Mask",
                                             type=['jpg', 'jpeg', 'png'], key='ground_truth')

    with col3:
        st.subheader("Predicted Mask")
        prediction_file = st.file_uploader(
            "Upload Prediction Mask",
            type=['jpg', 'jpeg', 'png'],
            key='prediction'
        )

    if original_file and ground_truth_file and prediction_file:
        # Load and process the image
        img_array = load_and_enhance_image(uploaded_file, enhance=enhance_image)

        # Display image info
        st.info(f"πΈ Image: {uploaded_file.name} | Size: {img_array.shape[1]}x{img_array.shape[0]} pixels")

        # Check for existing annotations
        existing_annotations = load_existing_annotations(uploaded_file.name)
        if existing_annotations:
            st.warning(f"β οΈ Found {len(existing_annotations)} existing annotation(s) for this image")
            if st.checkbox("Load previous annotation to edit"):
                selected_annotation = st.selectbox(
                    "Select annotation to load",
                    range(len(existing_annotations)),
                    format_func=lambda i: f"{existing_annotations[i]['metadata']['annotator']} - {existing_annotations[i]['metadata']['timestamp']}"
                )
                # Load the selected mask as the initial drawing
                initial_mask = existing_annotations[selected_annotation]['mask']
            else:
                initial_mask = None
        else:
            initial_mask = None

        # Annotation interface
        col1, col2 = st.columns([2, 1])

        with col1:
            st.subheader("π¨ Draw Consolidation Mask")

            if show_guidelines:
                st.info("""
                **Quick Tips:**
                - ✅ Include air bronchograms (dark tubes in a white area)
                - ✅ Trace where the heart/diaphragm border disappears
                - ❌ Don't include ribs in the mask
                - Use the eraser to refine fuzzy borders
                """)

            # Convert the image to RGB for the canvas
            img_rgb = cv2.cvtColor(img_array, cv2.COLOR_GRAY2RGB)

            # Drawing canvas
            canvas_result = st_canvas(
                fill_color="rgba(255, 255, 0, 0.3)",
                stroke_width=stroke_width,
                stroke_color=stroke_color,
                background_image=Image.fromarray(img_rgb),
                update_streamlit=True,
                height=img_array.shape[0],
                width=img_array.shape[1],
                drawing_mode="freedraw",
                initial_drawing=initial_mask,
                key="canvas",
            )

        with col2:
            st.subheader("π Annotation Details")

            # Consolidation characteristics
            st.write("**Consolidation Type:**")
            consolidation_type = st.multiselect(
                "Select all that apply",
                ["Solid Consolidation", "Ground Glass Opacity", "Air Bronchograms", "Pleural Effusion"],
                default=["Solid Consolidation"]
            )

            # Location
            location = st.selectbox(
                "Location",
                ["Right Upper Lobe", "Right Middle Lobe", "Right Lower Lobe",
                 "Left Upper Lobe", "Left Lower Lobe", "Bilateral", "Other"]
            )

            # Confidence
            confidence = st.slider("Annotation Confidence", 1, 5, 5,
                                   help="How confident are you in this segmentation?")

            # Notes
            notes = st.text_area(
                "Clinical Notes (optional)",
                placeholder="E.g., 'Silhouette sign present with right heart border'"
            )

            # Statistics
            if canvas_result.image_data is not None:
                mask = canvas_result.image_data[:, :, 3] > 0
                mask_area = np.sum(mask)
                total_area = mask.shape[0] * mask.shape[1]
                percentage = (mask_area / total_area) * 100
|
| 403 |
-
|
| 404 |
-
st.metric("Annotated Area", f"{mask_area} pxΒ²")
|
| 405 |
-
st.metric("Coverage", f"{percentage:.2f}%")
|
| 406 |
-
|
| 407 |
-
# Save annotation
|
| 408 |
-
st.divider()
|
| 409 |
-
|
| 410 |
-
col1, col2, col3 = st.columns([1, 1, 2])
|
| 411 |
-
|
| 412 |
-
with col1:
|
| 413 |
-
if st.button("πΎ Save Annotation", type="primary", use_container_width=True):
|
| 414 |
-
if canvas_result.image_data is not None:
|
| 415 |
-
# Extract mask from canvas
|
| 416 |
-
mask = canvas_result.image_data[:, :, 3]
|
| 417 |
-
|
| 418 |
-
# Create metadata
|
| 419 |
-
full_notes = {
|
| 420 |
-
"consolidation_type": consolidation_type,
|
| 421 |
-
"location": location,
|
| 422 |
-
"confidence": confidence,
|
| 423 |
-
"clinical_notes": notes
|
| 424 |
-
}
|
| 425 |
-
|
| 426 |
-
# Save
|
| 427 |
-
mask_path, metadata_path = save_annotation(
|
| 428 |
-
uploaded_file.name,
|
| 429 |
-
mask,
|
| 430 |
-
annotator_name,
|
| 431 |
-
json.dumps(full_notes)
|
| 432 |
-
)
|
| 433 |
-
|
| 434 |
-
st.success(f"β
Annotation saved!\n- Mask: {mask_path.name}\n- Metadata: {metadata_path.name}")
|
| 435 |
-
else:
|
| 436 |
-
st.error("β No annotation drawn yet!")
|
| 437 |
-
|
| 438 |
-
with col2:
|
| 439 |
-
if st.button("ποΈ Clear Canvas", use_container_width=True):
|
| 440 |
-
st.rerun()
|
| 441 |
-
|
| 442 |
-
with col3:
|
| 443 |
-
if canvas_result.image_data is not None:
|
| 444 |
-
# Download current mask
|
| 445 |
-
mask = canvas_result.image_data[:, :, 3]
|
| 446 |
-
mask_pil = Image.fromarray(mask)
|
| 447 |
-
buf = io.BytesIO()
|
| 448 |
-
mask_pil.save(buf, format='PNG')
|
| 449 |
-
|
| 450 |
-
st.download_button(
|
| 451 |
-
label="β¬οΈ Download Current Mask",
|
| 452 |
-
data=buf.getvalue(),
|
| 453 |
-
file_name=f"{Path(uploaded_file.name).stem}_mask_{annotator_name}.png",
|
| 454 |
-
mime="image/png",
|
| 455 |
-
use_container_width=True
|
| 456 |
-
)
|
| 457 |
-
|
| 458 |
-
    with tab2:
        st.header("🔄 Compare Annotations Between Radiologists")
        st.markdown("Calculate inter-rater agreement (Dice coefficient) between two radiologists' annotations of the same image.")

        col1, col2 = st.columns(2)

        with col1:
            st.subheader("Radiologist 1 Annotation")
            mask1_file = st.file_uploader("Upload Mask 1", type=['png', 'jpg'], key='compare_mask1')
            annotator1 = st.text_input("Annotator 1 Name", value="Radiologist 1")

        with col2:
            st.subheader("Radiologist 2 Annotation")
            mask2_file = st.file_uploader("Upload Mask 2", type=['png', 'jpg'], key='compare_mask2')
            annotator2 = st.text_input("Annotator 2 Name", value="Radiologist 2")
        if mask1_file and mask2_file:
            # Decode the uploaded masks as grayscale images
            # (cv2.imread cannot read in-memory buffers, so use imdecode;
            # rewind each buffer afterwards in case it is read again)
            mask1 = cv2.imdecode(np.frombuffer(mask1_file.read(), np.uint8), cv2.IMREAD_GRAYSCALE)
            mask1_file.seek(0)
            mask2 = cv2.imdecode(np.frombuffer(mask2_file.read(), np.uint8), cv2.IMREAD_GRAYSCALE)
            mask2_file.seek(0)

            # Resize if needed
            if mask1.shape != mask2.shape:
                mask2 = cv2.resize(mask2, (mask1.shape[1], mask1.shape[0]))

            # Calculate agreement metrics
            dice = calculate_dice_coefficient(mask1, mask2)
            iou = calculate_iou(mask1, mask2)
            precision, recall = calculate_precision_recall(mask1, mask2)

            # Display metrics
            st.subheader("📊 Inter-Rater Agreement")

            cols = st.columns(4)
            with cols[0]:
                st.metric("Dice Coefficient", f"{dice:.4f}")
            with cols[1]:
                st.metric("IoU", f"{iou:.4f}")
            with cols[2]:
                st.metric("Precision", f"{precision:.4f}")
            with cols[3]:
                st.metric("Recall", f"{recall:.4f}")

            # Interpretation
            if dice >= 0.80:
                st.success("✅ **Excellent Agreement** - Annotations are highly consistent")
            elif dice >= 0.70:
                st.info("ℹ️ **Good Agreement** - Acceptable consistency, may need minor discussion")
            elif dice >= 0.50:
                st.warning("⚠️ **Fair Agreement** - Significant differences, recommend consensus review")
            else:
                st.error("❌ **Poor Agreement** - Major differences, requires discussion and re-annotation")

            # Visual comparison
            st.subheader("🖼️ Visual Comparison")

            # Create overlay: Annotator1=green, Annotator2=red, Overlap=yellow
            overlay = np.zeros((mask1.shape[0], mask1.shape[1], 3), dtype=np.uint8)
            overlay[mask1 > 0] = [0, 255, 0]  # Green for annotator 1
            overlap = (mask1 > 0) & (mask2 > 0)
            overlay[mask2 > 0] = [255, 0, 0]  # Red for annotator 2
            overlay[overlap] = [255, 255, 0]  # Yellow for agreement

            st.image(overlay, caption=f"Green: {annotator1} only | Red: {annotator2} only | Yellow: Agreement", use_container_width=True)

            # Area statistics
            st.subheader("📊 Area Statistics")
            area1 = np.sum(mask1 > 0)
            area2 = np.sum(mask2 > 0)
            overlap_area = np.sum(overlap)

            data = {
                'Annotator': [annotator1, annotator2, 'Overlap'],
                'Area (pixels)': [area1, area2, overlap_area],
                'Percentage': [100, (area2 / area1) * 100, (overlap_area / area1) * 100]
            }
            df = pd.DataFrame(data)
            st.dataframe(df, use_container_width=True)

    with tab3:
        st.header("📁 Browse Existing Annotations")

        annotations_dir = Path("annotations/ground_truth")
        if annotations_dir.exists():
            all_annotations = list(annotations_dir.glob("*.json"))

            if all_annotations:
                st.info(f"Found {len(all_annotations)} annotation(s)")

                # Group by image
                annotations_by_image = {}
                for json_file in all_annotations:
                    with open(json_file, 'r') as f:
                        metadata = json.load(f)
                    image_name = metadata['image_name']
                    if image_name not in annotations_by_image:
                        annotations_by_image[image_name] = []
                    annotations_by_image[image_name].append(metadata)

                # Display selector
                selected_image = st.selectbox("Select Image", list(annotations_by_image.keys()))

                if selected_image:
                    st.subheader(f"Annotations for: {selected_image}")

                    annotations = annotations_by_image[selected_image]

                    # Display as table
                    df_data = []
                    for ann in annotations:
                        df_data.append({
                            'Annotator': ann['annotator'],
                            'Timestamp': ann['timestamp'],
                            'Mask File': ann['mask_file']
                        })

                    df = pd.DataFrame(df_data)
                    st.dataframe(df, use_container_width=True)

                    # Show masks
                    cols = st.columns(min(len(annotations), 3))
                    for idx, ann in enumerate(annotations):
                        mask_path = annotations_dir / ann['mask_file']
                        if mask_path.exists():
                            mask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)
                            with cols[idx % 3]:
                                st.image(mask, caption=f"{ann['annotator']}", use_container_width=True)

                    # Export options
                    st.divider()
                    st.subheader("📦 Export Annotations")

                    if st.button("Export All Annotations as ZIP"):
                        # Create ZIP file
                        zip_buffer = io.BytesIO()
                        with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zip_file:
                            for ann in annotations:
                                mask_file = annotations_dir / ann['mask_file']
                                json_file = annotations_dir / f"{Path(ann['mask_file']).stem}.json"

                                if mask_file.exists():
                                    zip_file.write(mask_file, mask_file.name)
                                if json_file.exists():
                                    zip_file.write(json_file, json_file.name)

                        st.download_button(
                            label="⬇️ Download ZIP",
                            data=zip_buffer.getvalue(),
                            file_name=f"annotations_{selected_image.replace('.', '_')}.zip",
                            mime="application/zip"
                        )
            else:
                st.warning("No annotations found. Create your first annotation in the 'Annotate' tab!")
        else:
            st.warning("Annotations directory not found. Create your first annotation to get started!")

    with tab4:
        st.markdown("""
## 📋 Annotation Guidelines for Pneumonia Consolidation

### What is Pneumonia Consolidation?
Pneumonia consolidation appears as white/opaque areas in the lung fields on chest X-rays.
It represents areas where the air spaces are filled with fluid, pus, or cellular debris.

### Key Radiologic Signs to Look For:

#### 1. **Air Bronchograms** ✅
- Dark, branching tubes visible inside white consolidation
- **100% diagnostic for consolidation**
- Include the entire region surrounding these patterns in your mask

#### 2. **Silhouette Sign**
- Heart or diaphragm border "disappears" into white area
- Indicates consolidation is touching that structure
- Include this boundary in your segmentation

#### 3. **Border Characteristics**
- Consolidations have "fuzzy" or poorly defined borders
- Often blend into surrounding lung tissue
- Use the enhanced preprocessing images to see borders better

### Annotation Best Practices:

1. **Use Polygon Tool First**
   - Trace rough outline of consolidation
   - Don't worry about perfect edges initially

2. **Refine with Brush Tool**
   - Clean up edges where consolidation fades
   - Use the eraser on over-included areas

3. **Avoid Common Mistakes**
   - ❌ Don't include ribs in the mask
   - ❌ Don't over-segment into normal lung
   - ❌ Don't miss subtle ground-glass opacities at edges
   - ✅ Do trace "through" ribs mentally
   - ✅ Do include the full air bronchogram region

4. **Different Types to Label**
   - **Solid Consolidation**: Dense white areas
   - **Ground Glass Opacity**: Subtle hazy areas
   - **Air Bronchograms**: The pattern itself confirms consolidation

### Quality Metrics Interpretation:

- **Dice > 0.85**: Excellent segmentation
- **Dice 0.70-0.85**: Good segmentation (acceptable for fuzzy borders)
- **Dice < 0.70**: Needs review (may have missed area or over-segmented)

### Tips for Difficult Cases:

1. **Behind Ribs**: Mentally interpolate the consolidation boundary through rib shadows
2. **Near Heart**: Use the silhouette sign - if the heart border disappears, include that area
3. **Multiple Patches**: Each separate consolidation should be in the same mask
4. **Pleural Effusion vs Consolidation**: Effusion is smooth with a meniscus; consolidation is irregular
""")

        st.info("""
        **Recommended Workflow:**
        1. Preprocess image with enhancement script
        2. Annotate in CVAT or similar tool
        3. Use this app to validate against expert annotations
        4. Iterate until Dice > 0.80
        """)


if __name__ == "__main__":
    main()
src/example_usage.md DELETED
@@ -1,389 +0,0 @@
# Example Usage - Pneumonia Consolidation Segmentation

This notebook demonstrates how to use the pneumonia consolidation segmentation tools.

## Setup

```python
import sys
import cv2
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt

# Add parent directory to path
sys.path.append('..')

# Import our modules
from preprocessing_consolidation import enhance_consolidation
from dice_calculator_app import (
    calculate_dice_coefficient,
    calculate_iou,
    calculate_precision_recall,
    create_overlay_visualization
)
```
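The metric helpers imported above are defined in `dice_calculator_app.py`, which is not reproduced in this excerpt. As a rough sketch only — assuming the standard set-overlap definitions over binarized masks, which may differ in detail from the project's actual implementations — they compute:

```python
import numpy as np

def dice_sketch(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binarized masks (assumed definition)."""
    a, b = mask_a > 0, mask_b > 0
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def iou_sketch(mask_a, mask_b):
    """IoU (Jaccard) = |A ∩ B| / |A ∪ B| (assumed definition)."""
    a, b = mask_a > 0, mask_b > 0
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Both convert any positive pixel to foreground first, so they work on 0/255 masks as saved by `cv2.imwrite` as well as on 0/1 arrays.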

## 1. Preprocessing Images

### Enhance a single image to see consolidation better

```python
# Path to your chest X-ray
input_image = "../data/Pacientes/7035909/7035909_20240326.jpg"
output_image = "../dice/enhanced_images/7035909_enhanced.jpg"

# Enhance the image
enhanced = enhance_consolidation(input_image, output_image)

# Visualize comparison
fig, axes = plt.subplots(1, 2, figsize=(12, 6))

original = cv2.imread(input_image, cv2.IMREAD_GRAYSCALE)
axes[0].imshow(original, cmap='gray')
axes[0].set_title('Original X-ray')
axes[0].axis('off')

axes[1].imshow(enhanced, cmap='gray')
axes[1].set_title('Enhanced (CLAHE + Sharpening)')
axes[1].axis('off')

plt.tight_layout()
plt.show()
```

### Batch process multiple images

```python
from preprocessing_consolidation import batch_enhance_consolidation

# Process all patient images
input_dir = "../data/Pacientes/"
output_dir = "../dice/enhanced_images/"

batch_enhance_consolidation(input_dir, output_dir, image_extension='.jpg')
```

## 2. Create Sample Masks for Testing

Let's create some sample masks to demonstrate the Dice calculation.

```python
def create_sample_masks(image_path):
    """Create sample ground truth and prediction masks for demo."""

    # Load image
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape

    # Create ground truth mask (simulated consolidation in lower right lung)
    ground_truth = np.zeros((h, w), dtype=np.uint8)
    center_y, center_x = int(h * 0.6), int(w * 0.7)

    # Create irregular shape for consolidation
    for i in range(h):
        for j in range(w):
            dist = np.sqrt((i - center_y)**2 + (j - center_x)**2)
            noise = np.random.randn() * 20
            if dist + noise < 80:
                ground_truth[i, j] = 255

    # Create predicted mask (similar but slightly different)
    prediction = np.zeros((h, w), dtype=np.uint8)
    center_y_pred = int(h * 0.58)  # Slightly shifted
    center_x_pred = int(w * 0.72)

    for i in range(h):
        for j in range(w):
            dist = np.sqrt((i - center_y_pred)**2 + (j - center_x_pred)**2)
            noise = np.random.randn() * 25
            if dist + noise < 75:  # Slightly smaller
                prediction[i, j] = 255

    return ground_truth, prediction

# Create sample masks
image_path = "../data/Pacientes/7035909/7035909_20240326.jpg"
gt_mask, pred_mask = create_sample_masks(image_path)

# Save masks
cv2.imwrite("../dice/annotations/ground_truth/sample_gt.png", gt_mask)
cv2.imwrite("../dice/annotations/predictions/sample_pred.png", pred_mask)

print("Sample masks created!")
```

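The per-pixel loops above are easy to read but run in interpreted Python, which gets slow on full-resolution X-rays. A vectorized version of the same idea — using a hypothetical helper name, `noisy_disk_mask`, that is not part of the project — computes the whole distance field at once with NumPy:

```python
import numpy as np

def noisy_disk_mask(h, w, center, radius, noise_sd, rng=None):
    """Binary 0/255 mask of an irregular disk: pixels where distance + noise < radius."""
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:h, 0:w]                       # per-pixel coordinate grids
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    noise = rng.standard_normal((h, w)) * noise_sd    # one noise draw per pixel
    return ((dist + noise) < radius).astype(np.uint8) * 255
```

With `noise_sd=0` this is a clean disk; increasing it roughens the border, mimicking the fuzzy edges of real consolidations.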
## 3. Calculate Dice Coefficient

```python
# Load masks
ground_truth = cv2.imread("../dice/annotations/ground_truth/sample_gt.png", cv2.IMREAD_GRAYSCALE)
prediction = cv2.imread("../dice/annotations/predictions/sample_pred.png", cv2.IMREAD_GRAYSCALE)

# Calculate metrics
dice = calculate_dice_coefficient(ground_truth, prediction)
iou = calculate_iou(ground_truth, prediction)
precision, recall = calculate_precision_recall(ground_truth, prediction)
f1 = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0

print("Segmentation Metrics:")
print(f"  Dice Coefficient: {dice:.4f}")
print(f"  IoU (Jaccard): {iou:.4f}")
print(f"  Precision: {precision:.4f}")
print(f"  Recall: {recall:.4f}")
print(f"  F1 Score: {f1:.4f}")

# Interpretation
if dice > 0.85:
    quality = "Excellent ✅"
elif dice > 0.70:
    quality = "Good (acceptable for fuzzy borders)"
else:
    quality = "Needs review"

print(f"\nQuality Assessment: {quality}")
```

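Note that the F1 printed above is not an independent metric: for pixel-wise masks, 2·precision·recall / (precision + recall) reduces algebraically to 2TP / (2TP + FP + FN), which is exactly the Dice coefficient, so matching Dice and F1 values are a built-in sanity check. A quick self-contained demonstration with plain NumPy (random stand-in masks, not the project's helpers):

```python
import numpy as np

rng = np.random.default_rng(42)
gt = rng.random((64, 64)) > 0.5    # random stand-in "ground truth" mask
pred = rng.random((64, 64)) > 0.5  # random stand-in "prediction" mask

# Pixel-level confusion counts
tp = np.logical_and(gt, pred).sum()
fp = np.logical_and(~gt, pred).sum()
fn = np.logical_and(gt, ~pred).sum()

dice = 2 * tp / (2 * tp + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"Dice = {dice:.6f}, F1 = {f1:.6f}")  # the two values coincide
```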
## 4. Visualize Results

```python
# Load original image
original = cv2.imread(image_path)

# Create overlay visualization
overlay = create_overlay_visualization(original, ground_truth, prediction, alpha=0.5)

# Display all views
fig, axes = plt.subplots(2, 2, figsize=(12, 12))

axes[0, 0].imshow(cv2.cvtColor(original, cv2.COLOR_BGR2RGB))
axes[0, 0].set_title('Original X-ray')
axes[0, 0].axis('off')

axes[0, 1].imshow(ground_truth, cmap='Greens')
axes[0, 1].set_title('Ground Truth Mask')
axes[0, 1].axis('off')

axes[1, 0].imshow(prediction, cmap='Reds')
axes[1, 0].set_title('Predicted Mask')
axes[1, 0].axis('off')

axes[1, 1].imshow(overlay)
axes[1, 1].set_title(f'Overlay (Dice: {dice:.3f})')
axes[1, 1].axis('off')

# Add legend
legend_elements = [
    plt.Line2D([0], [0], marker='o', color='w', markerfacecolor='g', markersize=10, label='Ground Truth'),
    plt.Line2D([0], [0], marker='o', color='w', markerfacecolor='r', markersize=10, label='Prediction'),
    plt.Line2D([0], [0], marker='o', color='w', markerfacecolor='y', markersize=10, label='Overlap')
]
axes[1, 1].legend(handles=legend_elements, loc='upper right')

plt.tight_layout()
plt.savefig('../dice/results/example_visualization.png', dpi=150, bbox_inches='tight')
plt.show()

print("Visualization saved to: dice/results/example_visualization.png")
```

## 5. Batch Calculate Dice Scores

Process multiple mask pairs and generate a report.

```python
import pandas as pd
from pathlib import Path

def batch_calculate_dice(gt_dir, pred_dir, results_file):
    """Calculate Dice for all mask pairs in directories."""

    gt_dir = Path(gt_dir)
    pred_dir = Path(pred_dir)

    results = []

    # Find all ground truth masks
    gt_masks = list(gt_dir.glob("*.png")) + list(gt_dir.glob("*.jpg"))

    for gt_path in gt_masks:
        # Find corresponding prediction
        pred_path = pred_dir / gt_path.name

        if not pred_path.exists():
            print(f"Warning: No prediction found for {gt_path.name}")
            continue

        # Load masks
        gt = cv2.imread(str(gt_path), cv2.IMREAD_GRAYSCALE)
        pred = cv2.imread(str(pred_path), cv2.IMREAD_GRAYSCALE)

        # Calculate metrics
        dice = calculate_dice_coefficient(gt, pred)
        iou = calculate_iou(gt, pred)
        precision, recall = calculate_precision_recall(gt, pred)
        f1 = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0

        results.append({
            'Image': gt_path.name,
            'Dice': dice,
            'IoU': iou,
            'Precision': precision,
            'Recall': recall,
            'F1': f1
        })

        print(f"Processed: {gt_path.name} - Dice: {dice:.4f}")

    # Create DataFrame
    df = pd.DataFrame(results)

    # Calculate summary statistics
    summary = {
        'Metric': ['Mean', 'Std', 'Min', 'Max', 'Median'],
        'Dice': [
            df['Dice'].mean(),
            df['Dice'].std(),
            df['Dice'].min(),
            df['Dice'].max(),
            df['Dice'].median()
        ]
    }

    summary_df = pd.DataFrame(summary)

    # Save results
    with pd.ExcelWriter(results_file, engine='openpyxl') as writer:
        df.to_excel(writer, sheet_name='Individual Results', index=False)
        summary_df.to_excel(writer, sheet_name='Summary', index=False)

    print(f"\nResults saved to: {results_file}")
    print("\nSummary Statistics:")
    print(summary_df.to_string(index=False))

    return df, summary_df

# Run batch processing
gt_directory = "../dice/annotations/ground_truth/"
pred_directory = "../dice/annotations/predictions/"
results_excel = "../dice/results/dice_scores_report.xlsx"

df_results, df_summary = batch_calculate_dice(gt_directory, pred_directory, results_excel)
```

## 6. Working with Real Patient Data

Example of processing actual patient X-rays from your dataset.

```python
# Get list of patient directories
patients_dir = Path("../data/Pacientes/")
patient_folders = [d for d in patients_dir.iterdir() if d.is_dir() and d.name.isdigit()]

print(f"Found {len(patient_folders)} patient folders")

# Process first 5 patients as example
for patient_dir in patient_folders[:5]:
    patient_id = patient_dir.name
    print(f"\nProcessing Patient: {patient_id}")

    # Find X-ray image
    images = list(patient_dir.glob("*.jpg"))

    if images:
        xray_path = images[0]
        print(f"  X-ray: {xray_path.name}")

        # Enhance image
        output_path = f"../dice/enhanced_images/{patient_id}_enhanced.jpg"
        enhanced = enhance_consolidation(str(xray_path), output_path)

        print(f"  Enhanced image saved: {output_path}")

        # Here you would:
        # 1. Load or create annotations
        # 2. Calculate Dice if annotations exist
        # 3. Generate reports
    else:
        print(f"  No images found")
```

## 7. Quality Control Report

Generate a comprehensive quality control report.

```python
def generate_qc_report(results_df, output_path):
    """Generate quality control report with visualizations."""

    fig, axes = plt.subplots(2, 2, figsize=(14, 10))

    # 1. Dice score distribution
    axes[0, 0].hist(results_df['Dice'], bins=20, color='steelblue', edgecolor='black')
    axes[0, 0].axvline(0.7, color='orange', linestyle='--', label='Good threshold')
    axes[0, 0].axvline(0.85, color='green', linestyle='--', label='Excellent threshold')
    axes[0, 0].set_xlabel('Dice Coefficient')
    axes[0, 0].set_ylabel('Frequency')
    axes[0, 0].set_title('Distribution of Dice Scores')
    axes[0, 0].legend()

    # 2. Dice vs IoU scatter
    axes[0, 1].scatter(results_df['Dice'], results_df['IoU'], alpha=0.6)
    axes[0, 1].plot([0, 1], [0, 1], 'r--', label='Perfect correlation')
    axes[0, 1].set_xlabel('Dice Coefficient')
    axes[0, 1].set_ylabel('IoU')
    axes[0, 1].set_title('Dice vs IoU Correlation')
    axes[0, 1].legend()

    # 3. Precision-Recall scatter
    axes[1, 0].scatter(results_df['Recall'], results_df['Precision'],
                       c=results_df['Dice'], cmap='viridis', alpha=0.6)
    axes[1, 0].set_xlabel('Recall')
    axes[1, 0].set_ylabel('Precision')
    axes[1, 0].set_title('Precision vs Recall (colored by Dice)')
    plt.colorbar(axes[1, 0].collections[0], ax=axes[1, 0], label='Dice')

    # 4. Quality categories
    categories = pd.cut(results_df['Dice'],
                        bins=[0, 0.7, 0.85, 1.0],
                        labels=['Needs Review', 'Good', 'Excellent'])
    category_counts = categories.value_counts()

    axes[1, 1].bar(range(len(category_counts)), category_counts.values,
                   color=['red', 'orange', 'green'])
    axes[1, 1].set_xticks(range(len(category_counts)))
    axes[1, 1].set_xticklabels(category_counts.index, rotation=45)
    axes[1, 1].set_ylabel('Count')
    axes[1, 1].set_title('Segmentation Quality Distribution')

    plt.tight_layout()
    plt.savefig(output_path, dpi=150, bbox_inches='tight')
    plt.show()

    print(f"Quality control report saved: {output_path}")

    # Print summary
    print("\n=== Quality Control Summary ===")
    print(f"Total cases: {len(results_df)}")
    print("\nQuality breakdown:")
    for cat, count in category_counts.items():
        pct = (count / len(results_df)) * 100
        print(f"  {cat}: {count} ({pct:.1f}%)")

# Generate report if we have results
if len(df_results) > 0:
    generate_qc_report(df_results, '../dice/results/quality_control_report.png')
```

## Next Steps

1. **Annotate Real Data**: Use CVAT or Label Studio to create ground truth masks
2. **Train ML Model**: Use annotated data to train segmentation model
|
| 382 |
-
3. **Validate**: Use this toolkit to validate model predictions
|
| 383 |
-
4. **Iterate**: Refine annotations and model based on Dice scores
|
| 384 |
-
|
| 385 |
-
## Resources
|
| 386 |
-
|
| 387 |
-
- [CVAT Installation](https://opencv.github.io/cvat/docs/)
|
| 388 |
-
- [SAM Download](https://github.com/facebookresearch/segment-anything)
|
| 389 |
-
- [Medical Image Segmentation Best Practices](https://arxiv.org/abs/1904.03882)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
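The Dice, IoU, precision, and recall columns referenced above reduce to a few NumPy set operations on binary masks. A minimal self-contained sketch — the function name `dice_and_iou` is illustrative, not part of the toolkit:

```python
import numpy as np

def dice_and_iou(gt, pred):
    """Dice coefficient and IoU for two binary masks of equal shape."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    total = gt.sum() + pred.sum()
    dice = 2.0 * intersection / total if total else 1.0
    iou = intersection / union if union else 1.0
    return dice, iou

# Toy 4x4 masks: ground truth covers 4 pixels, prediction covers 8,
# and they overlap on 4 pixels.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[:2, :2] = 1
pred = np.zeros((4, 4), dtype=np.uint8)
pred[:2, :] = 1
dice, iou = dice_and_iou(gt, pred)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the "Dice vs IoU Correlation" scatter in the QC report stays above the diagonal.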
src/preprocessing_consolidation.py
DELETED
@@ -1,156 +0,0 @@
"""
Preprocessing script for pneumonia consolidation enhancement.
This script enhances chest X-ray images to better visualize consolidations,
air bronchograms, and subtle patterns necessary for accurate segmentation.
"""

import cv2
import numpy as np
from pathlib import Path
import argparse


def enhance_consolidation(img_path, output_path=None):
    """
    Enhance chest X-ray image to better visualize pneumonia consolidation.

    Args:
        img_path: Path to the input image (JPG/PNG)
        output_path: Path to save the enhanced image (optional)

    Returns:
        Enhanced image as numpy array
    """
    # Read image in grayscale
    img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)

    if img is None:
        raise ValueError(f"Could not read image from {img_path}")

    # 1. CLAHE (Contrast Limited Adaptive Histogram Equalization)
    # Enhances local contrast to see subtle consolidations
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)

    # 2. Sharpening filter to reveal air bronchograms
    # Air bronchograms are dark branch-like patterns inside consolidation
    kernel = np.array([[-1, -1, -1],
                       [-1, 9, -1],
                       [-1, -1, -1]])
    sharpened = cv2.filter2D(enhanced, -1, kernel)

    # 3. Optional: Edge enhancement to see consolidation boundaries
    # Using Laplacian for edge detection
    laplacian = cv2.Laplacian(sharpened, cv2.CV_64F)
    laplacian = np.uint8(np.absolute(laplacian))

    # Combine sharpened image with edge information
    alpha = 0.8  # Weight for sharpened image
    beta = 0.2   # Weight for edge information
    result = cv2.addWeighted(sharpened, alpha, laplacian, beta, 0)

    # Save if output path provided
    if output_path:
        cv2.imwrite(str(output_path), result)
        print(f"Enhanced image saved to: {output_path}")

    return result


def batch_enhance_consolidation(input_dir, output_dir, image_extension='.jpg'):
    """
    Process multiple images in a directory.

    Args:
        input_dir: Directory containing input images
        output_dir: Directory to save enhanced images
        image_extension: File extension to process (default: .jpg)
    """
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    # Find all images with specified extension
    images = list(input_path.rglob(f"*{image_extension}"))

    print(f"Found {len(images)} images to process")

    for img_path in images:
        try:
            # Create output path maintaining relative structure
            rel_path = img_path.relative_to(input_path)
            out_path = output_path / rel_path
            out_path.parent.mkdir(parents=True, exist_ok=True)

            # Process image
            enhance_consolidation(img_path, out_path)
            print(f"Processed: {img_path.name}")

        except Exception as e:
            print(f"Error processing {img_path}: {e}")

    print(f"\nProcessing complete! Enhanced images saved to: {output_path}")


def create_visualization_comparison(original_path, enhanced_path, output_path):
    """
    Create a side-by-side comparison of original and enhanced images.

    Args:
        original_path: Path to original image
        enhanced_path: Path to enhanced image
        output_path: Path to save comparison image
    """
    original = cv2.imread(str(original_path), cv2.IMREAD_GRAYSCALE)
    enhanced = cv2.imread(str(enhanced_path), cv2.IMREAD_GRAYSCALE)

    # Resize if needed to match dimensions
    if original.shape != enhanced.shape:
        enhanced = cv2.resize(enhanced, (original.shape[1], original.shape[0]))

    # Create side-by-side comparison
    comparison = np.hstack([original, enhanced])

    # Add labels
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(comparison, 'Original', (10, 30), font, 1, (255, 255, 255), 2)
    cv2.putText(comparison, 'Enhanced', (original.shape[1] + 10, 30), font, 1, (255, 255, 255), 2)

    cv2.imwrite(str(output_path), comparison)
    print(f"Comparison saved to: {output_path}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Enhance chest X-ray images for pneumonia consolidation segmentation"
    )
    parser.add_argument(
        '--input',
        type=str,
        required=True,
        help='Input image file or directory'
    )
    parser.add_argument(
        '--output',
        type=str,
        required=True,
        help='Output file or directory'
    )
    parser.add_argument(
        '--batch',
        action='store_true',
        help='Process entire directory (batch mode)'
    )
    parser.add_argument(
        '--extension',
        type=str,
        default='.jpg',
        help='Image file extension for batch processing (default: .jpg)'
    )

    args = parser.parse_args()

    if args.batch:
        batch_enhance_consolidation(args.input, args.output, args.extension)
    else:
        enhance_consolidation(args.input, args.output)
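The 3×3 sharpening kernel in `enhance_consolidation` has weights that sum to 1, so uniform regions pass through unchanged while local intensity differences are amplified — the same behaviour `cv2.filter2D` applies at every pixel. A quick NumPy-only check of that property (no OpenCV required; the kernel is symmetric, so correlation and convolution coincide):

```python
import numpy as np

kernel = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]])

# Weights sum to 1, so a flat 3x3 patch is unchanged at its centre pixel.
flat = np.full((3, 3), 100)
print(int((flat * kernel).sum()))  # 100

# A brighter centre pixel is pushed further from its neighbours:
# 120 * 9 - 8 * 100 = 280, i.e. the local contrast is amplified.
bright = flat.copy()
bright[1, 1] = 120
print(int((bright * kernel).sum()))  # 280
```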
src/quickstart.sh
DELETED
@@ -1,88 +0,0 @@
#!/bin/bash

# Quick Start Script for Pneumonia Consolidation Segmentation
# This script sets up the environment and runs initial tests

echo "🫁 Pneumonia Consolidation Segmentation - Quick Start"
echo "======================================================"
echo ""

# Check if we're in the right directory
if [ ! -f "dice_calculator_app.py" ]; then
    echo "❌ Error: Please run this script from the dice/ directory"
    exit 1
fi

echo "✅ Directory check passed"
echo ""

# Step 1: Install dependencies
echo "📦 Step 1: Installing dependencies..."
echo "Command: pip install -r requirements.txt"
echo ""
read -p "Press Enter to continue or Ctrl+C to cancel..."
pip install -r requirements.txt

if [ $? -eq 0 ]; then
    echo "✅ Dependencies installed successfully"
else
    echo "❌ Error installing dependencies"
    exit 1
fi
echo ""

# Step 2: Test Streamlit installation
echo "🧪 Step 2: Testing Streamlit installation..."
streamlit --version
if [ $? -eq 0 ]; then
    echo "✅ Streamlit is working"
else
    echo "❌ Streamlit test failed"
    exit 1
fi
echo ""

# Step 3: Create sample test images (optional)
echo "📸 Step 3: Would you like to preprocess sample images now?"
echo "This will enhance chest X-rays for better consolidation visibility."
echo ""
read -p "Enter 'y' to preprocess images, or 'n' to skip: " preprocess

if [ "$preprocess" = "y" ]; then
    echo ""
    echo "Enter the path to your input directory (e.g., ../data/Pacientes/):"
    read input_dir

    if [ -d "$input_dir" ]; then
        echo "Processing images from: $input_dir"
        echo "Output will be saved to: ./enhanced_images/"
        python preprocessing_consolidation.py \
            --input "$input_dir" \
            --output ./enhanced_images/ \
            --batch \
            --extension .jpg

        echo "✅ Preprocessing complete"
    else
        echo "❌ Directory not found: $input_dir"
    fi
fi
echo ""

# Step 4: Launch Streamlit app
echo "🚀 Step 4: Launch Dice Calculator App"
echo ""
echo "The Streamlit app will open in your browser."
echo "You can then upload images and masks to calculate Dice scores."
echo ""
read -p "Press Enter to launch the app, or Ctrl+C to exit..."

streamlit run dice_calculator_app.py

echo ""
echo "✅ Setup complete!"
echo ""
echo "Next steps:"
echo "1. Annotate your images using CVAT or Label Studio"
echo "2. Use the Dice Calculator to validate annotations"
echo "3. See TODO.md for complete project roadmap"
src/requirements.txt
DELETED
@@ -1,31 +0,0 @@
# Requirements for Pneumonia Consolidation Annotation Tool
# For cloud deployment (Hugging Face Spaces / Streamlit Cloud)

# Core dependencies
numpy>=1.24.0
opencv-python-headless>=4.8.0  # Use headless for cloud deployment!
Pillow>=10.0.0
pandas>=2.0.0
matplotlib>=3.7.0

# Streamlit app
streamlit==1.29.0
streamlit-drawable-canvas==0.9.3
altair<5

# Optional: SAM (Segment Anything Model)
# Uncomment to use SAM integration
# segment-anything
# torch>=1.7.0
# torchvision>=0.8.0

# Optional: For Hausdorff distance calculation
scipy>=1.7.0

# Optional: For advanced image processing
scikit-image>=0.18.0

# Development
jupyter>=1.0.0
ipykernel>=6.0.0
src/sam_integration.py
DELETED
@@ -1,395 +0,0 @@
"""
SAM (Segment Anything Model) Integration for Pneumonia Consolidation
This script uses Meta's Segment Anything Model to generate initial segmentation masks
that can be refined manually.
"""

import numpy as np
import cv2
from pathlib import Path
import matplotlib.pyplot as plt
import argparse


def setup_sam():
    """
    Setup SAM model. Install with:
        pip install segment-anything

    Download checkpoint from:
    https://github.com/facebookresearch/segment-anything#model-checkpoints
    """
    try:
        from segment_anything import sam_model_registry, SamPredictor
        return sam_model_registry, SamPredictor
    except ImportError:
        print("Error: segment-anything not installed.")
        print("Install with: pip install segment-anything")
        print("Then download a model checkpoint from:")
        print("https://github.com/facebookresearch/segment-anything#model-checkpoints")
        return None, None


def initialize_sam_predictor(checkpoint_path, model_type="vit_h"):
    """
    Initialize SAM predictor.

    Args:
        checkpoint_path: Path to SAM checkpoint (.pth file)
        model_type: Model type ('vit_h', 'vit_l', or 'vit_b')

    Returns:
        SAM predictor object
    """
    sam_model_registry, SamPredictor = setup_sam()
    if sam_model_registry is None:
        return None

    sam = sam_model_registry[model_type](checkpoint=checkpoint_path)
    predictor = SamPredictor(sam)

    return predictor


def predict_consolidation_with_points(image_path, predictor, point_coords, point_labels):
    """
    Generate segmentation mask using point prompts.

    Args:
        image_path: Path to chest X-ray image
        predictor: SAM predictor object
        point_coords: Array of [x, y] coordinates for prompts
        point_labels: Array of labels (1 for positive/include, 0 for negative/exclude)

    Returns:
        mask: Binary segmentation mask
        scores: Confidence scores for each mask
    """
    # Load image
    image = cv2.imread(str(image_path))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Set image in predictor
    predictor.set_image(image)

    # Convert points to numpy array
    point_coords = np.array(point_coords)
    point_labels = np.array(point_labels)

    # Predict
    masks, scores, logits = predictor.predict(
        point_coords=point_coords,
        point_labels=point_labels,
        multimask_output=True
    )

    return masks, scores, image


def predict_consolidation_with_box(image_path, predictor, box_coords):
    """
    Generate segmentation mask using bounding box prompt.

    Args:
        image_path: Path to chest X-ray image
        predictor: SAM predictor object
        box_coords: [x1, y1, x2, y2] bounding box coordinates

    Returns:
        mask: Binary segmentation mask
    """
    # Load image
    image = cv2.imread(str(image_path))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Set image in predictor
    predictor.set_image(image)

    # Convert box to numpy array
    box = np.array(box_coords)

    # Predict
    masks, scores, logits = predictor.predict(
        box=box,
        multimask_output=True
    )

    return masks, scores, image


def automatic_consolidation_detection(image_path, predictor, grid_size=5):
    """
    Automatically detect potential consolidation regions using grid-based sampling.

    Args:
        image_path: Path to chest X-ray image
        predictor: SAM predictor object
        grid_size: Number of points in grid (grid_size x grid_size)

    Returns:
        Combined mask from multiple detections
    """
    # Load image
    image = cv2.imread(str(image_path))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    h, w = image.shape[:2]

    # Set image in predictor
    predictor.set_image(image)

    # Create grid of points in lung region (avoid edges)
    margin_h = int(h * 0.1)
    margin_w = int(w * 0.2)

    x_coords = np.linspace(margin_w, w - margin_w, grid_size)
    y_coords = np.linspace(margin_h, h - margin_h, grid_size)

    all_masks = []

    for x in x_coords:
        for y in y_coords:
            point = np.array([[x, y]])
            label = np.array([1])

            try:
                masks, scores, _ = predictor.predict(
                    point_coords=point,
                    point_labels=label,
                    multimask_output=False
                )

                # Only keep masks with high confidence
                if scores[0] > 0.8:
                    all_masks.append(masks[0])
            except Exception:
                continue

    if not all_masks:
        return None, image

    # Combine masks
    combined_mask = np.any(all_masks, axis=0).astype(np.uint8)

    return combined_mask, image


def visualize_sam_results(image, masks, scores, point_coords=None, save_path=None):
    """
    Visualize SAM segmentation results.

    Args:
        image: Original image
        masks: Array of masks
        scores: Confidence scores
        point_coords: Optional point prompts to display
        save_path: Optional path to save visualization
    """
    fig, axes = plt.subplots(1, len(masks) + 1, figsize=(15, 5))

    # Show original image
    axes[0].imshow(image)
    axes[0].set_title('Original')
    axes[0].axis('off')

    if point_coords is not None:
        axes[0].scatter(point_coords[:, 0], point_coords[:, 1],
                        c='red', s=100, marker='*')

    # Show each mask
    for idx, (mask, score) in enumerate(zip(masks, scores)):
        axes[idx + 1].imshow(image)
        axes[idx + 1].imshow(mask, alpha=0.5, cmap='jet')
        axes[idx + 1].set_title(f'Mask {idx + 1}\nScore: {score:.3f}')
        axes[idx + 1].axis('off')

    plt.tight_layout()

    if save_path:
        plt.savefig(save_path, dpi=150, bbox_inches='tight')
        print(f"Visualization saved to: {save_path}")

    plt.show()


def save_mask(mask, output_path):
    """Save binary mask as image."""
    mask_uint8 = (mask * 255).astype(np.uint8)
    cv2.imwrite(str(output_path), mask_uint8)
    print(f"Mask saved to: {output_path}")


def interactive_sam_segmentation(image_path, checkpoint_path):
    """
    Interactive segmentation where user clicks points to guide SAM.
    This is a simple CLI version - for GUI, integrate with Streamlit.
    """
    print("Initializing SAM...")
    predictor = initialize_sam_predictor(checkpoint_path)

    if predictor is None:
        return

    # Load and display image
    image = cv2.imread(str(image_path))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    print("\nInstructions:")
    print("1. The image will be displayed")
    print("2. Click on consolidation areas (left click)")
    print("3. Click on background areas to exclude (right click)")
    print("4. Press 'q' when done")
    print("5. Choose best mask from results")

    point_coords = []
    point_labels = []

    def mouse_callback(event, x, y, flags, param):
        if event == cv2.EVENT_LBUTTONDOWN:
            point_coords.append([x, y])
            point_labels.append(1)
            print(f"Added positive point at ({x}, {y})")
        elif event == cv2.EVENT_RBUTTONDOWN:
            point_coords.append([x, y])
            point_labels.append(0)
            print(f"Added negative point at ({x}, {y})")

    # Convert for OpenCV display
    display_img = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    cv2.namedWindow('Image')
    cv2.setMouseCallback('Image', mouse_callback)

    while True:
        display = display_img.copy()

        # Draw points
        for coord, label in zip(point_coords, point_labels):
            color = (0, 255, 0) if label == 1 else (0, 0, 255)
            cv2.circle(display, tuple(coord), 5, color, -1)

        cv2.imshow('Image', display)

        key = cv2.waitKey(1) & 0xFF
        if key == ord('q'):
            break

    cv2.destroyAllWindows()

    if point_coords:
        print("\nGenerating masks...")
        masks, scores, _ = predict_consolidation_with_points(
            image_path, predictor, point_coords, point_labels
        )

        # Visualize results
        visualize_sam_results(image, masks, scores, np.array(point_coords))

        # Save best mask
        best_idx = np.argmax(scores)
        output_path = Path(image_path).parent / f"{Path(image_path).stem}_sam_mask.png"
        save_mask(masks[best_idx], output_path)

        return masks[best_idx]

    return None


def batch_process_with_sam(input_dir, output_dir, checkpoint_path, mode='auto'):
    """
    Batch process images with SAM.

    Args:
        input_dir: Directory with chest X-ray images
        output_dir: Directory to save masks
        checkpoint_path: Path to SAM checkpoint
        mode: 'auto' for automatic or 'center' for single center point
    """
    input_path = Path(input_dir)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    print("Initializing SAM...")
    predictor = initialize_sam_predictor(checkpoint_path)

    if predictor is None:
        return

    images = list(input_path.glob("*.jpg")) + list(input_path.glob("*.png"))
    print(f"Found {len(images)} images to process")

    for img_path in images:
        print(f"\nProcessing: {img_path.name}")

        try:
            if mode == 'auto':
                mask, image = automatic_consolidation_detection(img_path, predictor)
            else:
                # Use center point as prompt
                image = cv2.imread(str(img_path))
                h, w = image.shape[:2]
                center_point = [[w // 2, h // 2]]
                masks, scores, image = predict_consolidation_with_points(
                    img_path, predictor, center_point, [1]
                )
                mask = masks[np.argmax(scores)]

            if mask is not None:
                output_file = output_path / f"{img_path.stem}_mask.png"
                save_mask(mask, output_file)
            else:
                print(f"No mask generated for {img_path.name}")

        except Exception as e:
            print(f"Error processing {img_path.name}: {e}")

    print(f"\nBatch processing complete! Masks saved to: {output_path}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate pneumonia consolidation masks using SAM"
    )
    parser.add_argument(
        '--checkpoint',
        type=str,
        required=True,
        help='Path to SAM checkpoint file (.pth)'
    )
    parser.add_argument(
        '--image',
        type=str,
        help='Path to single image (for interactive mode)'
    )
    parser.add_argument(
        '--input_dir',
        type=str,
        help='Input directory for batch processing'
    )
    parser.add_argument(
        '--output_dir',
        type=str,
        help='Output directory for batch processing'
    )
    parser.add_argument(
        '--mode',
        type=str,
        default='interactive',
        choices=['interactive', 'auto', 'center'],
        help='Processing mode'
    )
    parser.add_argument(
        '--model_type',
        type=str,
        default='vit_h',
        choices=['vit_h', 'vit_l', 'vit_b'],
        help='SAM model type'
    )

    args = parser.parse_args()

    if args.mode == 'interactive' and args.image:
        interactive_sam_segmentation(args.image, args.checkpoint)
    elif args.input_dir and args.output_dir:
        batch_process_with_sam(args.input_dir, args.output_dir,
                               args.checkpoint, args.mode)
    else:
        parser.print_help()
src/streamlit_app.py
DELETED

@@ -1,1207 +0,0 @@
```python
"""
Ground Truth Annotation Tool for Radiologists - Pneumonia Consolidation

Features:
1. Browse patient X-ray images from Pacientes folder automatically
2. Annotate consolidation directly in the browser (no external tools)
3. Multiple consolidation entries for multilobar pneumonia
4. Save mask + metadata JSON in the same patient folder
5. Progress tracking, inter-rater comparison, zoom, dark theme
"""

import sys
import streamlit as st
import cv2
import numpy as np
from PIL import Image
import pandas as pd
from pathlib import Path
import json
import io
from datetime import datetime
from streamlit_drawable_canvas import st_canvas
import hashlib

# ============================================================================
# AUTHENTICATION
# ============================================================================

# User credentials (username: hashed_password)
# To add users: hash their password with hashlib.sha256("password".encode()).hexdigest()
USERS = {
    "admin": "6ef53576c614c35328ec86075d78cde376aa6b87504b39798d2ce4962a5a621a",
    "daniel": "dcabf0e5cfa74308f61ed0bcb7bd5565cfe6d890bb3e2ff7528d588da6f9c623",
}


def hash_password(password: str) -> str:
    """Hash a password using SHA-256."""
    return hashlib.sha256(password.encode()).hexdigest()


def check_credentials(username: str, password: str) -> bool:
    """Verify username and password."""
    if username in USERS:
        return USERS[username] == hash_password(password)
    return False


def login_form():
    """Display login form and handle authentication."""
    st.set_page_config(
        page_title="Login - Annotation Tool",
        page_icon="🔐",
        layout="centered",
    )

    st.title("🔐 Login")
    st.markdown("### Pneumonia Consolidation Annotation Tool")
    st.markdown("---")

    with st.form("login_form"):
        username = st.text_input("👤 Username", placeholder="Enter your username")
        password = st.text_input("🔑 Password", type="password", placeholder="Enter your password")
        submit = st.form_submit_button("🔐 Login", use_container_width=True)

        if submit:
            if check_credentials(username, password):
                st.session_state.authenticated = True
                st.session_state.username = username
                st.rerun()
            else:
                st.error("❌ Invalid username or password")

    st.markdown("---")
    st.caption("Contact administrator for access credentials.")


def logout_button():
    """Display logout button in sidebar."""
    st.sidebar.markdown("---")
    st.sidebar.markdown(f"👤 Logged in as: **{st.session_state.username}**")
    if st.sidebar.button("🚪 Logout", use_container_width=True):
        st.session_state.authenticated = False
        st.session_state.username = None
        st.rerun()
```
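Per the comment in the deleted code, adding an annotator account means storing a SHA-256 hex digest in `USERS`. The scheme can be reproduced standalone (the example password below is illustrative, not one of the app's real credentials):

```python
import hashlib

def hash_password(password: str) -> str:
    """Same hashing scheme as the deleted app's USERS table."""
    return hashlib.sha256(password.encode()).hexdigest()

# Digest to paste into USERS for a new annotator account:
print(hash_password("password"))
# → 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
```

Note that an unsalted fast hash like this is only a light gate for an internal tool; it is not a substitute for proper password storage.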
```python
# ============================================================================
# CONSOLIDATION COLOR PALETTE (one distinct color per site)
# ============================================================================

CONSOLIDATION_COLORS = [
    ("#00FF00", "Lime"),
    ("#FF4444", "Red"),
    ("#4488FF", "Blue"),
    ("#FFD700", "Gold"),
    ("#FF69B4", "Pink"),
    ("#00FFFF", "Cyan"),
    ("#FF8C00", "Orange"),
    ("#9370DB", "Purple"),
    ("#32CD32", "Green2"),
    ("#FF1493", "DeepPink"),
]


def get_color_for_index(idx: int) -> tuple:
    """Return (hex_color, label) for a given consolidation index."""
    return CONSOLIDATION_COLORS[idx % len(CONSOLIDATION_COLORS)]


# ============================================================================
# METRIC FUNCTIONS
# ============================================================================

def calculate_dice_coefficient(mask1, mask2):
    m1 = (mask1 > 0).astype(np.uint8)
    m2 = (mask2 > 0).astype(np.uint8)
    intersection = np.sum(m1 * m2)
    total = np.sum(m1) + np.sum(m2)
    if total == 0:
        return 1.0
    return (2.0 * intersection) / total


def calculate_iou(mask1, mask2):
    m1 = (mask1 > 0).astype(np.uint8)
    m2 = (mask2 > 0).astype(np.uint8)
    intersection = np.sum(m1 * m2)
    union = np.sum(m1) + np.sum(m2) - intersection
    if union == 0:
        return 1.0
    return intersection / union


def calculate_precision_recall(ground_truth, prediction):
    gt = (ground_truth > 0).astype(np.uint8)
    pred = (prediction > 0).astype(np.uint8)
    tp = np.sum(gt * pred)
    fp = np.sum((1 - gt) * pred)
    fn = np.sum(gt * (1 - pred))
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```
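The overlap metrics above are the standard Dice/IoU definitions; a toy pair of 2x2 masks makes the arithmetic concrete (the functions are restated compactly here so the snippet runs standalone):

```python
import numpy as np

def dice(mask1, mask2):
    # Dice = 2|A ∩ B| / (|A| + |B|), with the empty-vs-empty case defined as 1.
    m1, m2 = (mask1 > 0).astype(np.uint8), (mask2 > 0).astype(np.uint8)
    total = m1.sum() + m2.sum()
    return 1.0 if total == 0 else 2.0 * (m1 * m2).sum() / total

def iou(mask1, mask2):
    # IoU = |A ∩ B| / |A ∪ B|, same empty-case convention.
    m1, m2 = (mask1 > 0).astype(np.uint8), (mask2 > 0).astype(np.uint8)
    inter = (m1 * m2).sum()
    union = m1.sum() + m2.sum() - inter
    return 1.0 if union == 0 else inter / union

gt   = np.array([[1, 1], [0, 0]])  # annotator A: two pixels marked
pred = np.array([[1, 0], [0, 0]])  # annotator B: one overlapping pixel

print(dice(gt, pred))  # 2*1 / (2+1) ≈ 0.667
print(iou(gt, pred))   # 1 / 2 = 0.5
```

Dice is always at least as large as IoU on the same pair, which is why inter-rater agreement often looks more forgiving when reported as Dice.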
```python
# ============================================================================
# IMAGE & DATA HELPERS
# ============================================================================

def load_image_from_path(image_path):
    """Load image as RGB numpy array (original, no CLAHE).

    Uses PIL as primary method for better cloud compatibility,
    with cv2 as fallback for edge cases.
    """
    image_path = Path(image_path)

    # Method 1: Use PIL (more reliable for cloud/uploaded files)
    try:
        with Image.open(image_path) as pil_img:
            # Convert to RGB if necessary (handles grayscale, RGBA, etc.)
            if pil_img.mode != 'RGB':
                pil_img = pil_img.convert('RGB')
            img = np.array(pil_img)
            return img
    except Exception as e:
        pass  # Fall through to cv2 method

    # Method 2: Fallback to OpenCV
    try:
        img = cv2.imread(str(image_path))
        if img is not None:
            return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    except Exception:
        pass

    # Method 3: Read bytes directly (for cloud file systems)
    try:
        with open(image_path, 'rb') as f:
            file_bytes = f.read()
        nparr = np.frombuffer(file_bytes, np.uint8)
        img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
        if img is not None:
            return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    except Exception:
        pass

    return None


def scale_image_preserve_ratio(img, target_width=900):
    """Scale image so width = target_width, preserving aspect ratio."""
    h, w = img.shape[:2]
    ratio = target_width / w
    new_h = int(h * ratio)
    new_w = target_width
    scaled = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_AREA)
    return scaled, ratio


def get_all_patient_images(base_path):
    """Scan patient folders and collect all JPG/PNG images with annotation status."""
    base = Path(base_path)
    patient_images = []
    if not base.exists():
        return patient_images

    # Get all subdirectories (including 'uploads' folder for cloud mode)
    folders = [base] + [f for f in base.iterdir() if f.is_dir()]

    for folder in folders:
        img_files = sorted(
            list(folder.glob("*.jpg")) +
            list(folder.glob("*.JPG")) +
            list(folder.glob("*.jpeg")) +
            list(folder.glob("*.png"))
        )
        for img in img_files:
            # Skip mask files
            if "_mask" in img.name:
                continue
            mask_path = img.parent / f"{img.stem}_mask.png"
            meta_path = img.parent / f"{img.stem}_annotation.json"
            # Use folder name as patient_id, or 'uploaded' for root
            patient_id = folder.name if folder != base else "uploaded"
            patient_images.append({
                "patient_id": patient_id,
                "image_path": img,
                "image_name": img.name,
                "mask_path": mask_path,
                "metadata_path": meta_path,
                "annotated": mask_path.exists(),
            })
    return patient_images


def get_annotation_progress(patient_images):
    total = len(patient_images)
    done = sum(1 for img in patient_images if img["annotated"])
    pct = (done / total * 100) if total > 0 else 0
    return done, total, pct


def save_annotation_in_patient_folder(
    image_path, mask_array, annotator_name, metadata_dict, original_shape
):
    """Save mask (rescaled to original size) + metadata JSON in patient folder."""
    image_path = Path(image_path)
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

    # Resize mask back to original image dimensions
    orig_h, orig_w = original_shape[:2]
    mask_resized = cv2.resize(
        mask_array, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST
    )

    mask_filename = f"{image_path.stem}_mask.png"
    mask_path = image_path.parent / mask_filename
    cv2.imwrite(str(mask_path), mask_resized)

    metadata = {
        "image_name": image_path.name,
        "patient_id": image_path.parent.name,
        "annotator": annotator_name,
        "timestamp": timestamp,
        "mask_file": mask_filename,
        **metadata_dict,
    }
    meta_path = image_path.parent / f"{image_path.stem}_annotation.json"
    with open(meta_path, "w") as f:
        json.dump(metadata, f, indent=2)

    return mask_path, meta_path
```
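The sidecar naming convention shared by `get_all_patient_images` and `save_annotation_in_patient_folder` can be demonstrated with one of the images this commit removes (`7035909_20240326.jpg` from `src/uploaded_images/`); only the path arithmetic runs, so no file needs to exist:

```python
from pathlib import Path

img = Path("uploaded_images/7035909_20240326.jpg")

# Derived sidecar files, exactly as in the deleted helpers:
mask_path = img.parent / f"{img.stem}_mask.png"
meta_path = img.parent / f"{img.stem}_annotation.json"

print(mask_path.name)  # 7035909_20240326_mask.png
print(meta_path.name)  # 7035909_20240326_annotation.json
```

Because masks live next to their images, the `"_mask" in img.name` guard in the scanner is what keeps saved masks from being listed as annotatable images themselves.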
```python
# ============================================================================
# MAIN APP
# ============================================================================

def main():
    # ── Authentication Check ──────────────────────────────────────────
    if "authenticated" not in st.session_state:
        st.session_state.authenticated = False

    if not st.session_state.authenticated:
        login_form()
        return

    # ── Authenticated: Show main app ──────────────────────────────────
    st.set_page_config(
        page_title="Ground Truth Annotation Tool",
        page_icon="🫁",
        layout="wide",
    )

    st.title("🫁 Pneumonia Consolidation – Ground Truth Annotation")

    # Show logout button
    logout_button()

    # ── Sidebar: annotator ────────────────────────────────────────────
    st.sidebar.header("👤 Annotator")
    annotator_name = st.sidebar.text_input("Your Name / ID", value="Radiologist1")

    # ── Sidebar: patients path ────────────────────────────────────────
    st.sidebar.header("📁 Patient Data")

    # Image upload for cloud deployment
    st.sidebar.subheader("📤 Upload X-rays")
    uploaded_files = st.sidebar.file_uploader(
        "Upload chest X-ray images",
        type=["jpg", "jpeg", "png"],
        accept_multiple_files=True,
        help="Upload JPG/PNG chest X-ray images to annotate",
    )

    # Create upload directory
    upload_dir = Path("./uploaded_images")
    upload_dir.mkdir(parents=True, exist_ok=True)

    if uploaded_files:
        for uf in uploaded_files:
            # Use filename (without extension) as patient ID
            file_path = upload_dir / uf.name
            with open(file_path, "wb") as f:
                f.write(uf.getbuffer())
        st.sidebar.success(f"✅ {len(uploaded_files)} image(s) uploaded!")

    st.sidebar.divider()

    patients_path = st.sidebar.text_input(
        "Images Folder Path",
        value="./uploaded_images",
        help="Folder with images (use uploader above for cloud, or local path)",
    )

    # ── Load images ───────────────────────────────────────────────────
    patient_images = get_all_patient_images(patients_path)

    if not patient_images:
        st.error(
            f"No JPG images found under **{patients_path}**. "
            "Check the path and ensure folders contain .jpg files."
        )
        return

    # ── Sidebar: progress ─────────────────────────────────────────────
    annotated_count, total_count, progress_pct = get_annotation_progress(
        patient_images
    )
    st.sidebar.header("📊 Progress")
    st.sidebar.progress(progress_pct / 100)
    st.sidebar.metric("Annotated", f"{annotated_count} / {total_count}")
    st.sidebar.metric("Completion", f"{progress_pct:.1f}%")

    # ── Sidebar: filter ───────────────────────────────────────────────
    st.sidebar.header("🔍 Filter")
    show_filter = st.sidebar.radio(
        "Show",
        ["All Images", "Not Annotated", "Annotated"],
        index=1,
    )
    if show_filter == "Not Annotated":
        filtered_images = [i for i in patient_images if not i["annotated"]]
    elif show_filter == "Annotated":
        filtered_images = [i for i in patient_images if i["annotated"]]
    else:
        filtered_images = patient_images

    if not filtered_images:
        st.warning(f"No images match filter **{show_filter}**.")
        return

    # ── Navigation state ──────────────────────────────────────────────
    if "current_index" not in st.session_state:
        st.session_state.current_index = 0
    if st.session_state.current_index >= len(filtered_images):
        st.session_state.current_index = 0
    current_image = filtered_images[st.session_state.current_index]

    # ── Sidebar: drawing settings ─────────────────────────────────────
    st.sidebar.header("🎨 Drawing Settings")
    stroke_width = st.sidebar.slider("Brush Size", 1, 50, 15)

    drawing_mode = st.sidebar.selectbox(
        "Drawing Tool",
        ["freedraw", "rect", "circle", "line"],
        index=0,
        help="freedraw: freehand brush · rect/circle/line: shapes",
    )

    canvas_width = st.sidebar.slider(
        "Canvas Width (px)", 600, 1400, 900, step=50,
        help="Adjust to fit your screen. Aspect ratio is always preserved.",
    )

    # ── Sidebar: zoom controls ────────────────────────────────────────
    st.sidebar.header("🔍 Zoom & Pan")

    # Initialise zoom state
    if "zoom_level" not in st.session_state:
        st.session_state.zoom_level = 1.0
    if "zoom_pan_x" not in st.session_state:
        st.session_state.zoom_pan_x = 0.5
    if "zoom_pan_y" not in st.session_state:
        st.session_state.zoom_pan_y = 0.5

    # Quick zoom buttons
    zb1, zb2, zb3, zb4 = st.sidebar.columns(4)
    with zb1:
        if st.button("➖", key="zoom_out", help="Zoom out",
                     use_container_width=True):
            st.session_state.zoom_level = max(
                1.0, round(st.session_state.zoom_level - 0.25, 2)
            )
            st.rerun()
    with zb2:
        if st.button("➕", key="zoom_in", help="Zoom in",
                     use_container_width=True):
            st.session_state.zoom_level = min(
                5.0, round(st.session_state.zoom_level + 0.25, 2)
            )
            st.rerun()
    with zb3:
        if st.button("🔄", key="zoom_reset", help="Reset zoom",
                     use_container_width=True):
            st.session_state.zoom_level = 1.0
            st.session_state.zoom_pan_x = 0.5
            st.session_state.zoom_pan_y = 0.5
            st.rerun()
    with zb4:
        st.write(f"**{st.session_state.zoom_level:.1f}x**")

    zoom_level = st.sidebar.slider(
        "Zoom Level", 1.0, 5.0, st.session_state.zoom_level, step=0.25,
        help="Drag or use ➕/➖ buttons above.",
        key="zoom_slider",
    )
    if zoom_level != st.session_state.zoom_level:
        st.session_state.zoom_level = zoom_level

    if st.session_state.zoom_level > 1.0:
        # Pan controls – arrows + sliders
        st.sidebar.caption("**Pan (arrow buttons or sliders)**")
        pa1, pa2, pa3 = st.sidebar.columns([1, 1, 1])
        pan_step = 0.08
        with pa1:
            if st.button("⬅️", key="pan_left", use_container_width=True):
                st.session_state.zoom_pan_x = max(
                    0.0, round(st.session_state.zoom_pan_x - pan_step, 2)
                )
                st.rerun()
            if st.button("⬆️", key="pan_up", use_container_width=True):
                st.session_state.zoom_pan_y = max(
                    0.0, round(st.session_state.zoom_pan_y - pan_step, 2)
                )
                st.rerun()
        with pa2:
            if st.button("➡️", key="pan_right", use_container_width=True):
                st.session_state.zoom_pan_x = min(
                    1.0, round(st.session_state.zoom_pan_x + pan_step, 2)
                )
                st.rerun()
            if st.button("⬇️", key="pan_down", use_container_width=True):
                st.session_state.zoom_pan_y = min(
                    1.0, round(st.session_state.zoom_pan_y + pan_step, 2)
                )
                st.rerun()
        with pa3:
            st.write(
                f"x={st.session_state.zoom_pan_x:.2f}\n"
                f"y={st.session_state.zoom_pan_y:.2f}"
            )

        zoom_pan_x = st.sidebar.slider(
            "Pan H", 0.0, 1.0, st.session_state.zoom_pan_x, step=0.05,
            key="pan_h_slider",
        )
        zoom_pan_y = st.sidebar.slider(
            "Pan V", 0.0, 1.0, st.session_state.zoom_pan_y, step=0.05,
            key="pan_v_slider",
        )
        if zoom_pan_x != st.session_state.zoom_pan_x:
            st.session_state.zoom_pan_x = zoom_pan_x
        if zoom_pan_y != st.session_state.zoom_pan_y:
            st.session_state.zoom_pan_y = zoom_pan_y
    else:
        st.session_state.zoom_pan_x = 0.5
        st.session_state.zoom_pan_y = 0.5

    zoom_pan_x = st.session_state.zoom_pan_x
    zoom_pan_y = st.session_state.zoom_pan_y
    zoom_level = st.session_state.zoom_level

    # ── Tabs ──────────────────────────────────────────────────────────
    tab1, tab2, tab3 = st.tabs(["📝 Annotate", "📊 Compare", "📋 Guidelines"])

    # ================================================================
    # TAB 1 – ANNOTATE
    # ================================================================
    with tab1:
        # Navigation bar
        nav1, nav2, nav3, nav4, nav5 = st.columns([1, 1, 3, 1, 1])

        with nav1:
            if st.button("⬅️ Previous", use_container_width=True):
                if st.session_state.current_index > 0:
                    st.session_state.current_index -= 1
                    st.rerun()
        with nav2:
            if st.button("Next ➡️", use_container_width=True):
                if st.session_state.current_index < len(filtered_images) - 1:
                    st.session_state.current_index += 1
                    st.rerun()
        with nav3:
            st.info(
                f"Image **{st.session_state.current_index + 1}** of "
                f"**{len(filtered_images)}** · Patient "
                f"**{current_image['patient_id']}**"
            )
        with nav4:
            jump_to = st.number_input(
                "Go to #",
                min_value=1,
                max_value=len(filtered_images),
                value=st.session_state.current_index + 1,
                key="jump",
            )
            if jump_to - 1 != st.session_state.current_index:
                st.session_state.current_index = jump_to - 1
                st.rerun()
        with nav5:
            if current_image["annotated"]:
                st.success("✅ Done")
            else:
                st.warning("⏳ Pending")

        st.divider()

        # Load original image (NO CLAHE)
        img_rgb = load_image_from_path(current_image["image_path"])
        if img_rgb is None:
            st.error(f"Cannot load image: {current_image['image_path']}")
            # Debug info for cloud troubleshooting
            with st.expander("🔧 Debug Info"):
                st.write(f"**Path:** `{current_image['image_path']}`")
                st.write(f"**Exists:** {Path(current_image['image_path']).exists()}")
                if Path(current_image['image_path']).exists():
                    st.write(f"**Size:** {Path(current_image['image_path']).stat().st_size} bytes")
                st.write(f"**Python:** {sys.version}")
                st.write(f"**OpenCV:** {cv2.__version__}")
                import PIL
                st.write(f"**Pillow:** {PIL.__version__}")
            return

        # Debug: Show image info (can be removed in production)
        # st.caption(f"Image loaded: {img_rgb.shape}, dtype={img_rgb.dtype}")

        # Scale image to canvas_width preserving aspect ratio
        img_scaled, scale_ratio = scale_image_preserve_ratio(img_rgb, canvas_width)

        # Apply zoom: crop a region of the scaled image and enlarge it
        if zoom_level > 1.0:
            zh, zw = img_scaled.shape[:2]
            crop_h = int(zh / zoom_level)
            crop_w = int(zw / zoom_level)
            # Calculate crop origin from pan sliders
            max_y = zh - crop_h
            max_x = zw - crop_w
            start_y = int(zoom_pan_y * max_y)
            start_x = int(zoom_pan_x * max_x)
            img_cropped = img_scaled[
                start_y : start_y + crop_h,
                start_x : start_x + crop_w,
            ]
            # Resize cropped region back to canvas dimensions
            img_for_canvas = cv2.resize(
                img_cropped, (zw, zh), interpolation=cv2.INTER_LINEAR
            )
        else:
            img_for_canvas = img_scaled
            start_x, start_y, crop_w, crop_h = (
                0, 0, img_scaled.shape[1], img_scaled.shape[0]
            )

        canvas_h, canvas_w = img_for_canvas.shape[:2]

        # Ensure image is in correct format for PIL/canvas (uint8 RGB)
        if img_for_canvas.dtype != np.uint8:
            img_for_canvas = img_for_canvas.astype(np.uint8)
        if len(img_for_canvas.shape) == 2:  # Grayscale
            img_for_canvas = cv2.cvtColor(img_for_canvas, cv2.COLOR_GRAY2RGB)
        elif img_for_canvas.shape[2] == 4:  # RGBA
            img_for_canvas = cv2.cvtColor(img_for_canvas, cv2.COLOR_RGBA2RGB)

        # Create PIL Image for canvas background
        pil_background = Image.fromarray(img_for_canvas, mode='RGB')

        st.subheader(
            f"Patient {current_image['patient_id']} – "
            f"{current_image['image_name']}"
        )

        col_canvas, col_meta = st.columns([3, 1])

        # ── Canvas ────────────────────────────────────────────────────
        with col_canvas:
            # How many consolidation sites exist?
            state_key_preview = (
                f"consol_{current_image['patient_id']}_"
                f"{current_image['image_name']}"
            )
            n_sites = 1
            if state_key_preview in st.session_state:
                n_sites = max(1, len(st.session_state[state_key_preview]))

            # ── Site picker (controls stroke colour only) ─────────────
            # Always render the selectbox (even with 1 site) so
            # that the widget tree structure stays stable and the
            # canvas below never gets remounted / loses drawings.
            if "active_site" not in st.session_state:
                st.session_state.active_site = 0

            active_site = st.selectbox(
                "🫁 Active Consolidation Site (pick colour to draw)",
                list(range(n_sites)),
                format_func=lambda i: (
                    f"Site {i + 1} – {get_color_for_index(i)[1]}"
                ),
                index=min(
                    st.session_state.active_site, n_sites - 1
                ),
                key="site_picker",
            )
            st.session_state.active_site = active_site

            # Active site colour
            active_hex, active_label = get_color_for_index(active_site)
            r_c = int(active_hex[1:3], 16)
            g_c = int(active_hex[3:5], 16)
            b_c = int(active_hex[5:7], 16)
            fill_rgba = f"rgba({r_c}, {g_c}, {b_c}, 0.3)"

            # Build colour legend
            color_legend_parts = []
            for ci in range(n_sites):
                hex_c, label = get_color_for_index(ci)
                marker = "▶" if ci == active_site else "⬤"
                color_legend_parts.append(
                    f'<span style="color:{hex_c};font-weight:bold;">'
                    f'{marker} Site {ci + 1}</span>'
                )
            st.markdown(
                " ".join(color_legend_parts),
                unsafe_allow_html=True,
            )

            if zoom_level > 1.0:
                st.write(
                    f"**🎨 Drawing with {active_label} colour** "
                    f"(🔍 {zoom_level:.1f}x – Scroll ↕ to zoom, "
                    f"use arrow buttons to pan)"
                )
            else:
                st.write(
                    f"**🎨 Drawing with {active_label} colour** "
                    f"(Scroll ↕ over image to zoom)"
                )

            # ONE canvas per image – all sites draw here.
            # Only zoom/pan changes the key; switching active site
            # just changes the stroke colour, keeping all drawings.
            canvas_result = st_canvas(
                fill_color=fill_rgba,
                stroke_width=stroke_width,
                stroke_color=active_hex,
                background_image=pil_background,
                background_color="#000000",
                update_streamlit=True,
                height=canvas_h,
                width=canvas_w,
                drawing_mode=drawing_mode,
                key=f"canvas_{current_image['patient_id']}_"
                    f"{current_image['image_name']}_z{zoom_level}_"
                    f"x{zoom_pan_x}_y{zoom_pan_y}",
            )

            # --- Mouse-wheel zoom via JS injection ------------------
            import streamlit.components.v1 as components
            components.html(
                """
                <script>
                (function() {
                    // Find the Streamlit canvas elements
                    const doc = window.parent.document;
                    const canvases = doc.querySelectorAll(
                        'canvas[id*="canvas"]'
                    );
                    // Also listen on the overall app container
                    const appContainer = doc.querySelector(
                        '[data-testid="stAppViewContainer"]'
                    ) || doc.body;

                    function handleWheel(e) {
                        // Only act when scrolling over the canvas area
                        const target = e.target;
                        const isCanvas = (
                            target.tagName === 'CANVAS' ||
                            target.closest('.stCanvasContainer') ||
                            target.closest('[data-testid="stImage"]')
                        );
                        if (!isCanvas) return;

                        e.preventDefault();
                        e.stopPropagation();

                        // deltaY > 0 = scroll down = zoom out
                        const direction = e.deltaY > 0 ? 'out' : 'in';

                        // Find the zoom +/- buttons
                        const buttons = doc.querySelectorAll('button');
                        let targetBtn = null;
                        for (const btn of buttons) {
                            const txt = btn.textContent.trim();
                            if (direction === 'in' && txt === '➕') {
                                targetBtn = btn;
                                break;
                            }
                            if (direction === 'out' && txt === '➖') {
                                targetBtn = btn;
                                break;
                            }
                        }
                        if (targetBtn) {
                            targetBtn.click();
                        }
                    }

                    // Attach with capture to intercept before scroll
                    appContainer.addEventListener(
                        'wheel', handleWheel, {passive: false, capture: true}
                    );
                })();
                </script>
                """,
                height=0,
            )

            # Show thumbnail with zoom rectangle when zoomed in
            if zoom_level > 1.0:
                st.caption("🔍 Overview – red box shows current zoom region")
                thumb_w = 250
                thumb, _ = scale_image_preserve_ratio(img_scaled, thumb_w)
                thumb_h_actual = thumb.shape[0]
                # Draw rectangle on thumbnail showing zoomed area
                th_ratio = thumb_w / img_scaled.shape[1]
                rx1 = int(start_x * th_ratio)
                ry1 = int(start_y * th_ratio)
                rx2 = int((start_x + crop_w) * th_ratio)
                ry2 = int((start_y + crop_h) * th_ratio)
                thumb_copy = thumb.copy()
                cv2.rectangle(thumb_copy, (rx1, ry1), (rx2, ry2),
                              (255, 0, 0), 2)
                st.image(thumb_copy, width=thumb_w)

        # ── Metadata column ───────────────────────────────────────────
        with col_meta:
            st.write("**📋 Annotation Metadata**")

            # Load existing metadata if any
            existing_metadata = {}
            if current_image["metadata_path"].exists():
                try:
                    with open(current_image["metadata_path"], "r") as f:
                        existing_metadata = json.load(f)
                except Exception:
                    pass

            # ── Multilobar consolidations ─────────────────────────────
            st.write("**🫁 Consolidation Sites**")

            location_options = [
                "Right Upper Lobe",
                "Right Middle Lobe",
                "Right Lower Lobe",
                "Left Upper Lobe",
                "Left Lower Lobe",
                "Lingula",
            ]
            type_options = [
                "Solid Consolidation",
                "Ground Glass Opacity",
                "Air Bronchograms",
                "Pleural Effusion",
                "Mixed",
            ]

            # Initialise session-state list for consolidations
            state_key = (
                f"consol_{current_image['patient_id']}_"
                f"{current_image['image_name']}"
            )
            if state_key not in st.session_state:
                # Pre-fill from existing metadata
```
|
| 804 |
-
saved = existing_metadata.get("consolidations", [])
|
| 805 |
-
if saved:
|
| 806 |
-
st.session_state[state_key] = saved
|
| 807 |
-
else:
|
| 808 |
-
st.session_state[state_key] = [
|
| 809 |
-
{"location": "Right Lower Lobe",
|
| 810 |
-
"type": "Solid Consolidation"}
|
| 811 |
-
]
|
| 812 |
-
|
| 813 |
-
consolidations = st.session_state[state_key]
|
| 814 |
-
|
| 815 |
-
# Render each consolidation entry
|
| 816 |
-
for idx, entry in enumerate(consolidations):
|
| 817 |
-
site_hex, site_label = get_color_for_index(idx)
|
| 818 |
-
with st.expander(
|
| 819 |
-
f"⬀ Site {idx + 1}: {entry['location']} "
|
| 820 |
-
f"({site_label})",
|
| 821 |
-
expanded=True,
|
| 822 |
-
):
|
| 823 |
-
loc = st.selectbox(
|
| 824 |
-
"Location",
|
| 825 |
-
location_options,
|
| 826 |
-
index=(
|
| 827 |
-
location_options.index(entry["location"])
|
| 828 |
-
if entry["location"] in location_options
|
| 829 |
-
else 0
|
| 830 |
-
),
|
| 831 |
-
key=f"loc_{state_key}_{idx}",
|
| 832 |
-
)
|
| 833 |
-
ctype = st.selectbox(
|
| 834 |
-
"Type",
|
| 835 |
-
type_options,
|
| 836 |
-
index=(
|
| 837 |
-
type_options.index(entry["type"])
|
| 838 |
-
if entry["type"] in type_options
|
| 839 |
-
else 0
|
| 840 |
-
),
|
| 841 |
-
key=f"type_{state_key}_{idx}",
|
| 842 |
-
)
|
| 843 |
-
consolidations[idx] = {"location": loc, "type": ctype}
|
| 844 |
-
|
| 845 |
-
if len(consolidations) > 1:
|
| 846 |
-
if st.button(
|
| 847 |
-
"ποΈ Remove", key=f"rm_{state_key}_{idx}",
|
| 848 |
-
use_container_width=True,
|
| 849 |
-
):
|
| 850 |
-
consolidations.pop(idx)
|
| 851 |
-
st.rerun()
|
| 852 |
-
|
| 853 |
-
if st.button("β Add Another Consolidation Site",
|
| 854 |
-
use_container_width=True):
|
| 855 |
-
consolidations.append(
|
| 856 |
-
{"location": "Left Lower Lobe",
|
| 857 |
-
"type": "Solid Consolidation"}
|
| 858 |
-
)
|
| 859 |
-
# Auto-switch to the new site so the next strokes
|
| 860 |
-
# use the new colour immediately
|
| 861 |
-
st.session_state.active_site = len(consolidations) - 1
|
| 862 |
-
st.rerun()
|
| 863 |
-
|
| 864 |
-
st.divider()
|
| 865 |
-
|
| 866 |
-
# Pattern summary
|
| 867 |
-
involved_lobes = list({c["location"] for c in consolidations})
|
| 868 |
-
if len(involved_lobes) >= 2:
|
| 869 |
-
st.info(
|
| 870 |
-
f"π΄ **Multilobar** pneumonia β "
|
| 871 |
-
f"{len(involved_lobes)} lobes involved"
|
| 872 |
-
)
|
| 873 |
-
else:
|
| 874 |
-
st.info(f"π‘ **Unilobar** β {involved_lobes[0]}")
|
| 875 |
-
|
| 876 |
-
confidence = st.slider(
|
| 877 |
-
"Confidence",
|
| 878 |
-
min_value=1,
|
| 879 |
-
max_value=5,
|
| 880 |
-
value=existing_metadata.get("confidence", 5),
|
| 881 |
-
)
|
| 882 |
-
notes = st.text_area(
|
| 883 |
-
"Clinical Notes",
|
| 884 |
-
value=existing_metadata.get("clinical_notes", ""),
|
| 885 |
-
placeholder="E.g., Silhouette sign present, bilateral involvement",
|
| 886 |
-
)
|
| 887 |
-
|
| 888 |
-
# Drawn area stats
|
| 889 |
-
if canvas_result.image_data is not None:
|
| 890 |
-
alpha = canvas_result.image_data[:, :, 3]
|
| 891 |
-
drawn_px = int(np.sum(alpha > 0))
|
| 892 |
-
total_px = alpha.shape[0] * alpha.shape[1]
|
| 893 |
-
if drawn_px > 0:
|
| 894 |
-
st.metric(
|
| 895 |
-
"Drawn Area",
|
| 896 |
-
f"{(drawn_px / total_px) * 100:.2f}%",
|
| 897 |
-
)
|
| 898 |
-
st.metric("Pixels", f"{drawn_px:,}")
|
| 899 |
-
|
| 900 |
-
st.divider()
|
| 901 |
-
|
| 902 |
-
# ββ Save / Delete ββββββββββββββββββββββββββββββββββββββββββ
|
| 903 |
-
b1, b2 = st.columns(2)
|
| 904 |
-
|
| 905 |
-
def _build_metadata():
|
| 906 |
-
return {
|
| 907 |
-
"consolidations": consolidations,
|
| 908 |
-
"involved_lobes": involved_lobes,
|
| 909 |
-
"multilobar": len(involved_lobes) >= 2,
|
| 910 |
-
"confidence": confidence,
|
| 911 |
-
"clinical_notes": notes,
|
| 912 |
-
}
|
| 913 |
-
|
| 914 |
-
with b1:
|
| 915 |
-
if st.button(
|
| 916 |
-
"πΎ Save & Next", type="primary",
|
| 917 |
-
use_container_width=True,
|
| 918 |
-
):
|
| 919 |
-
if (
|
| 920 |
-
canvas_result.image_data is not None
|
| 921 |
-
and np.sum(canvas_result.image_data[:, :, 3] > 0) > 0
|
| 922 |
-
):
|
| 923 |
-
mask = canvas_result.image_data[:, :, 3]
|
| 924 |
-
save_annotation_in_patient_folder(
|
| 925 |
-
current_image["image_path"],
|
| 926 |
-
mask,
|
| 927 |
-
annotator_name,
|
| 928 |
-
_build_metadata(),
|
| 929 |
-
img_rgb.shape,
|
| 930 |
-
)
|
| 931 |
-
st.success("β
Saved!")
|
| 932 |
-
if (
|
| 933 |
-
st.session_state.current_index
|
| 934 |
-
< len(filtered_images) - 1
|
| 935 |
-
):
|
| 936 |
-
st.session_state.current_index += 1
|
| 937 |
-
st.rerun()
|
| 938 |
-
else:
|
| 939 |
-
st.error("Please draw an annotation first!")
|
| 940 |
-
|
| 941 |
-
with b2:
|
| 942 |
-
if st.button("πΎ Save Only", use_container_width=True):
|
| 943 |
-
if (
|
| 944 |
-
canvas_result.image_data is not None
|
| 945 |
-
and np.sum(canvas_result.image_data[:, :, 3] > 0) > 0
|
| 946 |
-
):
|
| 947 |
-
mask = canvas_result.image_data[:, :, 3]
|
| 948 |
-
save_annotation_in_patient_folder(
|
| 949 |
-
current_image["image_path"],
|
| 950 |
-
mask,
|
| 951 |
-
annotator_name,
|
| 952 |
-
_build_metadata(),
|
| 953 |
-
img_rgb.shape,
|
| 954 |
-
)
|
| 955 |
-
st.success("β
Saved!")
|
| 956 |
-
else:
|
| 957 |
-
st.error("Please draw an annotation first!")
|
| 958 |
-
|
| 959 |
-
if current_image["annotated"]:
|
| 960 |
-
if st.button("ποΈ Delete Annotation",
|
| 961 |
-
use_container_width=True):
|
| 962 |
-
if current_image["mask_path"].exists():
|
| 963 |
-
current_image["mask_path"].unlink()
|
| 964 |
-
if current_image["metadata_path"].exists():
|
| 965 |
-
current_image["metadata_path"].unlink()
|
| 966 |
-
st.success("Annotation deleted!")
|
| 967 |
-
st.rerun()
|
| 968 |
-
|
| 969 |
-
# ββ Download Buttons βββββββββββββββββββββββββββββββββββββββ
|
| 970 |
-
st.divider()
|
| 971 |
-
st.write("**π₯ Download Annotation**")
|
| 972 |
-
|
| 973 |
-
# Generate file ID from patient_id and image name
|
| 974 |
-
file_id = f"{current_image['patient_id']}_{current_image['image_path'].stem}"
|
| 975 |
-
|
| 976 |
-
# Download mask
|
| 977 |
-
if (
|
| 978 |
-
canvas_result.image_data is not None
|
| 979 |
-
and np.sum(canvas_result.image_data[:, :, 3] > 0) > 0
|
| 980 |
-
):
|
| 981 |
-
# Create mask from current canvas
|
| 982 |
-
mask_data = canvas_result.image_data[:, :, 3]
|
| 983 |
-
# Resize to original image dimensions
|
| 984 |
-
orig_h, orig_w = img_rgb.shape[:2]
|
| 985 |
-
mask_resized = cv2.resize(
|
| 986 |
-
mask_data, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST
|
| 987 |
-
)
|
| 988 |
-
|
| 989 |
-
# Encode mask as PNG
|
| 990 |
-
_, mask_buffer = cv2.imencode(".png", mask_resized)
|
| 991 |
-
mask_bytes = mask_buffer.tobytes()
|
| 992 |
-
|
| 993 |
-
# Create JSON metadata
|
| 994 |
-
metadata_download = {
|
| 995 |
-
"image_id": file_id,
|
| 996 |
-
"image_name": current_image["image_name"],
|
| 997 |
-
"patient_id": current_image["patient_id"],
|
| 998 |
-
"annotator": annotator_name,
|
| 999 |
-
"timestamp": datetime.now().strftime("%Y%m%d_%H%M%S"),
|
| 1000 |
-
"consolidations": consolidations,
|
| 1001 |
-
"involved_lobes": involved_lobes,
|
| 1002 |
-
"multilobar": len(involved_lobes) >= 2,
|
| 1003 |
-
"confidence": confidence,
|
| 1004 |
-
"clinical_notes": notes,
|
| 1005 |
-
"mask_dimensions": {"width": orig_w, "height": orig_h},
|
| 1006 |
-
"annotated_pixels": int(np.sum(mask_resized > 0)),
|
| 1007 |
-
"annotated_area_percent": float(
|
| 1008 |
-
np.sum(mask_resized > 0) / (orig_w * orig_h) * 100
|
| 1009 |
-
),
|
| 1010 |
-
}
|
| 1011 |
-
json_bytes = json.dumps(metadata_download, indent=2).encode("utf-8")
|
| 1012 |
-
|
| 1013 |
-
dl1, dl2 = st.columns(2)
|
| 1014 |
-
with dl1:
|
| 1015 |
-
st.download_button(
|
| 1016 |
-
label="π₯ Download Mask (PNG)",
|
| 1017 |
-
data=mask_bytes,
|
| 1018 |
-
file_name=f"{file_id}_mask.png",
|
| 1019 |
-
mime="image/png",
|
| 1020 |
-
use_container_width=True,
|
| 1021 |
-
)
|
| 1022 |
-
with dl2:
|
| 1023 |
-
st.download_button(
|
| 1024 |
-
label="π₯ Download JSON",
|
| 1025 |
-
data=json_bytes,
|
| 1026 |
-
file_name=f"{file_id}_annotation.json",
|
| 1027 |
-
mime="application/json",
|
| 1028 |
-
use_container_width=True,
|
| 1029 |
-
)
|
| 1030 |
-
|
| 1031 |
-
st.caption(f"Files will be named: `{file_id}_mask.png` and `{file_id}_annotation.json`")
|
| 1032 |
-
|
| 1033 |
-
elif current_image["annotated"] and current_image["mask_path"].exists():
|
| 1034 |
-
# Load existing saved annotation for download
|
| 1035 |
-
existing_mask = cv2.imread(
|
| 1036 |
-
str(current_image["mask_path"]), cv2.IMREAD_GRAYSCALE
|
| 1037 |
-
)
|
| 1038 |
-
if existing_mask is not None:
|
| 1039 |
-
_, mask_buffer = cv2.imencode(".png", existing_mask)
|
| 1040 |
-
mask_bytes = mask_buffer.tobytes()
|
| 1041 |
-
|
| 1042 |
-
# Load existing JSON
|
| 1043 |
-
if current_image["metadata_path"].exists():
|
| 1044 |
-
with open(current_image["metadata_path"], "r") as f:
|
| 1045 |
-
existing_json = json.load(f)
|
| 1046 |
-
json_bytes = json.dumps(existing_json, indent=2).encode("utf-8")
|
| 1047 |
-
else:
|
| 1048 |
-
json_bytes = b"{}"
|
| 1049 |
-
|
| 1050 |
-
dl1, dl2 = st.columns(2)
|
| 1051 |
-
with dl1:
|
| 1052 |
-
st.download_button(
|
| 1053 |
-
label="π₯ Download Saved Mask",
|
| 1054 |
-
data=mask_bytes,
|
| 1055 |
-
file_name=f"{file_id}_mask.png",
|
| 1056 |
-
mime="image/png",
|
| 1057 |
-
use_container_width=True,
|
| 1058 |
-
)
|
| 1059 |
-
with dl2:
|
| 1060 |
-
st.download_button(
|
| 1061 |
-
label="π₯ Download Saved JSON",
|
| 1062 |
-
data=json_bytes,
|
| 1063 |
-
file_name=f"{file_id}_annotation.json",
|
| 1064 |
-
mime="application/json",
|
| 1065 |
-
use_container_width=True,
|
| 1066 |
-
)
|
| 1067 |
-
st.caption(f"Files: `{file_id}_mask.png` / `{file_id}_annotation.json`")
|
| 1068 |
-
else:
|
| 1069 |
-
st.info("Draw an annotation to enable downloads")
|
| 1070 |
-
|
| 1071 |
-
# ================================================================
|
| 1072 |
-
# TAB 2 β COMPARE
|
| 1073 |
-
# ================================================================
|
| 1074 |
-
with tab2:
|
| 1075 |
-
st.header("Compare Annotations Between Radiologists")
|
| 1076 |
-
|
| 1077 |
-
cmp1, cmp2 = st.columns(2)
|
| 1078 |
-
with cmp1:
|
| 1079 |
-
st.subheader("Radiologist 1")
|
| 1080 |
-
mask1_file = st.file_uploader(
|
| 1081 |
-
"Upload Mask 1", type=["png"], key="comp1"
|
| 1082 |
-
)
|
| 1083 |
-
name1 = st.text_input("Name", value="Radiologist 1", key="name1")
|
| 1084 |
-
with cmp2:
|
| 1085 |
-
st.subheader("Radiologist 2")
|
| 1086 |
-
mask2_file = st.file_uploader(
|
| 1087 |
-
"Upload Mask 2", type=["png"], key="comp2"
|
| 1088 |
-
)
|
| 1089 |
-
name2 = st.text_input("Name", value="Radiologist 2", key="name2")
|
| 1090 |
-
|
| 1091 |
-
if mask1_file and mask2_file:
|
| 1092 |
-
mask1 = cv2.imdecode(
|
| 1093 |
-
np.frombuffer(mask1_file.read(), np.uint8),
|
| 1094 |
-
cv2.IMREAD_GRAYSCALE,
|
| 1095 |
-
)
|
| 1096 |
-
mask1_file.seek(0)
|
| 1097 |
-
mask2 = cv2.imdecode(
|
| 1098 |
-
np.frombuffer(mask2_file.read(), np.uint8),
|
| 1099 |
-
cv2.IMREAD_GRAYSCALE,
|
| 1100 |
-
)
|
| 1101 |
-
mask2_file.seek(0)
|
| 1102 |
-
|
| 1103 |
-
if mask1.shape != mask2.shape:
|
| 1104 |
-
mask2 = cv2.resize(mask2, (mask1.shape[1], mask1.shape[0]))
|
| 1105 |
-
|
| 1106 |
-
dice = calculate_dice_coefficient(mask1, mask2)
|
| 1107 |
-
iou = calculate_iou(mask1, mask2)
|
| 1108 |
-
precision, recall = calculate_precision_recall(mask1, mask2)
|
| 1109 |
-
|
| 1110 |
-
st.subheader("π Inter-Rater Agreement")
|
| 1111 |
-
m1, m2, m3, m4 = st.columns(4)
|
| 1112 |
-
m1.metric("Dice", f"{dice:.4f}")
|
| 1113 |
-
m2.metric("IoU", f"{iou:.4f}")
|
| 1114 |
-
m3.metric("Precision", f"{precision:.4f}")
|
| 1115 |
-
m4.metric("Recall", f"{recall:.4f}")
|
| 1116 |
-
|
| 1117 |
-
if dice >= 0.80:
|
| 1118 |
-
st.success("β
Excellent Agreement")
|
| 1119 |
-
elif dice >= 0.70:
|
| 1120 |
-
st.info("βΉοΈ Good Agreement")
|
| 1121 |
-
elif dice >= 0.50:
|
| 1122 |
-
st.warning("β οΈ Fair Agreement β Review recommended")
|
| 1123 |
-
else:
|
| 1124 |
-
st.error("β Poor Agreement β Consensus needed")
|
| 1125 |
-
|
| 1126 |
-
st.subheader("Visual Comparison")
|
| 1127 |
-
overlay = np.zeros(
|
| 1128 |
-
(mask1.shape[0], mask1.shape[1], 3), dtype=np.uint8
|
| 1129 |
-
)
|
| 1130 |
-
overlay[mask1 > 0] = [0, 255, 0]
|
| 1131 |
-
overlap = (mask1 > 0) & (mask2 > 0)
|
| 1132 |
-
overlay[mask2 > 0] = [255, 0, 0]
|
| 1133 |
-
overlay[overlap] = [255, 255, 0]
|
| 1134 |
-
st.image(
|
| 1135 |
-
overlay,
|
| 1136 |
-
caption=(
|
| 1137 |
-
f"Green: {name1} | Red: {name2} | Yellow: Agreement"
|
| 1138 |
-
),
|
| 1139 |
-
use_column_width=True,
|
| 1140 |
-
)
|
| 1141 |
-
|
| 1142 |
-
# ================================================================
|
| 1143 |
-
# TAB 3 β GUIDELINES
|
| 1144 |
-
# ================================================================
|
| 1145 |
-
with tab3:
|
| 1146 |
-
st.header("π Annotation Guidelines")
|
| 1147 |
-
st.markdown(
|
| 1148 |
-
"""
|
| 1149 |
-
### What to Annotate
|
| 1150 |
-
|
| 1151 |
-
**Pneumonia consolidation** appears as white / opaque areas where air
|
| 1152 |
-
spaces are filled with fluid.
|
| 1153 |
-
|
| 1154 |
-
### Multilobar Pneumonia
|
| 1155 |
-
|
| 1156 |
-
When consolidation is present in **more than one lobe**, add a separate
|
| 1157 |
-
consolidation entry for each affected site using the **"β Add Another
|
| 1158 |
-
Consolidation Site"** button. This lets us track multilobar involvement
|
| 1159 |
-
accurately.
|
| 1160 |
-
|
| 1161 |
-
### Key Radiologic Signs
|
| 1162 |
-
|
| 1163 |
-
#### β
Include in Your Mask
|
| 1164 |
-
1. **Air Bronchograms** β Dark branching tubes inside consolidation
|
| 1165 |
-
2. **Silhouette Sign** β Heart / diaphragm border lost in consolidation
|
| 1166 |
-
3. **Solid Consolidation** β Dense white opaque areas
|
| 1167 |
-
4. **Ground Glass Opacity** β Subtle hazy areas at edges
|
| 1168 |
-
|
| 1169 |
-
#### β Exclude from Your Mask
|
| 1170 |
-
1. **Ribs** β Trace "through" rib shadows
|
| 1171 |
-
2. **Normal lung tissue** β Don't over-segment
|
| 1172 |
-
3. **Pleural effusion** (unless asked) β Smooth meniscus sign
|
| 1173 |
-
|
| 1174 |
-
### Drawing Tools
|
| 1175 |
-
| Tool | Use for |
|
| 1176 |
-
|---|---|
|
| 1177 |
-
| **freedraw** | Freehand tracing of consolidation borders |
|
| 1178 |
-
| **rect** | Quick rectangular ROI |
|
| 1179 |
-
| **circle** | Circular / oval regions |
|
| 1180 |
-
| **line** | Straight edge tracing |
|
| 1181 |
-
|
| 1182 |
-
### Colors
|
| 1183 |
-
Each consolidation site is automatically assigned a **unique colour**
|
| 1184 |
-
(Lime, Red, Blue, Gold, β¦). Select the active site before drawing
|
| 1185 |
-
so annotations are visually distinguishable.
|
| 1186 |
-
|
| 1187 |
-
### Tips
|
| 1188 |
-
1. **Draw directly** on the canvas β no external tools needed
|
| 1189 |
-
2. **Adjust brush size** with the sidebar slider
|
| 1190 |
-
3. **Zoom**: scroll β your mouse wheel over the image, or use the
|
| 1191 |
-
β / β buttons in the sidebar
|
| 1192 |
-
4. **Pan**: when zoomed in, use the arrow buttons (β¬
οΈβ‘οΈβ¬οΈβ¬οΈ) or
|
| 1193 |
-
sliders to navigate
|
| 1194 |
-
5. **Be consistent** β same criteria for every image
|
| 1195 |
-
|
| 1196 |
-
### Quality Metrics
|
| 1197 |
-
| Dice Score | Interpretation |
|
| 1198 |
-
|---|---|
|
| 1199 |
-
| > 0.80 | β
Excellent agreement |
|
| 1200 |
-
| 0.70 β 0.80 | Good agreement |
|
| 1201 |
-
| < 0.70 | β οΈ Needs review / consensus |
|
| 1202 |
-
"""
|
| 1203 |
-
)
|
| 1204 |
-
|
| 1205 |
-
|
| 1206 |
-
if __name__ == "__main__":
|
| 1207 |
-
main()
|
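The Compare tab above calls `calculate_dice_coefficient`, `calculate_iou`, and `calculate_precision_recall`, which are defined earlier in `streamlit_app.py` and fall outside this hunk. A minimal sketch of what such helpers typically compute, assuming binarised uint8 masks (the exact bodies in the deleted file may differ):

```python
import numpy as np

def calculate_dice_coefficient(mask1: np.ndarray, mask2: np.ndarray) -> float:
    """Dice = 2*|A∩B| / (|A| + |B|) on binarised masks."""
    a, b = mask1 > 0, mask2 > 0
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / denom)

def calculate_iou(mask1: np.ndarray, mask2: np.ndarray) -> float:
    """IoU = |A∩B| / |A∪B|."""
    a, b = mask1 > 0, mask2 > 0
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(a, b).sum() / union)

def calculate_precision_recall(pred: np.ndarray, truth: np.ndarray):
    """Precision/recall of `pred` pixels against `truth` pixels."""
    p, t = pred > 0, truth > 0
    tp = np.logical_and(p, t).sum()
    precision = tp / p.sum() if p.sum() else 0.0
    recall = tp / t.sum() if t.sum() else 0.0
    return float(precision), float(recall)

# Two 4x4 masks: top half vs left half -> 4 overlapping pixels
m1 = np.zeros((4, 4), np.uint8); m1[:2, :] = 255
m2 = np.zeros((4, 4), np.uint8); m2[:, :2] = 255
print(round(calculate_dice_coefficient(m1, m2), 3))  # 0.5
print(round(calculate_iou(m1, m2), 3))               # 0.333
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the app's agreement thresholds (0.80, 0.70, 0.50) are stated in terms of Dice.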
src/test_image_loading.py
DELETED
@@ -1,121 +0,0 @@

```python
#!/usr/bin/env python3
"""
Test script to verify image loading works correctly.
Run this to diagnose JPG rendering issues.
"""

import sys
from pathlib import Path

# Check Python version
print(f"Python version: {sys.version}")
print(f"Python executable: {sys.executable}")
print("-" * 50)

# Test imports
print("Testing imports...")
try:
    import numpy as np
    print(f"✅ numpy: {np.__version__}")
except ImportError as e:
    print(f"❌ numpy: {e}")

try:
    import cv2
    print(f"✅ opencv: {cv2.__version__}")
except ImportError as e:
    print(f"❌ opencv: {e}")

try:
    from PIL import Image
    import PIL
    print(f"✅ Pillow: {PIL.__version__}")
except ImportError as e:
    print(f"❌ Pillow: {e}")

try:
    import streamlit as st
    print(f"✅ streamlit: {st.__version__}")
except ImportError as e:
    print(f"❌ streamlit: {e}")

print("-" * 50)

# Test image loading functions
def test_load_image(image_path):
    """Test different methods of loading an image."""
    image_path = Path(image_path)
    print(f"\nTesting: {image_path}")
    print(f"  Exists: {image_path.exists()}")

    if not image_path.exists():
        print("  ❌ File not found!")
        return None

    # Method 1: PIL
    try:
        with Image.open(image_path) as pil_img:
            if pil_img.mode != 'RGB':
                pil_img = pil_img.convert('RGB')
            img = np.array(pil_img)
        print(f"  ✅ PIL loaded: shape={img.shape}, dtype={img.dtype}")
    except Exception as e:
        print(f"  ❌ PIL failed: {e}")

    # Method 2: cv2.imread
    try:
        img = cv2.imread(str(image_path))
        if img is not None:
            img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            print(f"  ✅ cv2.imread loaded: shape={img_rgb.shape}, dtype={img_rgb.dtype}")
        else:
            print("  ❌ cv2.imread returned None")
    except Exception as e:
        print(f"  ❌ cv2.imread failed: {e}")

    # Method 3: cv2.imdecode (bytes)
    try:
        with open(image_path, 'rb') as f:
            file_bytes = f.read()
        nparr = np.frombuffer(file_bytes, np.uint8)
        img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
        if img is not None:
            img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            print(f"  ✅ cv2.imdecode loaded: shape={img_rgb.shape}, dtype={img_rgb.dtype}")
        else:
            print("  ❌ cv2.imdecode returned None")
    except Exception as e:
        print(f"  ❌ cv2.imdecode failed: {e}")

    return img

# Find and test images
print("\nSearching for test images...")
base_dirs = [
    Path("./uploaded_images"),
    Path("./Pacientes"),
    Path("."),
]

test_images = []
for base in base_dirs:
    if base.exists():
        for ext in ["*.jpg", "*.JPG", "*.jpeg", "*.png"]:
            test_images.extend(list(base.glob(f"**/{ext}"))[:2])

if test_images:
    print(f"Found {len(test_images)} images to test")
    for img_path in test_images[:5]:  # Test up to 5 images
        test_load_image(img_path)
else:
    print("No images found. Creating a test image...")
    # Create a simple test image
    test_img = np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)
    test_path = Path("./test_image.jpg")
    cv2.imwrite(str(test_path), test_img)
    print(f"Created test image: {test_path}")
    test_load_image(test_path)

print("\n" + "=" * 50)
print("Test complete!")
print("=" * 50)
```
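The script above tests PIL and the two OpenCV loaders separately because they disagree on channel order: `cv2.imread`/`cv2.imdecode` return pixels in BGR order while PIL returns RGB, so skipping the `cvtColor` step makes red regions render blue. A minimal numpy-only demonstration of the swap:

```python
import numpy as np

# A single pure-red pixel as PIL would return it (RGB order)
red_rgb = np.array([[[255, 0, 0]]], np.uint8)

# cv2.imread would hand back the same pixel with channels reversed (BGR)
red_bgr = red_rgb[..., ::-1]
print(red_bgr[0, 0].tolist())            # [0, 0, 255] -> looks blue if shown as RGB

# cv2.cvtColor(img, cv2.COLOR_BGR2RGB) is equivalent to reversing back
print(red_bgr[..., ::-1][0, 0].tolist())  # [255, 0, 0]
```

This is why every cv2 load in the app is followed by a `COLOR_BGR2RGB` conversion before handing the array to Streamlit.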
src/uploaded_images/7035909_20240326.jpg
DELETED (Git LFS file)

src/uploaded_images/7043276_20240403.jpg
DELETED (Git LFS file)