Upload folder using huggingface_hub
- README.md +44 -7
- app.py +672 -0
- requirements.txt +6 -0

README.md CHANGED
@@ -1,12 +1,49 @@

Before:

---
title: Microscopy
emoji:
colorFrom:
colorTo:
sdk: gradio
sdk_version:
app_file: app.py
pinned:
---
After:

---
title: Microscopy CV Toolkit
emoji: "\U0001F52C"
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: "5.0"
app_file: app.py
pinned: true
license: apache-2.0
tags:
- microscopy
- computer-vision
- image-quality
- opencv
---

# Microscopy CV Toolkit

I needed a quick way to check whether my microscopy images are in focus before feeding them into ML pipelines. I couldn't find anything lightweight that just does the basics without pulling in a 2GB model, so I built this. Pure OpenCV, no models, runs instantly.

## What it does

Four tools in one space:

**Focus Quality** - Runs Tenengrad, Laplacian variance, normalized variance, and Vollath F4 on your image. Generates a heatmap overlay so you can see which regions are sharp and which are mush. I use this to filter out bad acquisitions before they waste training time.

**Illumination Analysis** - Checks brightness distribution, detects clipping (blown highlights / crushed blacks), measures dynamic range, and flags vignetting. Splits the image into a 3x3 zone grid so you can see if your Kohler illumination is actually aligned or if you've been lying to yourself.

**Microscopy Type Detection** - Tries to figure out if you're looking at brightfield, darkfield, phase contrast, fluorescence, or polarized light. It's histogram-based, not magic. Works surprisingly well for standard preparations, but don't expect miracles on weird edge cases.

**Image Enhancement** - CLAHE, unsharp mask, non-local means denoising, and auto white balance. Side-by-side before/after so you can see what each one does. Good for quick previews before you commit to a processing pipeline. The CLAHE configuration is tuned for typical microscopy contrast ranges, but you can adjust the parameters.

## How it works

Everything is classical CV. Focus metrics are computed on grayscale using gradient operators and statistical measures. Illumination analysis uses histogram statistics and zone-based sampling. Type detection looks at histogram shape, mean intensity, contrast ratios, and edge density to classify the imaging modality. Enhancement is standard OpenCV stuff.

No GPU needed. No model downloads. Should run in under a second for most images.

## Running locally

```bash
pip install -r requirements.txt
python app.py
```

## License

Apache 2.0. Do whatever you want with it.
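For a quick standalone check outside the app, the Laplacian-variance metric at the heart of the focus tool fits in a few lines. This is a condensed pure-NumPy stand-in for the `cv2.Laplacian` call in app.py, not the app's exact code:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian; higher means sharper."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (128, 128)).astype(np.float64)
# a crude 5x5 box blur stands in for defocus
k, h, w = 5, 128, 128
pad = np.pad(sharp, k // 2, mode="edge")
blurred = sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

The same ordering (sharp scores higher than blurred) is what the app's PASS/MARGINAL/FAIL thresholds rely on.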
app.py ADDED
@@ -0,0 +1,672 @@
"""
Microscopy CV Toolkit — classical computer-vision tools for microscopy image QC.
No ML models, pure OpenCV + NumPy + SciPy.
"""

import cv2
import numpy as np
import gradio as gr
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
from scipy import ndimage
from PIL import Image
import io

# ---------------------------------------------------------------------------
# Utilities
# ---------------------------------------------------------------------------

def _to_gray(img: np.ndarray) -> np.ndarray:
    if len(img.shape) == 2:
        return img
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)


def _fig_to_image(fig) -> np.ndarray:
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight", dpi=120)
    plt.close(fig)
    buf.seek(0)
    return np.array(Image.open(buf))

# ---------------------------------------------------------------------------
# Tab 1 — Focus Quality
# ---------------------------------------------------------------------------

def _tenengrad(gray: np.ndarray) -> float:
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))


def _laplacian_variance(gray: np.ndarray) -> float:
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return float(lap.var())


def _normalized_variance(gray: np.ndarray) -> float:
    mean = gray.mean()
    if mean < 1e-6:
        return 0.0
    return float(gray.astype(np.float64).var() / mean)


def _vollath_f4(gray: np.ndarray) -> float:
    g = gray.astype(np.float64)
    h, w = g.shape
    t1 = np.sum(g[:, :w - 1] * g[:, 1:w])
    t2 = np.sum(g[:, :w - 2] * g[:, 2:w])
    return float(t1 - t2)


def _focus_heatmap(gray: np.ndarray, block: int = 64) -> np.ndarray:
    h, w = gray.shape
    rows = h // block
    cols = w // block
    hmap = np.zeros((rows, cols), dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            patch = gray[r * block:(r + 1) * block, c * block:(c + 1) * block]
            lap = cv2.Laplacian(patch, cv2.CV_64F)
            hmap[r, c] = lap.var()
    return hmap


def _score_label(val: float, low: float, high: float) -> str:
    if val >= high:
        return "PASS (sharp)"
    elif val >= low:
        return "MARGINAL"
    return "FAIL (blurry)"


def analyze_focus(image: np.ndarray):
    if image is None:
        return None, "Upload an image first."

    gray = _to_gray(image)

    tenen = _tenengrad(gray)
    lap_var = _laplacian_variance(gray)
    norm_var = _normalized_variance(gray)
    vollath = _vollath_f4(gray)

    # Thresholds (heuristic, tuned for typical microscopy)
    tenen_verdict = _score_label(tenen, 200, 1000)
    lap_verdict = _score_label(lap_var, 50, 300)
    norm_verdict = _score_label(norm_var, 5, 20)
    vollath_verdict = _score_label(vollath, 1e5, 1e6)

    overall_sharp = sum([
        tenen >= 1000,
        lap_var >= 300,
        norm_var >= 20,
        vollath >= 1e6,
    ])
    if overall_sharp >= 3:
        overall = "PASS — image is in focus"
    elif overall_sharp >= 1:
        overall = "MARGINAL — some metrics indicate softness"
    else:
        overall = "FAIL — image appears out of focus"

    report = (
        f"## Focus Quality Report\n\n"
        f"| Metric | Value | Verdict |\n"
        f"|--------|-------|---------|\n"
        f"| Tenengrad | {tenen:.1f} | {tenen_verdict} |\n"
        f"| Laplacian Variance | {lap_var:.1f} | {lap_verdict} |\n"
        f"| Normalized Variance | {norm_var:.2f} | {norm_verdict} |\n"
        f"| Vollath F4 | {vollath:.0f} | {vollath_verdict} |\n\n"
        f"**Overall: {overall}**"
    )

    # Heatmap overlay
    hmap = _focus_heatmap(gray, block=max(32, min(gray.shape) // 16))
    fig, axes = plt.subplots(1, 2, figsize=(14, 5))

    # Gradio delivers RGB and matplotlib expects RGB, so show the image as-is
    # (a cv2.COLOR_RGB2BGR round-trip here would swap the red/blue channels).
    axes[0].imshow(image if len(image.shape) == 3 else gray, cmap="gray")
    axes[0].set_title("Original", fontsize=12, fontweight="bold")
    axes[0].axis("off")

    im = axes[1].imshow(hmap, cmap="inferno", interpolation="bilinear")
    axes[1].set_title("Focus Heatmap (Laplacian Var per block)", fontsize=12, fontweight="bold")
    axes[1].axis("off")
    fig.colorbar(im, ax=axes[1], fraction=0.046, pad=0.04, label="Sharpness")
    fig.tight_layout()

    overlay_img = _fig_to_image(fig)
    return overlay_img, report

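The per-block sweep in `_focus_heatmap` boils down to the pattern below; a plain variance stands in for the Laplacian variance, which is enough to see a deliberately flattened region score low:

```python
import numpy as np

def block_variance_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Score each block x block tile by its variance (sharp texture scores high)."""
    h, w = gray.shape
    return np.array([[gray[r * block:(r + 1) * block,
                           c * block:(c + 1) * block].var()
                      for c in range(w // block)]
                     for r in range(h // block)])

rng = np.random.default_rng(1)
img = rng.normal(size=(128, 128))
img[:, 64:] = 0.0  # right half: featureless, as if defocused to mush

hmap = block_variance_map(img)
assert hmap[:, :2].mean() > hmap[:, 2:].mean()  # left (textured) half scores sharper
```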
# ---------------------------------------------------------------------------
# Tab 2 — Illumination Analysis
# ---------------------------------------------------------------------------

def _nine_zone_map(gray: np.ndarray) -> np.ndarray:
    h, w = gray.shape
    rh, rw = h // 3, w // 3
    zones = np.zeros((3, 3), dtype=np.float64)
    for r in range(3):
        for c in range(3):
            patch = gray[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            zones[r, c] = patch.mean()
    return zones


def analyze_illumination(image: np.ndarray):
    if image is None:
        return None, "Upload an image first."

    gray = _to_gray(image)
    h, w = gray.shape

    mean_b = float(gray.mean())
    std_b = float(gray.std())
    min_b = int(gray.min())
    max_b = int(gray.max())
    dynamic_range = max_b - min_b

    # Clipping detection
    total_px = h * w
    clipped_low = int(np.sum(gray <= 5))
    clipped_high = int(np.sum(gray >= 250))
    pct_low = 100.0 * clipped_low / total_px
    pct_high = 100.0 * clipped_high / total_px

    clip_warning = ""
    if pct_low > 5:
        clip_warning += f" - {pct_low:.1f}% pixels crushed to black (underexposed regions)\n"
    if pct_high > 5:
        clip_warning += f" - {pct_high:.1f}% pixels blown to white (overexposed regions)\n"
    if not clip_warning:
        clip_warning = " - No significant clipping detected\n"

    # Vignetting: compare center vs corners
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8
    center_mean = float(gray[cy - r:cy + r, cx - r:cx + r].mean())

    corner_vals = []
    for yr, xr in [(0, 0), (0, w - r * 2), (h - r * 2, 0), (h - r * 2, w - r * 2)]:
        corner_vals.append(float(gray[yr:yr + r * 2, xr:xr + r * 2].mean()))
    corner_mean = np.mean(corner_vals)

    if center_mean > 1e-3:
        vig_ratio = corner_mean / center_mean
    else:
        vig_ratio = 1.0

    if vig_ratio < 0.75:
        vig_verdict = f"SIGNIFICANT vignetting (corner/center = {vig_ratio:.2f})"
    elif vig_ratio < 0.90:
        vig_verdict = f"Mild vignetting (corner/center = {vig_ratio:.2f})"
    else:
        vig_verdict = f"No significant vignetting (corner/center = {vig_ratio:.2f})"

    zones = _nine_zone_map(gray)

    # Build figure: histogram + zone map
    fig, axes = plt.subplots(1, 3, figsize=(18, 5))

    # Histogram
    axes[0].hist(gray.ravel(), bins=256, range=(0, 256), color="#448AFF", alpha=0.85, edgecolor="none")
    axes[0].axvline(mean_b, color="#FF1744", linestyle="--", linewidth=1.5, label=f"Mean={mean_b:.0f}")
    axes[0].set_title("Brightness Histogram", fontsize=12, fontweight="bold")
    axes[0].set_xlabel("Pixel value")
    axes[0].set_ylabel("Count")
    axes[0].legend()

    # Zone brightness map
    im = axes[1].imshow(zones, cmap="YlOrRd", vmin=0, vmax=255, interpolation="nearest")
    for r in range(3):
        for c in range(3):
            axes[1].text(c, r, f"{zones[r, c]:.0f}", ha="center", va="center",
                         fontsize=14, fontweight="bold",
                         color="black" if zones[r, c] > 128 else "white")
    axes[1].set_title("9-Zone Brightness Map", fontsize=12, fontweight="bold")
    axes[1].set_xticks([0, 1, 2])
    axes[1].set_xticklabels(["L", "C", "R"])
    axes[1].set_yticks([0, 1, 2])
    axes[1].set_yticklabels(["T", "M", "B"])
    fig.colorbar(im, ax=axes[1], fraction=0.046, pad=0.04)

    # Original image
    axes[2].imshow(image if len(image.shape) == 3 else gray, cmap="gray")
    axes[2].set_title("Original", fontsize=12, fontweight="bold")
    axes[2].axis("off")

    fig.tight_layout()
    vis = _fig_to_image(fig)

    report = (
        f"## Illumination Analysis\n\n"
        f"| Metric | Value |\n"
        f"|--------|-------|\n"
        f"| Mean Brightness | {mean_b:.1f} / 255 |\n"
        f"| Std Dev | {std_b:.1f} |\n"
        f"| Min / Max | {min_b} / {max_b} |\n"
        f"| Dynamic Range | {dynamic_range} |\n"
        f"| Clipped Low (<=5) | {clipped_low} px ({pct_low:.2f}%) |\n"
        f"| Clipped High (>=250) | {clipped_high} px ({pct_high:.2f}%) |\n\n"
        f"**Clipping:**\n{clip_warning}\n"
        f"**Vignetting:** {vig_verdict}\n\n"
        f"**Zone Brightness (3x3 grid):**\n"
        f"```\n"
        f" {zones[0,0]:6.1f} {zones[0,1]:6.1f} {zones[0,2]:6.1f}\n"
        f" {zones[1,0]:6.1f} {zones[1,1]:6.1f} {zones[1,2]:6.1f}\n"
        f" {zones[2,0]:6.1f} {zones[2,1]:6.1f} {zones[2,2]:6.1f}\n"
        f"```"
    )

    return vis, report

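The corner/center vignetting test in `analyze_illumination` can be exercised on a synthetic frame with radial falloff; this pure-NumPy sketch reuses the same region geometry and the same 0.75 threshold as the app:

```python
import numpy as np

h = w = 200
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
img = 200.0 * (1.0 - 0.5 * np.clip(r2, 0.0, 1.0))  # bright centre, dim corners

r = min(h, w) // 8
center_mean = img[h // 2 - r:h // 2 + r, w // 2 - r:w // 2 + r].mean()
corner_mean = np.mean([img[y:y + 2 * r, x:x + 2 * r].mean()
                       for y, x in [(0, 0), (0, w - 2 * r),
                                    (h - 2 * r, 0), (h - 2 * r, w - 2 * r)]])
vig_ratio = corner_mean / center_mean
assert vig_ratio < 0.75  # would be flagged as SIGNIFICANT vignetting
```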
# ---------------------------------------------------------------------------
# Tab 3 — Microscopy Type Detection
# ---------------------------------------------------------------------------

def _histogram_features(gray: np.ndarray) -> dict:
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist_norm = hist / hist.sum()

    mean_int = float(gray.mean())
    std_int = float(gray.std())
    median_int = float(np.median(gray))

    # Skewness
    if std_int > 1e-6:
        skew = float(np.mean(((gray.astype(np.float64) - mean_int) / std_int) ** 3))
    else:
        skew = 0.0

    # Peak count (modes)
    from scipy.signal import find_peaks
    smoothed = ndimage.gaussian_filter1d(hist_norm, sigma=5)
    peaks, props = find_peaks(smoothed, height=0.002, distance=20)
    n_peaks = len(peaks)

    # Edge density
    edges = cv2.Canny(gray, 50, 150)
    edge_density = float(np.sum(edges > 0)) / (gray.shape[0] * gray.shape[1])

    # Fraction of dark / bright pixels
    dark_frac = float(np.sum(gray < 40)) / gray.size
    bright_frac = float(np.sum(gray > 215)) / gray.size

    return {
        "mean": mean_int,
        "std": std_int,
        "median": median_int,
        "skew": skew,
        "n_peaks": n_peaks,
        "edge_density": edge_density,
        "dark_frac": dark_frac,
        "bright_frac": bright_frac,
        "hist_norm": hist_norm,
    }


_MODALITIES = ["Brightfield", "Darkfield", "Phase Contrast", "Fluorescence", "Polarized Light"]


def _classify_modality(feats: dict) -> list[tuple[str, float]]:
    scores = {m: 0.0 for m in _MODALITIES}

    mean = feats["mean"]
    std = feats["std"]
    skew = feats["skew"]
    dark_frac = feats["dark_frac"]
    bright_frac = feats["bright_frac"]
    edge_density = feats["edge_density"]
    n_peaks = feats["n_peaks"]

    # Brightfield: medium-high mean, moderate std, near-zero skew, low dark fraction
    if 80 < mean < 200:
        scores["Brightfield"] += 2.0
    if std < 60:
        scores["Brightfield"] += 1.0
    if abs(skew) < 1.0:
        scores["Brightfield"] += 1.0
    if dark_frac < 0.15:
        scores["Brightfield"] += 1.5

    # Darkfield: low mean, high dark fraction, positive skew
    if mean < 60:
        scores["Darkfield"] += 2.5
    if dark_frac > 0.5:
        scores["Darkfield"] += 2.0
    if skew > 1.0:
        scores["Darkfield"] += 1.5
    if bright_frac < 0.05:
        scores["Darkfield"] += 0.5

    # Phase contrast: bimodal histogram, halos (high edge density), medium mean
    if n_peaks >= 2:
        scores["Phase Contrast"] += 2.0
    if edge_density > 0.08:
        scores["Phase Contrast"] += 2.0
    if 50 < mean < 160:
        scores["Phase Contrast"] += 1.0
    if std > 40:
        scores["Phase Contrast"] += 0.5

    # Fluorescence: very dark background, sparse bright spots, very high skew
    if mean < 40:
        scores["Fluorescence"] += 2.0
    if dark_frac > 0.7:
        scores["Fluorescence"] += 2.0
    if skew > 2.0:
        scores["Fluorescence"] += 2.5
    if 0.001 < bright_frac < 0.15:
        scores["Fluorescence"] += 1.0

    # Polarized: high contrast, possible birefringence colors (high std in color)
    if std > 50:
        scores["Polarized Light"] += 1.0
    if n_peaks >= 2:
        scores["Polarized Light"] += 0.5
    if 40 < mean < 140:
        scores["Polarized Light"] += 0.5

    total = sum(scores.values())
    if total < 1e-6:
        return [(m, 1.0 / len(_MODALITIES)) for m in _MODALITIES]

    confidences = [(m, scores[m] / total) for m in _MODALITIES]
    confidences.sort(key=lambda x: -x[1])
    return confidences


def detect_microscopy_type(image: np.ndarray):
    if image is None:
        return None, "Upload an image first."

    gray = _to_gray(image)
    feats = _histogram_features(gray)
    confidences = _classify_modality(feats)

    best_name, best_conf = confidences[0]

    # Build histogram plot
    fig, axes = plt.subplots(1, 2, figsize=(14, 5))

    axes[0].bar(range(256), feats["hist_norm"], color="#448AFF", width=1.0, edgecolor="none")
    axes[0].set_title("Intensity Histogram", fontsize=12, fontweight="bold")
    axes[0].set_xlabel("Pixel value")
    axes[0].set_ylabel("Normalized frequency")

    # Confidence bar chart
    names = [c[0] for c in confidences]
    vals = [c[1] * 100 for c in confidences]
    colors = ["#00E676" if i == 0 else "#448AFF" for i in range(len(names))]
    bars = axes[1].barh(names[::-1], vals[::-1], color=colors[::-1], edgecolor="none")
    axes[1].set_title("Modality Confidence", fontsize=12, fontweight="bold")
    axes[1].set_xlabel("Confidence (%)")
    axes[1].set_xlim(0, 100)
    for bar, v in zip(bars, vals[::-1]):
        axes[1].text(bar.get_width() + 1, bar.get_y() + bar.get_height() / 2,
                     f"{v:.1f}%", va="center", fontsize=10)

    fig.tight_layout()
    vis = _fig_to_image(fig)

    report = (
        f"## Microscopy Type Detection\n\n"
        f"**Detected: {best_name}** (confidence: {best_conf * 100:.1f}%)\n\n"
        f"| Modality | Confidence |\n"
        f"|----------|------------|\n"
    )
    for name, conf in confidences:
        marker = " <<" if name == best_name else ""
        report += f"| {name} | {conf * 100:.1f}%{marker} |\n"

    report += (
        f"\n**Image Features:**\n"
        f"- Mean intensity: {feats['mean']:.1f}\n"
        f"- Std deviation: {feats['std']:.1f}\n"
        f"- Skewness: {feats['skew']:.2f}\n"
        f"- Histogram peaks: {feats['n_peaks']}\n"
        f"- Edge density: {feats['edge_density']:.4f}\n"
        f"- Dark pixel fraction: {feats['dark_frac']:.3f}\n"
        f"- Bright pixel fraction: {feats['bright_frac']:.3f}\n"
    )

    return vis, report

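The skewness feature that drives the Fluorescence vs Brightfield split can be sanity-checked on synthetic frames. Illustrative only; the thresholds are copied from `_classify_modality` above:

```python
import numpy as np

def skewness(gray: np.ndarray) -> float:
    g = gray.astype(np.float64)
    mu, sd = g.mean(), g.std()
    return float(np.mean(((g - mu) / sd) ** 3)) if sd > 1e-6 else 0.0

rng = np.random.default_rng(2)
# fluorescence-like frame: near-black background plus a few bright puncta
fluo = rng.integers(0, 20, (100, 100)).astype(np.float64)
fluo[20:24, 30:34] = 240.0
fluo[70:74, 60:64] = 250.0
# brightfield-like frame: mid-grey with gentle variation
bright = 140.0 + rng.normal(0.0, 15.0, (100, 100))

assert skewness(fluo) > 2.0         # passes the Fluorescence skew rule
assert abs(skewness(bright)) < 1.0  # passes the Brightfield skew rule
```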
# ---------------------------------------------------------------------------
# Tab 4 — Image Enhancement
# ---------------------------------------------------------------------------

def _apply_clahe(img: np.ndarray, clip_limit: float = 3.0, grid_size: int = 8) -> np.ndarray:
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(grid_size, grid_size))
    if len(img.shape) == 3:
        lab = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)
        lab[:, :, 0] = clahe.apply(lab[:, :, 0])
        return cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
    return clahe.apply(img)


def _apply_unsharp(img: np.ndarray, sigma: float = 2.0, strength: float = 1.5) -> np.ndarray:
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    sharpened = cv2.addWeighted(img, 1.0 + strength, blurred, -strength, 0)
    return np.clip(sharpened, 0, 255).astype(np.uint8)


def _apply_denoise(img: np.ndarray, h: float = 10.0) -> np.ndarray:
    if len(img.shape) == 3:
        return cv2.fastNlMeansDenoisingColored(img, None, h, h, 7, 21)
    return cv2.fastNlMeansDenoising(img, None, h, 7, 21)


def _apply_white_balance(img: np.ndarray) -> np.ndarray:
    if len(img.shape) != 3:
        return img
    result = img.copy().astype(np.float64)
    for c in range(3):
        ch = result[:, :, c]
        low = np.percentile(ch, 1)
        high = np.percentile(ch, 99)
        if high - low < 1:
            continue
        ch = (ch - low) / (high - low) * 255.0
        result[:, :, c] = ch
    return np.clip(result, 0, 255).astype(np.uint8)


def enhance_image(image: np.ndarray, method: str,
                  clahe_clip: float = 3.0, clahe_grid: int = 8,
                  unsharp_sigma: float = 2.0, unsharp_strength: float = 1.5,
                  denoise_h: float = 10.0):
    if image is None:
        # Must match the three-value return of the success path below.
        return None, None, "Upload an image first."

    if method == "CLAHE (Contrast Enhancement)":
        enhanced = _apply_clahe(image, clip_limit=clahe_clip, grid_size=int(clahe_grid))
        desc = f"CLAHE — clipLimit={clahe_clip}, gridSize={int(clahe_grid)}"
    elif method == "Unsharp Mask (Sharpening)":
        enhanced = _apply_unsharp(image, sigma=unsharp_sigma, strength=unsharp_strength)
        desc = f"Unsharp Mask — sigma={unsharp_sigma}, strength={unsharp_strength}"
    elif method == "NLM Denoising":
        enhanced = _apply_denoise(image, h=denoise_h)
        desc = f"Non-Local Means Denoising — h={denoise_h}"
    elif method == "Auto White Balance":
        enhanced = _apply_white_balance(image)
        desc = "Auto White Balance (percentile stretch per channel)"
    else:
        enhanced = image
        desc = "No method selected"

    # Side-by-side
    fig, axes = plt.subplots(1, 2, figsize=(14, 6))
    axes[0].imshow(image)
    axes[0].set_title("Before", fontsize=14, fontweight="bold")
    axes[0].axis("off")
    axes[1].imshow(enhanced)
    axes[1].set_title("After", fontsize=14, fontweight="bold")
    axes[1].axis("off")
    fig.suptitle(desc, fontsize=12, y=0.02)
    fig.tight_layout()
    comparison = _fig_to_image(fig)

    report = (
        f"## Enhancement Applied\n\n"
        f"**Method:** {desc}\n\n"
        f"| Metric | Before | After |\n"
        f"|--------|--------|-------|\n"
    )

    gray_before = _to_gray(image).astype(np.float64)
    gray_after = _to_gray(enhanced).astype(np.float64)
    report += f"| Mean Brightness | {gray_before.mean():.1f} | {gray_after.mean():.1f} |\n"
    report += f"| Std Dev | {gray_before.std():.1f} | {gray_after.std():.1f} |\n"

    # Sharpness comparison
    lap_before = cv2.Laplacian(_to_gray(image), cv2.CV_64F).var()
    lap_after = cv2.Laplacian(_to_gray(enhanced), cv2.CV_64F).var()
    report += f"| Laplacian Var (sharpness) | {lap_before:.1f} | {lap_after:.1f} |\n"

    return comparison, enhanced, report

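A quick check that the unsharp-mask formula in `_apply_unsharp` actually raises measured sharpness; a box blur stands in for `cv2.GaussianBlur` so the sketch needs only NumPy:

```python
import numpy as np

def unsharp(img: np.ndarray, strength: float = 1.5, k: int = 5) -> np.ndarray:
    """out = (1 + strength) * img - strength * blurred, same as cv2.addWeighted."""
    h, w = img.shape
    pad = np.pad(img, k // 2, mode="edge")
    blurred = sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)
    return np.clip((1.0 + strength) * img - strength * blurred, 0.0, 255.0)

def gradient_energy(g: np.ndarray) -> float:
    gy, gx = np.gradient(g)
    return float(np.mean(gx ** 2 + gy ** 2))

rng = np.random.default_rng(3)
img = np.clip(120.0 + rng.normal(0.0, 30.0, (64, 64)), 0.0, 255.0)

assert gradient_energy(unsharp(img)) > gradient_energy(img)
```

The same before/after ordering is what the app's "Laplacian Var (sharpness)" row in the report surfaces.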
| 547 |
+
# ---------------------------------------------------------------------------
|
| 548 |
+
# Gradio UI
|
| 549 |
+
# ---------------------------------------------------------------------------
|
| 550 |
+
|
| 551 |
+
css = """
|
| 552 |
+
.gr-block { border-radius: 12px !important; }
|
| 553 |
+
footer { display: none !important; }
|
| 554 |
+
"""
|
| 555 |
+
|
| 556 |
+
with gr.Blocks(title="Microscopy CV Toolkit", css=css, theme=gr.themes.Base()) as demo:
|
| 557 |
+
gr.Markdown(
|
| 558 |
+
"# Microscopy CV Toolkit\n"
|
| 559 |
+
"Classical computer-vision tools for microscopy image quality analysis. "
|
| 560 |
+
"No ML models — pure OpenCV, NumPy, SciPy. Upload any microscopy image to get started."
|
| 561 |
+
)
|
| 562 |
+
|
| 563 |
+
with gr.Tabs():
|
| 564 |
+
# ---- Tab 1: Focus Quality ----
|
| 565 |
+
with gr.Tab("Focus Quality"):
|
| 566 |
+
gr.Markdown(
|
| 567 |
+
"Measures image sharpness using four complementary metrics. "
|
| 568 |
+
"The heatmap shows per-block focus quality across the field of view."
|
| 569 |
+
)
|
| 570 |
+
with gr.Row():
|
| 571 |
+
with gr.Column(scale=1):
|
| 572 |
+
focus_input = gr.Image(label="Upload Image", type="numpy")
|
| 573 |
+
focus_btn = gr.Button("Analyze Focus", variant="primary", size="lg")
|
| 574 |
+
with gr.Column(scale=2):
|
| 575 |
+
focus_output = gr.Image(label="Focus Heatmap", type="numpy")
|
| 576 |
+
focus_report = gr.Markdown(label="Report")
|
| 577 |
+
|
| 578 |
+
focus_btn.click(
|
| 579 |
+
fn=analyze_focus,
|
| 580 |
+
inputs=[focus_input],
|
| 581 |
+
outputs=[focus_output, focus_report],
|
| 582 |
+
)
|
| 583 |
+
|
| 584 |
+
        # ---- Tab 2: Illumination Analysis ----
        with gr.Tab("Illumination Analysis"):
            gr.Markdown(
                "Checks brightness distribution, clipping, dynamic range, vignetting, "
                "and displays a 9-zone brightness map for Kohler illumination assessment."
            )
            with gr.Row():
                with gr.Column(scale=1):
                    illum_input = gr.Image(label="Upload Image", type="numpy")
                    illum_btn = gr.Button("Analyze Illumination", variant="primary", size="lg")
                with gr.Column(scale=2):
                    illum_output = gr.Image(label="Analysis", type="numpy")
                    illum_report = gr.Markdown(label="Report")

            illum_btn.click(
                fn=analyze_illumination,
                inputs=[illum_input],
                outputs=[illum_output, illum_report],
            )

        # ---- Tab 3: Microscopy Type Detection ----
        with gr.Tab("Microscopy Type Detection"):
            gr.Markdown(
                "Auto-detects imaging modality based on histogram shape, intensity statistics, "
                "contrast, and edge density. Works best on standard preparations."
            )
            with gr.Row():
                with gr.Column(scale=1):
                    type_input = gr.Image(label="Upload Image", type="numpy")
                    type_btn = gr.Button("Detect Type", variant="primary", size="lg")
                with gr.Column(scale=2):
                    type_output = gr.Image(label="Analysis", type="numpy")
                    type_report = gr.Markdown(label="Report")

            type_btn.click(
                fn=detect_microscopy_type,
                inputs=[type_input],
                outputs=[type_output, type_report],
            )

        # ---- Tab 4: Image Enhancement ----
        with gr.Tab("Image Enhancement"):
            gr.Markdown(
                "Apply classical enhancement techniques. Adjust parameters and compare side-by-side."
            )
            with gr.Row():
                with gr.Column(scale=1):
                    enhance_input = gr.Image(label="Upload Image", type="numpy")
                    enhance_method = gr.Radio(
                        choices=[
                            "CLAHE (Contrast Enhancement)",
                            "Unsharp Mask (Sharpening)",
                            "NLM Denoising",
                            "Auto White Balance",
                        ],
                        value="CLAHE (Contrast Enhancement)",
                        label="Enhancement Method",
                    )
                    with gr.Accordion("Parameters", open=False):
                        clahe_clip = gr.Slider(0.5, 10.0, value=3.0, step=0.5, label="CLAHE Clip Limit")
                        clahe_grid = gr.Slider(2, 16, value=8, step=1, label="CLAHE Grid Size")
                        unsharp_sigma = gr.Slider(0.5, 5.0, value=2.0, step=0.5, label="Unsharp Sigma")
                        unsharp_strength = gr.Slider(0.5, 5.0, value=1.5, step=0.5, label="Unsharp Strength")
                        denoise_h = gr.Slider(1.0, 30.0, value=10.0, step=1.0, label="Denoise Strength (h)")
                    enhance_btn = gr.Button("Enhance", variant="primary", size="lg")
                with gr.Column(scale=2):
                    enhance_comparison = gr.Image(label="Before / After", type="numpy")
                    enhance_result = gr.Image(label="Enhanced Image (downloadable)", type="numpy")
                    enhance_report = gr.Markdown(label="Report")

            enhance_btn.click(
                fn=enhance_image,
                inputs=[enhance_input, enhance_method,
                        clahe_clip, clahe_grid,
                        unsharp_sigma, unsharp_strength,
                        denoise_h],
                outputs=[enhance_comparison, enhance_result, enhance_report],
            )

    gr.Markdown(
        "<center style='color:#888;font-size:0.85em;'>"
        "Microscopy CV Toolkit | Pure OpenCV, no ML models | "
        "<a href='https://huggingface.co/spaces/Laborator/microscopy-cv-toolkit'>HuggingFace Space</a>"
        "</center>"
    )


if __name__ == "__main__":
    demo.launch()
requirements.txt
ADDED
@@ -0,0 +1,6 @@

gradio
opencv-python-headless
numpy
scipy
matplotlib
Pillow