Deploy from GitHub Actions (branch: main, sha: 83d61fdf)

- CLAUDE.md +107 -0
- README.md +6 -2
- app.py +65 -381
- docs/api-usage.md +242 -0

CLAUDE.md
ADDED
@@ -0,0 +1,107 @@

# CLAUDE.md - Project Instructions for Claude Code

## Project Overview

EleFind is an aerial elephant detection web application using YOLOv11 and SAHI (Slicing Aided Hyper Inference). It provides a Gradio-based UI for detecting elephants in drone/aerial imagery for wildlife conservation.

## Tech Stack

- **Language:** Python 3.10
- **UI Framework:** Gradio (5.50.0 on HF Spaces, >=4.44 locally)
- **ML Framework:** Ultralytics YOLOv11, SAHI, PyTorch
- **Deployment:** HuggingFace Spaces

## Project Structure

- `app.py` – Main application (Gradio UI, detection pipeline, heatmap generation)
- `test_detection.py` – Pytest test suite
- `requirements.txt` – Python dependencies
- `packages.txt` – System-level dependencies for HF Spaces
- `README.md` – Project docs (contains HF Spaces frontmatter with `sdk_version`)
- `MODEL_CARD.md` – Model card and usage guide
- `examples/` – Sample aerial images for demo
- `assets/` – Training visualization images
- `meeting_materials/` – Local-only directory (gitignored), contains models and docs

## Running Locally

```bash
pip install -r requirements.txt
python app.py
# Access at http://127.0.0.1:7860
```

## Testing

```bash
# Run all tests
pytest test_detection.py -v

# Skip slow inference tests
pytest test_detection.py -v -m "not slow"

# Run specific test
pytest test_detection.py -v -k "test_model"
```

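The `-m "not slow"` filter assumes a `slow` marker is registered for the inference-heavy tests. A minimal sketch of how such a marker is typically declared and applied (illustrative only, not copied from `test_detection.py`):

```python
# conftest.py (illustrative) - register the marker so `-m "not slow"` runs without warnings
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks tests that run full model inference")


# test_detection.py (illustrative usage of the marker)
import pytest

@pytest.mark.slow
def test_inference_on_example_image():
    ...  # load the model and run a real prediction here
```
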
## Deployment to HuggingFace Spaces

HF Space is NOT synced with GitHub. Upload manually:

```python
from huggingface_hub import upload_folder

upload_folder(
    folder_path='.',
    repo_id='iamhelitha/EleFind-gradio-ui',
    repo_type='space',
    ignore_patterns=[
        '.git/*', '.git',
        'meeting_materials/*', 'meeting_materials',
        '.DS_Store',
        '__pycache__/*', '*.pyc',
        '.claude/*', '.claude',
    ],
    commit_message='Your commit message here',
)
```

## Important Rules

- **Do NOT pin gradio in `requirements.txt`** – the version is controlled by `sdk_version` in `README.md` frontmatter for HF Spaces. Pinning causes build conflicts.
- **Max image dimension is 6000px** – images are downscaled to avoid CPU inference timeouts on the free HF Spaces tier.
- **Model auto-downloads from HuggingFace Hub** – configured via `HF_MODEL_REPO` and `HF_MODEL_FILE` env vars, falls back to local paths.
- **`meeting_materials/` is gitignored** – never commit files from this directory.

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `HF_MODEL_REPO` | `iamhelitha/EleFind-yolo11-elephant` | HuggingFace model repository |
| `HF_MODEL_FILE` | `best.pt` | Model filename in the repo |

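The model-download rule above, together with these env vars, implies a download-with-fallback step at startup. A minimal sketch of that pattern, not code copied from `app.py` (the function name and the local fallback path are assumptions):

```python
import os
from pathlib import Path

from huggingface_hub import hf_hub_download

HF_MODEL_REPO = os.getenv("HF_MODEL_REPO", "iamhelitha/EleFind-yolo11-elephant")
HF_MODEL_FILE = os.getenv("HF_MODEL_FILE", "best.pt")


def resolve_model_path() -> str:
    """Download the weights from the Hub; fall back to a local copy if that fails."""
    try:
        # Returns the path of the cached file inside the HF cache directory
        return hf_hub_download(repo_id=HF_MODEL_REPO, filename=HF_MODEL_FILE)
    except Exception:
        # Assumed local fallback location; adjust to wherever the weights live
        return str(Path(HF_MODEL_FILE))
```
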
## Default SAHI Parameters

- Confidence threshold: `0.30`
- Slice size: `1024 x 1024`
- Overlap ratio: `0.30`
- IoU threshold (NMS): `0.40`

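For reference, these defaults map onto SAHI's API roughly as follows. This is a sketch, not the code in `app.py`; the `model_type` string, device, image path, and the mapping of the IoU threshold onto `postprocess_match_threshold` are assumptions:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Load the YOLO weights through SAHI's Ultralytics-compatible wrapper
model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",               # assumption: Ultralytics-style loader
    model_path="best.pt",
    confidence_threshold=0.30,         # default confidence threshold
    device="cpu",
)

# Sliced inference with the defaults listed above
result = get_sliced_prediction(
    "examples/sample.jpg",             # assumed example image path
    model,
    slice_height=1024,                 # default slice size
    slice_width=1024,
    overlap_height_ratio=0.30,         # default overlap ratio
    overlap_width_ratio=0.30,
    postprocess_match_threshold=0.40,  # assumed mapping of the IoU/NMS threshold
)
print(len(result.object_prediction_list), "detections")
```
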
## Code Conventions

- Single main file (`app.py`) contains the full application
- Use OpenCV headless (`opencv-python-headless`) – no GUI dependencies
- Green bounding boxes for detections, Gaussian density heatmaps for XAI
- Gradio Soft theme with emerald/green primary colors

## Git Commit Style

Follow the existing commit message style – short imperative sentences describing the change:

- `Fix HF Space build: remove gradio pin from requirements.txt`
- `Redesign UI with tabbed layout, stats dashboard, and confidence charts`
- `Add deployment instructions for HuggingFace Space`

## Claude Settings

- **`includeCoAuthoredBy`: false** – Do not add Claude as co-author to commits
- **`gitAttribution`: false** – Disable git attribution for Claude

README.md
CHANGED

```diff
@@ -4,7 +4,7 @@ emoji: "\U0001F418"
 colorFrom: green
 colorTo: blue
 sdk: gradio
-sdk_version:
+sdk_version: 6.8.0
 app_file: app.py
 python_version: "3.10"
 suggested_hardware: cpu-basic
@@ -23,6 +23,10 @@ pinned: false

 # EleFind – Aerial Elephant Detection

+[](https://huggingface.co/spaces/iamhelitha/EleFind-gradio-ui)
+[](https://huggingface.co/iamhelitha/EleFind-yolo11-elephant)
+[](https://github.com/iamhelitha/EleFind-gradio-ui)
+
 A web application for detecting elephants in aerial and drone imagery using [YOLOv11](https://docs.ultralytics.com/) with [SAHI](https://github.com/obss/sahi) (Slicing Aided Hyper Inference) and explainable AI heatmap visualizations.

 ## Features
@@ -79,7 +83,7 @@ A web application for detecting elephants in aerial and drone imagery using [YOL
 ## Getting Started

 ```bash
-git clone https://github.com/
+git clone https://github.com/iamhelitha/EleFind-gradio-ui.git
 cd EleFind-gradio-ui
 pip install -r requirements.txt
```

app.py
CHANGED

```diff
@@ -1,17 +1,15 @@
 """
-EleFind - Aerial Elephant Detection
-=====================================
+EleFind - Aerial Elephant Detection
+=====================================

-A Gradio
+A Gradio 6 web interface for detecting elephants in aerial/drone imagery
 using YOLOv11 with SAHI (Slicing Aided Hyper Inference).

 Features:
 - Upload aerial images and detect elephants with bounding boxes
-- XAI heatmap visualization showing detection density
 - Adjustable SAHI parameters (confidence, slice size, overlap)
 - Automatic model download from HuggingFace Hub
 - Confidence bar chart and detection data table
-- Tabbed output with download buttons on every result image

 Author: Helitha Guruge
 Project: EleFind (Undergraduate Research Project)
@@ -173,7 +171,7 @@ def run_detection(
     slice_size: int = DEFAULT_SLICE,
     overlap_ratio: float = DEFAULT_OVERLAP,
     iou_threshold: float = DEFAULT_IOU,
-) -> list
+) -> list:
     """Run SAHI sliced prediction and return a list of detection dicts."""

     # Update model confidence threshold
@@ -215,7 +213,7 @@ def run_detection(
     return predictions


-def draw_detections(image: np.ndarray, predictions: list
+def draw_detections(image: np.ndarray, predictions: list) -> np.ndarray:
     """Draw bounding boxes and labels on the image."""
     img = image.copy()

@@ -239,52 +237,6 @@ def draw_detections(image: np.ndarray, predictions: list[dict]) -> np.ndarray:
     return img


-def create_heatmap(image: np.ndarray, predictions: list[dict]) -> np.ndarray:
-    """Create a Gaussian-blurred density heatmap of detections."""
-    h, w = image.shape[:2]
-    heatmap = np.zeros((h, w), dtype=np.float32)
-
-    for pred in predictions:
-        x1, y1, x2, y2 = pred["x1"], pred["y1"], pred["x2"], pred["y2"]
-        conf = pred["confidence"]
-        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
-        bw, bh = x2 - x1, y2 - y1
-        sx, sy = max(bw / 2, 1), max(bh / 2, 1)
-
-        yr = np.arange(max(0, y1 - bh), min(h, y2 + bh))
-        xr = np.arange(max(0, x1 - bw), min(w, x2 + bw))
-
-        if len(xr) > 0 and len(yr) > 0:
-            xx, yy = np.meshgrid(xr, yr)
-            gaussian = np.exp(
-                -((xx - cx) ** 2 / (2 * sx**2) + (yy - cy) ** 2 / (2 * sy**2))
-            )
-            gaussian *= conf
-            heatmap[yr[0] : yr[-1] + 1, xr[0] : xr[-1] + 1] += gaussian
-
-    if heatmap.max() > 0:
-        heatmap /= heatmap.max()
-
-    heatmap_color = cv2.applyColorMap(
-        (heatmap * 255).astype(np.uint8), cv2.COLORMAP_JET
-    )
-    heatmap_color = cv2.cvtColor(heatmap_color, cv2.COLOR_BGR2RGB)
-
-    blended = cv2.addWeighted(image, 0.4, heatmap_color, 0.6, 0)
-
-    # White detection outlines on heatmap
-    for pred in predictions:
-        cv2.rectangle(
-            blended,
-            (pred["x1"], pred["y1"]),
-            (pred["x2"], pred["y2"]),
-            (255, 255, 255),
-            2,
-        )
-
-    return blended
-
-
 # ---------------------------------------------------------------------------
 # Normalisation helpers (handle various Gradio input types)
 # ---------------------------------------------------------------------------
@@ -318,15 +270,14 @@ def process_image(
     slice_size: int,
     overlap_ratio: float,
     iou_threshold: float,
-    enable_heatmap: bool = False,
     progress=gr.Progress(),
 ):
-    """Run detection pipeline and return annotated image,
-    confidence chart data, and detection table data."""
+    """Run detection pipeline and return annotated image, stat values,
+    parameters text, confidence chart data, and detection table data."""

     image_np = _to_numpy_rgb(image)
     if image_np is None:
-        return None,
+        return None, 0, 0.0, 0.0, 0.0, "", None, None

     try:
         progress(0.05, desc="Validating image")
@@ -345,26 +296,24 @@ def process_image(
         progress(0.80, desc="Drawing detections")
         det_image = draw_detections(image_np, predictions)

-        heatmap_image = None
-        if enable_heatmap:
-            progress(0.90, desc="Generating XAI heatmap")
-            heatmap_image = create_heatmap(image_np, predictions)
-
     except Exception as e:
         import traceback
-
-
-            f"<strong>Error during detection:</strong><br><pre>{e}</pre>"
-            f"<details><summary>Traceback</summary><pre>{traceback.format_exc()}</pre></details>"
-            f"</div>"
-        )
-        return None, None, err_html, None, None
+        err_msg = f"Error: {e}\n\n```\n{traceback.format_exc()}\n```"
+        return None, 0, 0.0, 0.0, 0.0, err_msg, None, None

-
-
+    # Compute stats
+    n = len(predictions)
+    avg_conf = sum(p["confidence"] for p in predictions) / n if n else 0.0
+    max_conf = max((p["confidence"] for p in predictions), default=0.0)
+    min_conf = min((p["confidence"] for p in predictions), default=0.0)
+
+    params_text = (
+        f"**Parameters:** Slice {int(slice_size)}x{int(slice_size)} px · "
+        f"Overlap {overlap_ratio:.0%} · Confidence >= {conf_threshold:.0%} · "
+        f"IoU {iou_threshold:.0%} · Image {w}x{h} px · Device `{DEVICE}`"
+    )

     progress(1.0, desc="Done")
-    heatmap_out = Image.fromarray(heatmap_image.astype(np.uint8)) if heatmap_image is not None else None

     # Build chart / table data (pandas optional)
     conf_chart = None
@@ -391,8 +340,11 @@ def process_image(

     return (
         Image.fromarray(det_image.astype(np.uint8)),
-
-
+        n,
+        avg_conf,
+        max_conf,
+        min_conf,
+        params_text,
         conf_chart,
         det_table,
     )
@@ -408,235 +360,23 @@ _THEME = gr.themes.Soft(
     neutral_hue="gray",
     font=[gr.themes.GoogleFont("Inter"), "ui-sans-serif", "system-ui", "sans-serif"],
     font_mono=[gr.themes.GoogleFont("JetBrains Mono"), "ui-monospace", "monospace"],
-).set(
-    button_primary_background_fill="*primary_500",
-    button_primary_background_fill_hover="*primary_600",
-    button_primary_text_color="white",
-    block_label_text_weight="600",
-    block_title_text_weight="700",
 )

-CSS = """
-/* ── Global container ──────────────────────────────────────── */
-.gradio-container {
-    max-width: 1200px !important;
-    margin: 0 auto !important;
-}
-
-/* ── Hero banner ───────────────────────────────────────────── */
-.elefind-hero {
-    background: linear-gradient(135deg, #064e3b 0%, #065f46 40%, #0d9488 100%);
-    border-radius: 16px;
-    padding: 32px 40px;
-    margin-bottom: 8px;
-    color: white;
-    position: relative;
-    overflow: hidden;
-}
-.elefind-hero::before {
-    content: "🐘";
-    position: absolute;
-    right: 32px;
-    top: 50%;
-    transform: translateY(-50%);
-    font-size: 80px;
-    opacity: 0.18;
-}
-.elefind-hero h1 {
-    font-size: 2rem !important;
-    font-weight: 800 !important;
-    margin: 0 0 6px 0 !important;
-    color: white !important;
-}
-.elefind-hero p {
-    font-size: 1rem;
-    opacity: 0.88;
-    margin: 0;
-    max-width: 640px;
-}
-.elefind-hero .badge {
-    display: inline-block;
-    background: rgba(255,255,255,0.15);
-    border: 1px solid rgba(255,255,255,0.3);
-    border-radius: 20px;
-    padding: 2px 12px;
-    font-size: 0.75rem;
-    margin-right: 6px;
-    margin-top: 12px;
-}
-
-/* ── Stat cards ────────────────────────────────────────────── */
-.stat-cards {
-    display: flex;
-    gap: 12px;
-    flex-wrap: wrap;
-    margin: 4px 0;
-}
-.stat-card {
-    flex: 1;
-    min-width: 120px;
-    background: var(--background-fill-primary, #f9fafb);
-    border: 1px solid var(--border-color-primary, #e5e7eb);
-    border-radius: 12px;
-    padding: 16px 20px;
-    text-align: center;
-}
-.stat-card .value {
-    font-size: 2rem;
-    font-weight: 800;
-    color: #065f46;
-    line-height: 1;
-    display: block;
-}
-.stat-card .label {
-    font-size: 0.78rem;
-    color: var(--body-text-color-subdued, #6b7280);
-    margin-top: 4px;
-    display: block;
-}
-.stat-card.highlight .value { color: #0d9488; }
-
-/* ── Param card ────────────────────────────────────────────── */
-.param-card {
-    background: var(--background-fill-secondary, #f3f4f6);
-    border-radius: 10px;
-    padding: 12px 16px;
-    margin-top: 8px;
-    font-size: 0.85rem;
-    line-height: 1.8;
-}
-.param-card strong { color: #065f46; }
-
-/* ── Tips box ──────────────────────────────────────────────── */
-.tips-box {
-    background: #ecfdf5;
-    border-left: 4px solid #10b981;
-    border-radius: 0 8px 8px 0;
-    padding: 10px 14px;
-    font-size: 0.84rem;
-    line-height: 1.7;
-    color: #064e3b;
-}
-
-/* ── Tab active indicator ──────────────────────────────────── */
-.gradio-tabs .tab-nav button.selected {
-    border-bottom-color: #10b981 !important;
-    color: #065f46 !important;
-    font-weight: 700 !important;
-}
-
-/* ── About accordion ───────────────────────────────────────── */
-.about-section { font-size: 0.88rem; line-height: 1.7; }
-.about-section a { color: #10b981; }
-
-/* ── Hide empty heatmap placeholder ────────────────────────── */
-.heatmap-placeholder {
-    background: var(--background-fill-secondary, #f3f4f6);
-    border-radius: 12px;
-    padding: 48px 24px;
-    text-align: center;
-    color: var(--body-text-color-subdued, #9ca3af);
-    font-size: 0.9rem;
-}
-"""
-
-# ---------------------------------------------------------------------------
-# Helper: build stats HTML
-# ---------------------------------------------------------------------------
-def _stats_html(predictions: list[dict], w: int, h: int, slice_size: int,
-                overlap_ratio: float, conf_threshold: float,
-                iou_threshold: float) -> str:
-    n = len(predictions)
-    avg_conf = sum(p["confidence"] for p in predictions) / n if n else 0.0
-    max_conf = max((p["confidence"] for p in predictions), default=0.0)
-    min_conf = min((p["confidence"] for p in predictions), default=0.0)
-
-    if n > 0:
-        cards_html = f"""
-        <div class="stat-cards">
-            <div class="stat-card">
-                <span class="value">{n}</span>
-                <span class="label">Elephants<br>Detected</span>
-            </div>
-            <div class="stat-card highlight">
-                <span class="value">{avg_conf:.0%}</span>
-                <span class="label">Average<br>Confidence</span>
-            </div>
-            <div class="stat-card">
-                <span class="value">{max_conf:.0%}</span>
-                <span class="label">Highest<br>Confidence</span>
-            </div>
-            <div class="stat-card">
-                <span class="value">{min_conf:.0%}</span>
-                <span class="label">Lowest<br>Confidence</span>
-            </div>
-        </div>
-        """
-        conf_list = ", ".join(f'<code>{p["confidence"]:.0%}</code>' for p in predictions[:20])
-        if n > 20:
-            conf_list += f" <em>+ {n - 20} more</em>"
-        det_detail = f"""
-        <p style="margin:12px 0 4px; font-size:0.85rem; color:var(--body-text-color-subdued)">
-            Individual confidences: {conf_list}
-        </p>
-        """
-    else:
-        cards_html = """
-        <div class="stat-cards">
-            <div class="stat-card">
-                <span class="value">0</span>
-                <span class="label">Elephants<br>Detected</span>
-            </div>
-        </div>
-        <p style="margin:12px 0 4px; color:#6b7280; font-size:0.88rem;">
-            No elephants found – try lowering the confidence threshold or upload
-            an aerial image with visible elephants.
-        </p>
-        """
-        det_detail = ""
-
-    param_html = f"""
-    <div class="param-card">
-        <strong>Parameters used</strong><br>
-        Slice {int(slice_size)} × {int(slice_size)} px
-        · Overlap {overlap_ratio:.0%}
-        · Confidence ≥ {conf_threshold:.0%}
-        · IoU {iou_threshold:.0%}
-        · Image {w} × {h} px
-        · Device <code>{DEVICE}</code>
-    </div>
-    """
-
-    return cards_html + det_detail + param_html
-
-
-# ---------------------------------------------------------------------------
-# Updated process_image (returns 5 outputs)
-# ---------------------------------------------------------------------------
 def build_ui() -> gr.Blocks:
-    """Construct the Gradio Blocks interface
+    """Construct the Gradio Blocks interface."""

     with gr.Blocks(
-        css=CSS,
         theme=_THEME,
         title="EleFind – Aerial Elephant Detection",
         fill_width=False,
     ) as demo:

         # ── Hero banner ─────────────────────────────────────────────────────
-        gr.
-        ""
-
-
-
-        (Slicing Aided Hyper Inference). Upload a drone or satellite image
-        and get instant detection results with XAI heatmaps.</p>
-        <span class="badge">YOLOv11</span>
-        <span class="badge">SAHI</span>
-        <span class="badge">XAI Heatmaps</span>
-        <span class="badge">Conservation AI</span>
-        </div>
-        """
+        gr.Markdown(
+            "# EleFind\n\n"
+            "Aerial elephant detection powered by **YOLOv11 + SAHI** "
+            "(Slicing Aided Hyper Inference). Upload a drone or satellite image "
+            "and get instant detection results."
         )

         # ── Main two-column layout ─────────────────────────────────────────
@@ -650,11 +390,10 @@ def build_ui() -> gr.Blocks:
                 type="pil",
                 sources=["upload", "clipboard"],
                 show_fullscreen_button=True,
-                show_download_button=False,
                 height=320,
             )

-            with gr.Accordion("
+            with gr.Accordion("SAHI Detection Parameters", open=False):
                 conf_slider = gr.Slider(
                     minimum=0.05,
                     maximum=0.95,
@@ -688,29 +427,12 @@ def build_ui() -> gr.Blocks:
                    info="Suppress duplicate boxes above this overlap",
                )

-            heatmap_toggle = gr.Checkbox(
-                label="Generate XAI Density Heatmap",
-                value=False,
-                info="Overlay a Gaussian density map on the detection image",
-            )
-
            detect_btn = gr.Button(
-                "
+                "Detect Elephants",
                variant="primary",
                size="lg",
            )

-            gr.HTML(
-                """
-                <div class="tips-box">
-                    <strong>Tips for best results</strong><br>
-                    • Use high-resolution aerial / drone images (≥ 4K)<br>
-                    • Optimal source resolution: ~5472 × 3648 px<br>
-                    • Lower confidence threshold → more detections (noisier)<br>
-                    • Increase slice size for larger, spread-out herds
-                </div>
-                """
-            )

         # ── RIGHT: Output tabs ──────────────────────────────────────────────
         with gr.Column(scale=6, min_width=400):
@@ -718,43 +440,38 @@ def build_ui() -> gr.Blocks:
            with gr.Tabs() as result_tabs:

                # ── Tab 1: Detection image ──────────────────────────────────
-                with gr.Tab("
+                with gr.Tab("Detections", id="tab_det"):
                    detection_output = gr.Image(
                        label="Annotated detections",
                        type="pil",
                        interactive=False,
-                        show_fullscreen_button=True,
                        show_download_button=True,
-                        height=420,
-                    )
-
-                # ── Tab 2: XAI Heatmap ───────────────────────────────────────
-                with gr.Tab("XAI Heatmap", id="tab_hm"):
-                    heatmap_output = gr.Image(
-                        label="Gaussian density heatmap",
-                        type="pil",
-                        interactive=False,
                        show_fullscreen_button=True,
-                        show_download_button=True,
                        height=420,
                    )
-                    gr.HTML(
-                        """
-                        <p style="font-size:0.8rem; color:#6b7280; margin:4px 0 0;">
-                            Enable <em>Generate XAI Density Heatmap</em> in the left panel
-                            then run detection to see the heatmap.
-                        </p>
-                        """
-                    )

-                # ── Tab
-                with gr.Tab("
-
-
-
-
+                # ── Tab 2: Statistics ────────────────────────────────────────
+                with gr.Tab("Statistics", id="tab_stats"):
+                    with gr.Row():
+                        stat_count = gr.Number(
+                            label="Elephants Detected", value=0,
+                            interactive=False,
+                        )
+                        stat_avg = gr.Number(
+                            label="Avg Confidence", value=0.0,
+                            interactive=False, precision=2,
+                        )
+                        stat_max = gr.Number(
+                            label="Highest Confidence", value=0.0,
+                            interactive=False, precision=2,
+                        )
+                        stat_min = gr.Number(
+                            label="Lowest Confidence", value=0.0,
+                            interactive=False, precision=2,
+                        )
+                    params_md = gr.Markdown()

-                with gr.Accordion("
+                    with gr.Accordion("Detection Table", open=True):
                        det_table_out = gr.Dataframe(
                            headers=["ID", "Confidence", "BBox (x1,y1,x2,y2)",
                                     "Width (px)", "Height (px)"],
@@ -763,7 +480,7 @@ def build_ui() -> gr.Blocks:
                            wrap=True,
                        )

-                    with gr.Accordion("
+                    with gr.Accordion("Confidence Chart", open=True):
                        if _PANDAS:
                            conf_chart_out = gr.BarPlot(
                                value=_EMPTY_CHART,
@@ -786,70 +503,37 @@ def build_ui() -> gr.Blocks:
        example_dir = Path(__file__).parent / "examples"
        example_files = sorted(example_dir.glob("*.jpg")) if example_dir.exists() else []
        if example_files:
-            with gr.Accordion("
+            with gr.Accordion("Example Aerial Images", open=True):
                gr.Examples(
                    examples=[[str(f)] for f in example_files],
                    inputs=[input_image],
                    label=None,
                )

-        # ── About accordion ──────────────────────────────────────────────────
-        with gr.Accordion("ℹ️ About EleFind", open=False):
-            gr.HTML(
-                """
-                <div class="about-section">
-                    <p><strong>EleFind</strong> is an undergraduate research project for
-                    automated elephant detection in aerial imagery to support wildlife
-                    conservation efforts.</p>
-                    <ul>
-                        <li><strong>Model:</strong> YOLOv11 trained on sliced 1024 × 1024
-                        aerial patches</li>
-                        <li><strong>Inference:</strong> SAHI – tiled inference for
-                        high-resolution images without GPU memory overflow</li>
-                        <li><strong>XAI:</strong> Gaussian density heatmaps highlight
-                        detection hotspot areas</li>
-                        <li><strong>Performance:</strong> Precision 53.2 %
-                        | Recall 49.1 %
-                        | F1 51.0 %</li>
-                    </ul>
-                    <p>
-                        <a href="https://github.com/helithalochana/EleFind-gradio-ui"
-                        target="_blank">GitHub Repository</a>
-                        ·
-                        <a href="https://huggingface.co/iamhelitha/EleFind-yolo11-elephant"
-                        target="_blank">Model on HuggingFace</a>
-                    </p>
-                </div>
-                """
-            )
-
        # ── Event wiring ──────────────────────────────────────────────────────
        _outputs = [
            detection_output,
-
-
+            stat_count,
+            stat_avg,
+            stat_max,
+            stat_min,
+            params_md,
            conf_chart_out,
            det_table_out,
        ]

-        def _run(image, conf, ssize, overlap, iou, hm, progress=gr.Progress()):
-            det_img, heatmap_img, raw_stats, conf_data, table_data = process_image(
-                image, conf, ssize, overlap, iou, hm, progress
-            )
-            return det_img, heatmap_img, raw_stats, conf_data, table_data
-
        detect_btn.click(
-            fn=
+            fn=process_image,
            inputs=[
                input_image,
                conf_slider,
                slice_slider,
                overlap_slider,
                iou_slider,
-                heatmap_toggle,
            ],
            outputs=_outputs,
            concurrency_limit=1,
+            api_name="detect",
        )

    return demo
```

docs/api-usage.md
ADDED
@@ -0,0 +1,242 @@

# EleFind API Usage

EleFind exposes a detection endpoint through Gradio's built-in API. The Space is publicly accessible – no authentication is required.

**Base URL:** `https://iamhelitha-elefind-gradio-ui.hf.space`

---

## Endpoint

### `POST /call/detect`

Submits an image for elephant detection. Returns an `event_id` used to retrieve results.

**Request**

```http
POST /call/detect
Content-Type: application/json

{
  "data": [
    { "path": "https://example.com/aerial-image.jpg" },
    0.30,
    1024,
    0.30,
    0.40
  ]
}
```

**Parameters (positional, in `data` array)**

| Index | Name | Type | Default | Range | Description |
|-------|------|------|---------|-------|-------------|
| 0 | `image` | URL or file path | required | – | Aerial image to analyse |
| 1 | `conf_threshold` | float | `0.30` | 0.05 – 0.95 | Minimum detection confidence |
| 2 | `slice_size` | int | `1024` | 256 – 2048 | SAHI tile size in pixels |
| 3 | `overlap_ratio` | float | `0.30` | 0.05 – 0.50 | Tile overlap fraction |
| 4 | `iou_threshold` | float | `0.40` | 0.10 – 0.80 | NMS IoU threshold |

**Response**

```json
{ "event_id": "abc123xyz" }
```

---

### `GET /call/detect/{event_id}`

Streams results for a submitted job using Server-Sent Events (SSE).

```http
GET /call/detect/abc123xyz
```

The stream emits events until a `complete` event is received:

```
event: generating
data: null

event: complete
data: [<detection_image>, <count>, <avg_confidence>, <max_confidence>, <min_confidence>, <params_text>, <conf_chart>, <det_table>]
```

**Output fields (positional)**

| Index | Field | Type | Description |
|-------|-------|------|-------------|
| 0 | `detection_image` | object | Annotated image `{ path, url, size, orig_name }` |
| 1 | `count` | int | Number of elephants detected |
| 2 | `avg_confidence` | float | Average detection confidence (0.0 – 1.0) |
| 3 | `max_confidence` | float | Highest single detection confidence |
| 4 | `min_confidence` | float | Lowest single detection confidence |
| 5 | `params_text` | string | Markdown summary of inference parameters |
| 6 | `conf_chart` | object / null | Per-elephant confidence data (pandas DataFrame as JSON) |
| 7 | `det_table` | object / null | Full detection table with bounding boxes |

---

## Using the JavaScript client (recommended for React / Next.js)

Install the official Gradio client:

```bash
npm install @gradio/client
```

### Basic usage

```javascript
import { Client, handle_file } from "@gradio/client";

async function detectElephants(imageFile, options = {}) {
  const client = await Client.connect("iamhelitha/EleFind-gradio-ui");

  const result = await client.predict("/detect", {
    image: handle_file(imageFile), // File or Blob object
    conf_threshold: options.conf ?? 0.30,
    slice_size: options.slice ?? 1024,
    overlap_ratio: options.overlap ?? 0.30,
    iou_threshold: options.iou ?? 0.40,
  });

  const [
    detectionImage,
    count,
    avgConfidence,
    maxConfidence,
    minConfidence,
    paramsText,
    confChart,
    detTable,
  ] = result.data;

  return { detectionImage, count, avgConfidence, maxConfidence, minConfidence };
}
```

### React component example

```jsx
import { useState } from "react";
import { Client, handle_file } from "@gradio/client";

export default function ElephantDetector() {
  const [result, setResult] = useState(null);
  const [loading, setLoading] = useState(false);

  async function handleSubmit(e) {
    e.preventDefault();
    const file = e.target.image.files[0];
    if (!file) return;

    setLoading(true);
    try {
      const client = await Client.connect("iamhelitha/EleFind-gradio-ui");
      const { data } = await client.predict("/detect", {
        image: handle_file(file),
        conf_threshold: 0.30,
        slice_size: 1024,
        overlap_ratio: 0.30,
        iou_threshold: 0.40,
      });

      setResult({
        imageUrl: data[0].url,
        count: data[1],
        avgConf: data[2],
      });
    } finally {
      setLoading(false);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <input type="file" name="image" accept="image/*" />
      <button type="submit" disabled={loading}>
        {loading ? "Detecting..." : "Detect Elephants"}
      </button>
      {result && (
        <div>
          <p>Elephants found: {result.count}</p>
          <p>Avg confidence: {(result.avgConf * 100).toFixed(1)}%</p>
          <img src={result.imageUrl} alt="Detection result" />
        </div>
      )}
    </form>
  );
}
```

---

## Using curl (testing / server-side)

### Step 1 – Submit the job

```bash
curl -X POST https://iamhelitha-elefind-gradio-ui.hf.space/call/detect \
  -H "Content-Type: application/json" \
  -d '{
    "data": [
      { "path": "https://example.com/aerial-image.jpg" },
      0.30,
      1024,
      0.30,
      0.40
    ]
  }'
```

Response:
```json
{ "event_id": "abc123xyz" }
```

### Step 2 – Stream the result

```bash
curl -N https://iamhelitha-elefind-gradio-ui.hf.space/call/detect/abc123xyz
```

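The same two-step flow can be driven from Python without the Gradio client. This is a sketch using the `requests` library; the endpoint paths, payload shape, and output order come from the tables above, while the timeout values are assumptions:

```python
import json
import requests

BASE = "https://iamhelitha-elefind-gradio-ui.hf.space"

# Step 1 - submit the job and get an event_id
resp = requests.post(
    f"{BASE}/call/detect",
    json={"data": [{"path": "https://example.com/aerial-image.jpg"}, 0.30, 1024, 0.30, 0.40]},
    timeout=30,
)
event_id = resp.json()["event_id"]

# Step 2 - read the SSE stream until the `complete` event arrives
result = None
with requests.get(f"{BASE}/call/detect/{event_id}", stream=True, timeout=300) as stream:
    event = None
    for raw in stream.iter_lines(decode_unicode=True):
        if raw.startswith("event:"):
            event = raw.split(":", 1)[1].strip()
        elif raw.startswith("data:") and event == "complete":
            result = json.loads(raw.split(":", 1)[1].strip())
            break

if result:
    detection_image, count, avg_conf = result[0], result[1], result[2]
    print(f"Elephants detected: {count} (avg confidence {avg_conf:.1%})")
```
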
---

## Using the Python client

```bash
pip install gradio_client
```

```python
from gradio_client import Client, handle_file

client = Client("iamhelitha/EleFind-gradio-ui")

result = client.predict(
    image=handle_file("/path/to/aerial-image.jpg"),
    conf_threshold=0.30,
    slice_size=1024,
    overlap_ratio=0.30,
    iou_threshold=0.40,
    api_name="/detect",
)

detection_image, count, avg_conf, max_conf, min_conf, params, chart, table = result
print(f"Elephants detected: {count}")
print(f"Average confidence: {avg_conf:.1%}")
```

---

## Notes

- **CORS:** Gradio Spaces allow requests from any origin, including browser-side JavaScript on Vercel or other external domains.
- **Rate limits:** The Space runs on CPU (free tier). Inference on a large image can take 30–120 seconds. Set appropriate timeouts in your client.
- **Concurrency:** The Space processes one request at a time (`concurrency_limit=1`, queue `max_size=10`). Requests beyond the queue limit will be rejected.
- **Image size:** Images larger than 6000 px on the longest edge are automatically downscaled before inference.
- **Interactive docs:** Visit `https://iamhelitha-elefind-gradio-ui.hf.space` and click "Use via API" in the footer to see the live API reference.
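
Given the single-worker queue and CPU inference times noted above, a client may want a simple retry wrapper around the call. A minimal sketch using only the `gradio_client` API shown earlier; the retry policy (3 attempts, 60-second pause) is an assumption, not a project recommendation:

```python
import time

from gradio_client import Client, handle_file


def detect_with_retry(image_path, attempts=3, pause_s=60):
    """Call the /detect endpoint, retrying on queue-full or transient errors."""
    client = Client("iamhelitha/EleFind-gradio-ui")
    last_err = None
    for _ in range(attempts):
        try:
            return client.predict(
                image=handle_file(image_path),
                conf_threshold=0.30,
                slice_size=1024,
                overlap_ratio=0.30,
                iou_threshold=0.40,
                api_name="/detect",
            )
        except Exception as err:  # rejected by the queue, timeout, or network error
            last_err = err
            time.sleep(pause_s)
    raise last_err
```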