Update landing site safety apps and curated gallery
Files changed:
- ARCHITECTURE.md +2 -2
- LICENSE +21 -0
- README.md +31 -0
- app/config.py +17 -9
- app/curated.py +107 -0
- app/curated_ui.py +119 -0
- app/data_sources.py +19 -12
- app/depth_pipeline.py +29 -6
- app/safety.py +110 -16
- app/segmentation.py +30 -4
- app/ui.py +167 -278
- app/visualization.py +108 -17
- app/water.py +76 -0
- curated_gradio_app.py +49 -0
- requirements.txt +7 -0
- scripts/precompute_curated.py +270 -0
ARCHITECTURE.md
CHANGED

@@ -3,14 +3,14 @@
 This document describes the flow in the current Gradio app (`app/ui.py`), from input selection through model inference, safety scoring, and UI composition.
 
 ## Data and Models
-- **Inputs**: Images
+- **Inputs**: Images under `data/Image/` (VISLOC and any custom folders) via `list_all_data_inputs`, with a 5% border crop (`crop_nonblack`) to drop black padding. Supported extensions: jpg/jpeg/png (any case).
 - **Depth model**: Depth Anything 3, cached per model id (`DepthEngine`). Inference caps the long side to `process_res_cap` (default 1024) using `upper_bound_resize` before predicting.
 - **Segmentation model**: SAM3 (`facebook/sam3`) for promptable water/road masking. Loaded once per model id; masks are recomputed every run (no caching). Default `segmentation_max_side` is 384 to keep it fast on CUDA.
 
 ## Constants and Defaults
 - Altitude/FOV defaults: 450 m, 90°.
 - Flatness/gradient thresholds: sliders (`std_thresh`, `grad_thresh`).
-- Clearance factor: default 0
+- Clearance factor: default 1.0 (hazards dilated to the footprint size unless changed).
 - Coverage strictness: default 0.95 (fraction of the footprint that must be safe).
 - Depth smoothing: `depth_smoothing_base` scaled by resolution to reduce speckle before scoring.
 - Roof mask: depth-based, with large components (>20% of the map) discarded to avoid masking whole fields.
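The altitude/FOV defaults above fix how many pixels a physical landing footprint spans in the image. A minimal sketch of that nadir-camera geometry (the helper name `footprint_px` is mine, not from the repo):

```python
import math

def footprint_px(footprint_m: float, altitude_m: float, fov_deg: float, image_width_px: int) -> int:
    # Ground width covered by a nadir image: 2 * altitude * tan(fov / 2).
    ground_width_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    # Meters-per-pixel, then convert the physical footprint to pixels.
    m_per_px = ground_width_m / image_width_px
    return max(1, round(footprint_m / m_per_px))

# With the defaults above (450 m altitude, 90° FOV), the frame covers
# 900 m of ground, so a 10 m footprint on a 1024 px-wide image is ~11 px.
print(footprint_px(10.0, 450.0, 90.0, 1024))
```

This is why the thresholds autoscale with resolution: at 450 m the footprint is only a handful of pixels, so per-pixel depth noise matters.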
LICENSE
ADDED

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024-present
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
README.md
ADDED

@@ -0,0 +1,31 @@
+# Drone Landing Site Safety
+
+Gradio apps that find safe landing zones in RGB imagery: run live inference or browse a precomputed curated gallery. The pipeline pairs monocular depth (Depth Anything 3) with promptable segmentation (water/motorways/trees) and geometric checks to flag flat, obstacle-free footprints, then renders overlays and metrics so you can see why a spot is safe.
+
+## What’s inside
+- **Main app (`gradio_app.py`)** — runs full inference (DepthAnything3 + SAM3 prompts for water/motorways/trees) with adjustable thresholds, overlays, and camera assumptions.
+- **Curated gallery (`curated_gradio_app.py`)** — precomputed PNG/JPG/JSON artifacts for fast, zero-GPU browsing.
+- Shared defaults live in `app/config.py` (`DEFAULT_ANALYZER_SETTINGS`), so both experiences stay in sync.
+
+## Prereqs
+- Python 3.10+ and a CUDA GPU for the main app (CPU works but is slow).
+- Sample images: drop your RGBs under `data/Image/` (create subfolders if you want); the repo ignores `data/`. Only curated outputs (5 VISLOC samples) are bundled.
+- Install deps: `pip install -r requirements.txt`.
+
+## Run the main Gradio app (live inference)
+```bash
+python gradio_app.py
+```
+- Key defaults: 1024px process cap, 10m footprint, clearance factor 1.0, prompts `water` / `motorway` / `trees`, SAM3 segmentation.
+- Useful env vars: `DA_USE_QUEUE=1`, `DA_SHARE=1`, `GRADIO_SERVER_PORT=7860`, `GRADIO_SERVER_PORT_RANGE=7860,7890`.
+
+## Curated gallery (precomputed, CPU-friendly)
+Already works out of the box with the bundled outputs in `app/demo_assets/curated/build/` (about 60 MB):
+```bash
+python curated_gradio_app.py
+```
+
+## References
+- UAV-VisLoc dataset: Xu et al., 2024 (https://arxiv.org/abs/2405.11936)
+- Depth Anything 3: Lin et al., 2025 (https://arxiv.org/abs/2511.10647)
+- SAM 3: “SAM 3: Segment Anything with Concepts” (https://ai.meta.com/research/publications/sam-3-segment-anything-with-concepts/)
app/config.py
CHANGED

@@ -4,7 +4,7 @@ from dataclasses import dataclass
 from pathlib import Path
 
 VISLOC_DIR = Path("data/Image/VISLOC")
+IMAGE_ROOT = Path("data/Image")
 VIDEO_DIR = Path("data/Video")
 IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".JPG", ".JPEG", ".PNG")
 VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv", ".flv", ".wmv", ".webm", ".m4v"}
@@ -13,38 +13,44 @@ ASSUMED_FOV_DEG = 90.0
 DEFAULT_MODEL_ID = "depth-anything/DA3MONO-LARGE"
 SEGMENTATION_MODEL_ID = "facebook/sam3"
 SEGMENTATION_MAX_SIDE = 384
-SEGMENTATION_SCORE_THRESH = 0.
-SEGMENTATION_MASK_THRESH = 0.
-WATER_PROMPT = "water
-ROAD_PROMPT = "
+SEGMENTATION_SCORE_THRESH = 0.25
+SEGMENTATION_MASK_THRESH = 0.25
+WATER_PROMPT = "water"
+ROAD_PROMPT = "motorway"
+TREE_PROMPT = "trees"
 
 
 @dataclass(frozen=True)
 class AnalyzerSettings:
     """Bundle knobs shared between the UI and the processing pipeline."""
 
-    footprint_m: float =
+    footprint_m: float = 10.0
     std_thresh: float = 0.005
     grad_thresh: float = 0.1
-    clearance_factor: float = 0
+    clearance_factor: float = 1.0
     process_res_cap: int = 1024
     depth_smoothing_base: float = 0.0
     segmentation_max_side: int = SEGMENTATION_MAX_SIDE
+    segmentation_model_id: str = SEGMENTATION_MODEL_ID
     segmentation_score_thresh: float = SEGMENTATION_SCORE_THRESH
     segmentation_mask_thresh: float = SEGMENTATION_MASK_THRESH
     water_prompt: str = WATER_PROMPT
     road_prompt: str = ROAD_PROMPT
+    tree_prompt: str = TREE_PROMPT
    coverage_strictness: float = 0.95
     openness_weight: float = 0.3
-    texture_threshold: float = 0.
+    texture_threshold: float = 0.3
     altitude_m: float = DEFAULT_ALTITUDE_M
     fov_deg: float = ASSUMED_FOV_DEG
     model_id: str = DEFAULT_MODEL_ID
 
 
+DEFAULT_ANALYZER_SETTINGS = AnalyzerSettings()
+
+
 __all__ = [
     "VISLOC_DIR",
-    "
+    "IMAGE_ROOT",
     "VIDEO_DIR",
     "IMAGE_EXTS",
     "VIDEO_EXTS",
@@ -57,5 +63,7 @@ __all__ = [
     "SEGMENTATION_MASK_THRESH",
     "WATER_PROMPT",
     "ROAD_PROMPT",
+    "TREE_PROMPT",
+    "DEFAULT_ANALYZER_SETTINGS",
     "AnalyzerSettings",
 ]
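Because `AnalyzerSettings` is a frozen dataclass, per-run overrides go through `dataclasses.replace` rather than mutation, so the shared `DEFAULT_ANALYZER_SETTINGS` instance stays untouched. A minimal standalone sketch of the pattern (the `Settings` class here is a stand-in, not the real one):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Settings:
    footprint_m: float = 10.0
    clearance_factor: float = 1.0

defaults = Settings()
# replace() returns a brand-new instance with the given fields swapped;
# assigning to a field directly would raise FrozenInstanceError.
custom = replace(defaults, clearance_factor=0.5)
print(defaults.clearance_factor, custom.clearance_factor)
```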
app/curated.py
ADDED

@@ -0,0 +1,107 @@
+from __future__ import annotations
+
+import json
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Any, Dict, List, Optional
+
+from PIL import Image
+
+
+@dataclass
+class CuratedSample:
+    id: str
+    title: str
+    description: str
+    tags: List[str]
+    source_path: str
+    composed_path: Path
+    rgb_path: Path
+    summary: Dict[str, Any]
+    request: Dict[str, Any]
+
+    def load_composed(self) -> Image.Image:
+        return Image.open(self.composed_path).convert("RGB")
+
+    def load_rgb(self) -> Image.Image:
+        return Image.open(self.rgb_path).convert("RGB")
+
+
+def format_status(summary: Optional[Dict[str, Any]]) -> str:
+    if not summary:
+        return "**Status:** Awaiting analysis."
+    mask_bits = []
+    for label, enabled, pct in (
+        ("Water", summary.get("water_mask_enabled"), summary.get("water_mask_pct")),
+        ("Road", summary.get("road_mask_enabled"), summary.get("road_mask_pct")),
+        ("Roof", summary.get("roof_mask_enabled"), summary.get("roof_mask_pct")),
+    ):
+        if enabled:
+            pct_text = "n/a" if pct is None else f"{pct:.1f}%"
+            mask_bits.append(f"{label} {pct_text}")
+        else:
+            mask_bits.append(f"{label} off")
+    masks_line = " • ".join(mask_bits)
+    lines = [
+        "**Status**",
+        f"Model: `{summary.get('model_id')}` — Process res: {summary.get('process_resolution')}px — Runtime: {summary.get('runtime_ms', 0):.0f} ms",
+        f"Footprint: {summary.get('footprint_m', 0.0):.1f} m ({summary.get('footprint_image_px', 0)}px image scale)",
+        f"Masks: {masks_line}",
+    ]
+    return "<br/>".join(lines)
+
+
+def format_metrics(summary: Optional[Dict[str, Any]]) -> str:
+    if not summary:
+        return "No metrics yet. Run the analyzer to populate this section."
+    lines = [
+        f"**Safe coverage:** {summary.get('safe_area_pct', 0.0):.1f}% of frame",
+        f"**Hazard coverage:** {summary.get('hazard_pct', 0.0):.1f}%",
+        f"**Landing center (px):** {summary.get('landing_center_image', ['-', '-'])[0]}, {summary.get('landing_center_image', ['-', '-'])[1]}",
+        f"**Footprint size:** {summary.get('footprint_m', 0.0):.1f} m ≈ {summary.get('footprint_image_px', 0)}px",
+        f"**Effective thresholds:** std ≤ {summary.get('std_thresh_applied', 0.0):.4f}, grad ≤ {summary.get('grad_thresh_applied', 0.0):.3f}",
+    ]
+    if not summary.get("used_valid_center", True):
+        lines.append("Warning: No fully safe footprint; showing lowest-variance patch.")
+    warnings = summary.get("warnings") or []
+    if warnings:
+        warn_lines = "<br/>".join(f"⚠️ {msg}" for msg in warnings)
+        lines.append(warn_lines)
+    return "<br/>".join(lines)
+
+
+def load_curated_index(index_path: Path) -> List[CuratedSample]:
+    if not index_path.exists():
+        return []
+    with index_path.open("r") as f:
+        data = json.load(f)
+    samples_in = data.get("samples", [])
+    if not isinstance(samples_in, list):
+        return []
+    base_dir = index_path.parent
+    samples: List[CuratedSample] = []
+    for item in samples_in:
+        if not isinstance(item, dict):
+            continue
+        artifacts = item.get("artifacts") or {}
+        try:
+            composed = base_dir / artifacts["composed"]
+            rgb = base_dir / artifacts["rgb"]
+        except Exception:
+            continue
+        sample = CuratedSample(
+            id=item.get("id") or composed.stem,
+            title=item.get("title") or item.get("id") or composed.stem,
+            description=item.get("description") or "",
+            tags=item.get("tags") or [],
+            source_path=item.get("source_path") or "",
+            composed_path=composed,
+            rgb_path=rgb,
+            summary=item.get("summary") or {},
+            request=item.get("request") or {},
+        )
+        samples.append(sample)
+    return samples
+
+
+__all__ = ["CuratedSample", "format_metrics", "format_status", "load_curated_index"]
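`load_curated_index` expects a top-level `"samples"` list whose entries carry an `"artifacts"` dict with paths relative to the index file's directory. A small sketch of that `index.json` shape and how artifact paths resolve (field values are illustrative, not from the bundled index):

```python
import json
import tempfile
from pathlib import Path

# Minimal index.json in the shape load_curated_index reads.
index = {
    "samples": [
        {
            "id": "visloc_01",
            "title": "River crossing",
            "artifacts": {"composed": "visloc_01_composed.png", "rgb": "visloc_01_rgb.png"},
            "summary": {"safe_area_pct": 42.0},
        }
    ]
}

with tempfile.TemporaryDirectory() as d:
    index_path = Path(d) / "index.json"
    index_path.write_text(json.dumps(index))
    data = json.loads(index_path.read_text())
    sample = data["samples"][0]
    # Artifacts resolve relative to the index file's parent directory.
    composed = index_path.parent / sample["artifacts"]["composed"]
    print(sample["id"], composed.name)
```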
app/curated_ui.py
ADDED

@@ -0,0 +1,119 @@
+from __future__ import annotations
+
+import os
+from pathlib import Path
+from typing import List
+
+import gradio as gr
+
+from .curated import CuratedSample, format_metrics, format_status, load_curated_index
+
+
+def _describe_sample(sample: CuratedSample) -> str:
+    lines: List[str] = [f"**{sample.title}**"]
+    if sample.description:
+        lines.append(sample.description)
+    if sample.tags:
+        lines.append("Tags: " + ", ".join(sample.tags))
+    if sample.source_path:
+        lines.append(f"Source: `{sample.source_path}`")
+    return "<br/>".join(lines)
+
+
+def build_curated_ui(index_path: str | Path | None = None) -> gr.Blocks:
+    resolved_index = Path(index_path) if index_path else Path(
+        os.getenv("CURATED_INDEX_PATH", "app/demo_assets/curated/build/index.json")
+    )
+    samples = load_curated_index(resolved_index)
+    title = "Landing Site Safety Analyzer — Curated Gallery"
+    if not samples:
+        with gr.Blocks(title=title) as demo:
+            gr.Markdown(
+                "## Curated gallery not built yet\n"
+                "Generate the curated outputs (PNG + JSON) first:\n"
+                "```bash\n"
+                "python scripts/precompute_curated.py --manifest app/demo_assets/curated/samples.yaml --output-dir app/demo_assets/curated/build\n"
+                "```\n"
+                "Then re-launch this app."
+            )
+        return demo
+
+    samples_by_id = {s.id: s for s in samples}
+    sample_options = [(s.title, s.id) for s in samples]
+    default_id = samples[0].id
+
+    def _select_sample(sample_id: str):
+        sample = samples_by_id.get(sample_id)
+        if not sample:
+            raise gr.Error("Sample not found; rebuild the curated index.")
+        composed = sample.load_composed()
+        rgb = sample.load_rgb()
+        return composed, rgb, _describe_sample(sample), format_status(sample.summary), format_metrics(sample.summary)
+
+    with gr.Blocks(title=title) as demo:
+        gr.Markdown(
+            "## Landing Site Safety — Curated Gallery\n"
+            "Zero-GPU preview. These results are precomputed from the analyzer and load instantly."
+        )
+        gr.Markdown(
+            "DepthAnything3 + segmentation + landing-safety scoring, showcased on VISLOC scenes. "
+            "Each sample pairs the raw RGB with a safety overlay that highlights flat, obstacle-free landing zones."
+        )
+        gr.Markdown(
+            "1) Pick a sample below. 2) Compare the RGB and safety overlay. 3) Scan the status/metrics to see why that spot was chosen."
+        )
+        gr.HTML(
+            """
+            <div style="background: #0d1117; color: #e6edf3; padding: 10px 12px; border-radius: 10px; font-size: 13px;">
+            <strong>Legend</strong>: safe mask = <span style="color:#00ff00;">green</span>, water = <span style="color:#0b84ff;">blue</span>, roads = <span style="color:#ff7800;">orange</span>, trees = <span style="color:#228b22;">forest green</span>, landing spot = blue box + crosshair.
+            </div>
+            """
+        )
+        with gr.Row(equal_height=True):
+            sample_dropdown = gr.Dropdown(
+                label="Sample",
+                choices=sample_options,
+                value=default_id,
+                interactive=True,
+                info="Pick a curated VISLOC image and view the precomputed safety analysis.",
+            )
+        with gr.Row():
+            rgb_view = gr.Image(label="RGB reference", type="pil", height=520, show_fullscreen_button=True)
+            composed_view = gr.Image(
+                label="Safety overlay",
+                type="pil",
+                show_fullscreen_button=True,
+                height=520,
+            )
+        gr.Markdown(
+            "**What to look for:** Samples span water, roads, trees, rooftops, and open terrain to show how prompts "
+            "(`water`, `motorway`, `trees`) guide hazard masking and landing-site selection."
+        )
+        with gr.Row():
+            with gr.Column():
+                gr.Markdown("**Sample overview**")
+                description_card = gr.Markdown()
+            with gr.Column():
+                gr.Markdown("**Status & masks**")
+                status_card = gr.Markdown()
+            with gr.Column():
+                gr.Markdown("**Safety metrics**")
+                metrics_card = gr.Markdown()
+
+        demo.load(
+            fn=_select_sample,
+            inputs=sample_dropdown,
+            outputs=[composed_view, rgb_view, description_card, status_card, metrics_card],
+            queue=False,
+        )
+        sample_dropdown.change(
+            fn=_select_sample,
+            inputs=sample_dropdown,
+            outputs=[composed_view, rgb_view, description_card, status_card, metrics_card],
+            queue=False,
+        )
+
+    return demo
+
+
+__all__ = ["build_curated_ui"]
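The gallery wires its widgets by sample id: dropdown choices are `(label, value)` pairs, and both the `load` and `change` handlers look the selected value up in a dict before returning outputs. The lookup-and-fail pattern can be sketched without gradio (the sample titles here are made up):

```python
# Dropdown choices as (label, value) pairs, keyed lookup by the value.
samples = [("River crossing", "visloc_01"), ("Motorway exit", "visloc_02")]
samples_by_id = {sid: title for title, sid in samples}

def select(sample_id: str) -> str:
    # Mirrors _select_sample: a stale id (e.g. after rebuilding the index)
    # raises instead of silently rendering nothing.
    if sample_id not in samples_by_id:
        raise KeyError("Sample not found; rebuild the curated index.")
    return samples_by_id[sample_id]

print(select("visloc_02"))
```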
app/data_sources.py
CHANGED

@@ -3,7 +3,7 @@ from __future__ import annotations
 from functools import lru_cache
 from pathlib import Path
 
-from .config import
+from .config import IMAGE_EXTS, IMAGE_ROOT, VIDEO_DIR, VIDEO_EXTS, VISLOC_DIR
 
 
 @lru_cache(maxsize=1)
@@ -14,14 +14,6 @@ def list_visloc_images() -> list[Path]:
     return sorted(files)
 
 
-@lru_cache(maxsize=1)
-def list_hagdavs_images() -> list[Path]:
-    if not HAGDAVS_DIR.exists():
-        return []
-    files = [p for p in HAGDAVS_DIR.iterdir() if p.suffix in IMAGE_EXTS]
-    return sorted(files)
-
-
 @lru_cache(maxsize=1)
 def list_videos() -> list[Path]:
     if not VIDEO_DIR.exists():
@@ -32,19 +24,34 @@ def list_videos() -> list[Path]:
 
 @lru_cache(maxsize=1)
 def list_all_data_inputs() -> list[str]:
+    paths: list[Path] = []
+
+    def _add(paths_in: list[Path]):
+        for p in paths_in:
+            if p not in paths:
+                paths.append(p)
+
+    # Prefer structured datasets first
+    _add(list_visloc_images())
+
+    # Allow arbitrary images anywhere under data/Image/
+    if IMAGE_ROOT.exists():
+        for p in IMAGE_ROOT.rglob("*"):
+            if p.is_file() and p.suffix in IMAGE_EXTS:
+                if p not in paths:
+                    paths.append(p)
+
+    return [str(p) for p in sorted(paths)]
 
 
 def clear_caches() -> None:
     list_visloc_images.cache_clear()
-    list_hagdavs_images.cache_clear()
     list_videos.cache_clear()
     list_all_data_inputs.cache_clear()
 
 
 __all__ = [
     "list_visloc_images",
-    "list_hagdavs_images",
     "list_videos",
     "list_all_data_inputs",
     "clear_caches",
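The new `list_all_data_inputs` walks everything under `data/Image/` with `rglob` and keeps files whose suffix appears in the exact-case `IMAGE_EXTS` tuple. A self-contained sketch of that discovery step against a throwaway directory tree (the `discover_images` helper is mine):

```python
import tempfile
from pathlib import Path

IMAGE_EXTS = (".jpg", ".jpeg", ".png", ".JPG", ".JPEG", ".PNG")

def discover_images(root: Path) -> list[str]:
    # Recursively collect image files, matching the suffix check used by
    # list_all_data_inputs: exact-case membership in the extensions tuple.
    paths = [p for p in root.rglob("*") if p.is_file() and p.suffix in IMAGE_EXTS]
    return [str(p) for p in sorted(paths)]

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "VISLOC").mkdir()
    (root / "VISLOC" / "a.JPG").touch()
    (root / "custom.png").touch()
    (root / "notes.txt").touch()  # ignored: not an image extension
    found = discover_images(root)
    print([Path(p).name for p in found])
```

Note the tuple deliberately lists both lower- and upper-case variants, since `Path.suffix` is never case-folded.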
app/depth_pipeline.py
CHANGED

@@ -69,14 +69,26 @@ def fit_plane_ransac(points: np.ndarray, values: np.ndarray, iterations: int = 2
     return best_coef
 
 
-def remove_global_plane(depth: np.ndarray) -> np.ndarray:
+def remove_global_plane(depth: np.ndarray, method: str = "least_squares") -> np.ndarray:
     if depth.ndim != 2:
         return depth
+    method = (method or "least_squares").lower()
+    if method in {"none", "off"}:
+        return depth
     h, w = depth.shape
     yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
     points = np.stack((xx.flatten(), yy.flatten()), axis=1)
     values = depth.astype(np.float32).reshape(-1, 1)
-    coef =
+    coef = None
+    if method in {"ls", "least_squares", "lstsq"}:
+        try:
+            coef, *_ = np.linalg.lstsq(
+                np.concatenate([points, np.ones((points.shape[0], 1), dtype=np.float32)], axis=1),
+                values,
+                rcond=None,
+            )
+        except np.linalg.LinAlgError:
+            coef = None
     if coef is None:
         return depth
     plane = (points @ coef[:2] + coef[2]).reshape(h, w)
@@ -151,10 +163,14 @@ class DepthEngine:
         return self._model_cache[model_id]
 
     def predict_depth(
-        self, image: np.ndarray, model_id: str, process_res_cap: int
-    ) -> tuple[np.ndarray, np.ndarray, int]:
+        self, image: np.ndarray, model_id: str, process_res_cap: int, plane_method: str = "least_squares"
+    ) -> tuple[np.ndarray, np.ndarray, int, dict[str, float]]:
+        import time as _time
+
+        t0 = _time.perf_counter()
         model, device = self.get_model(model_id)
         process_res = min(max(image.shape[0], image.shape[1]), int(process_res_cap))
+        t_pre = _time.perf_counter()
         with torch.inference_mode():
             pred = model.inference(
                 image=[image],
@@ -162,9 +178,16 @@ class DepthEngine:
                 process_res_method="upper_bound_resize",
                 export_dir=None,
             )
+        t_model = _time.perf_counter()
         depth_raw = np.array(pred.depth[0])
-        depth = remove_global_plane(depth_raw)
-
+        depth = remove_global_plane(depth_raw, method=plane_method)
+        t_post = _time.perf_counter()
+        timings = {
+            "prep_ms": (t_pre - t0) * 1000.0,
+            "model_ms": (t_model - t_pre) * 1000.0,
+            "plane_ms": (t_post - t_model) * 1000.0,
+        }
+        return depth_raw, depth, process_res, timings
 
 
 def smooth_depth(depth: np.ndarray, sigma: float) -> np.ndarray:
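The least-squares branch added to `remove_global_plane` fits z = a·x + b·y + c over pixel coordinates and subtracts it, so a uniformly tilted ground plane reads as flat before the flatness thresholds are applied. A condensed, runnable sketch of just that fit-and-subtract step (helper name `remove_plane` is mine):

```python
import numpy as np

def remove_plane(depth: np.ndarray) -> np.ndarray:
    # Fit z = a*x + b*y + c by least squares over pixel coordinates,
    # then subtract the fitted plane (as in remove_global_plane).
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    pts = np.stack((xx.ravel(), yy.ravel()), axis=1)
    A = np.concatenate([pts, np.ones((pts.shape[0], 1), dtype=np.float32)], axis=1)
    coef, *_ = np.linalg.lstsq(A, depth.astype(np.float32).reshape(-1, 1), rcond=None)
    plane = (pts @ coef[:2] + coef[2]).reshape(h, w)
    return depth - plane

# A purely tilted depth map becomes numerically flat after removal.
tilted = 0.01 * np.arange(32, dtype=np.float32)[None, :] + np.zeros((32, 32), np.float32)
residual = remove_plane(tilted)
print(float(np.abs(residual).max()) < 1e-3)
```

The real function keeps a RANSAC path (`fit_plane_ransac`) for scenes where outliers like rooftops would bias a plain least-squares fit.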
app/safety.py
CHANGED

@@ -10,9 +10,9 @@ import numpy as np
 import torch
 from PIL import Image
 
-from .config import IMAGE_EXTS
 from .depth_pipeline import DepthEngine, compute_roof_mask_depth, crop_nonblack, pick_flat_patch, smooth_depth
-from .segmentation import SegmenterRequest, SegmenterService
 from .visualization import build_result_layers
 
 
@@ -24,14 +24,17 @@ class AnalysisRequest:
     use_water_mask: bool
     use_road_mask: bool
     use_roof_mask: bool
     water_prompt: str
     road_prompt: str
     altitude_m: float
     fov_deg: float
     clearance_factor: float
     process_res_cap: int
     depth_smoothing_base: float
     segmentation_max_side: int
     segmentation_score_thresh: float
     segmentation_mask_thresh: float
     coverage_strictness: float
@@ -74,7 +77,12 @@ class AnalysisResult:
 class SafetyAnalyzer:
     def __init__(self, depth_engine: DepthEngine | None = None, segmenter: SegmenterService | None = None):
         self.depth_engine = depth_engine or DepthEngine()
-        self.segmenter = segmenter or
 
     @staticmethod
     def build_depth_roof_mask(
@@ -119,10 +127,17 @@ class SafetyAnalyzer:
     def analyze_image(self, image: Image.Image, request: AnalysisRequest) -> AnalysisResult:
         t0 = time.perf_counter()
         rgb_np = np.array(image)
-
         res_scale = max(0.5, min(2.5, process_res / 1024))
         sigma = max(0.0, request.depth_smoothing_base) * res_scale
         depth = smooth_depth(depth, sigma)
 
         fov = max(10.0, min(170.0, float(request.fov_deg)))
         altitude = max(1.0, float(request.altitude_m))
@@ -158,8 +173,9 @@ class SafetyAnalyzer:
         std_map_vis = np.sqrt(
             np.maximum(box_mean_np(depth_norm * depth_norm, vis_patch) - box_mean_np(depth_norm, vis_patch) ** 2, 0.0)
         )
 
-        gray = cv2.cvtColor(
         gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
         gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
         texture = np.sqrt(gx * gx + gy * gy)
@@ -169,14 +185,21 @@ class SafetyAnalyzer:
             texture_norm = (texture - texture.min()) / (np.ptp(texture) + 1e-6)
         else:
             texture_norm = np.zeros_like(texture)
 
         water_mask_resized = None
         road_mask_resized = None
         roof_mask_resized = None
         water_mask_block = None
         road_mask_block = None
         roof_mask_block = None
 
         def expand_mask_for_footprint(mask: np.ndarray | None) -> np.ndarray | None:
             if mask is None:
@@ -189,19 +212,22 @@ class SafetyAnalyzer:
                 return mask.copy()
             expanded = cv2.dilate(mask.astype(np.uint8), kernel, iterations=1)
             return expanded.astype(bool)
-        if request.use_water_mask or request.use_road_mask:
             masks = self.segmenter.get_masks(
                 SegmenterRequest(
-                    image=
                     source_path=request.source_path,
                     want_water=request.use_water_mask,
                     want_road=request.use_road_mask,
-
                     water_prompt=request.water_prompt,
                     road_prompt=request.road_prompt,
                     score_threshold=float(request.segmentation_score_thresh),
                     mask_threshold=float(request.segmentation_mask_thresh),
-                )
                 )
             if request.use_water_mask and masks.get("water") is not None:
                 water_mask_resized = Image.fromarray(masks["water"].astype(np.uint8) * 255).resize(
@@ -215,6 +241,13 @@ class SafetyAnalyzer:
                 )
                 road_mask_resized = np.array(road_mask_resized) > 0
                 road_mask_block = expand_mask_for_footprint(road_mask_resized)
 
         # Autoscale sensitivity with resolution: stricter when resolution is low
         std_thresh_eff = max(1e-6, float(request.std_thresh)) * (res_scale ** -0.5)
@@ -227,6 +260,7 @@ class SafetyAnalyzer:
             grad_thresh=grad_thresh_eff,
             water_mask=water_mask_block if water_mask_block is not None else water_mask_resized,
         )
         if request.use_roof_mask:
             roof_mask_resized = self.build_depth_roof_mask(
                 depth=depth,
@@ -236,13 +270,14 @@ class SafetyAnalyzer:
             )
             roof_mask_block = expand_mask_for_footprint(roof_mask_resized)
         seg_block_mask = None
-        for mask in (water_mask_block, road_mask_block, roof_mask_block):
             if mask is None:
                 continue
             if seg_block_mask is None:
|
| 243 |
seg_block_mask = mask.copy()
|
| 244 |
else:
|
| 245 |
seg_block_mask |= mask
|
|
|
|
| 246 |
if seg_block_mask is not None:
|
| 247 |
landing_mask = landing_mask & (~seg_block_mask)
|
| 248 |
if half_span > 0:
|
|
@@ -386,24 +421,69 @@ class SafetyAnalyzer:
|
|
| 386 |
center_img = (cx_img, cy_img)
|
| 387 |
center_depth = (cx, cy)
|
| 388 |
|
| 389 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 390 |
try:
|
| 391 |
footprint_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_px, patch_px))
|
| 392 |
-
safe_display_mask = cv2.dilate(
|
|
|
|
|
|
|
| 393 |
except Exception:
|
| 394 |
-
|
| 395 |
mask_union = None
|
| 396 |
-
|
|
|
|
| 397 |
if mask is None:
|
| 398 |
continue
|
| 399 |
if mask_union is None:
|
| 400 |
mask_union = mask.copy()
|
| 401 |
else:
|
| 402 |
mask_union |= mask
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 403 |
seg_mask_union = mask_union.copy() if mask_union is not None else None
|
| 404 |
if mask_union is not None:
|
| 405 |
safe_display_mask = safe_display_mask & (~mask_union)
|
| 406 |
hazard_mask = ~safe_display_mask
|
|
|
|
|
|
|
| 407 |
|
| 408 |
layers = build_result_layers(
|
| 409 |
image=image,
|
|
@@ -418,7 +498,7 @@ class SafetyAnalyzer:
|
|
| 418 |
water_mask=water_mask_resized,
|
| 419 |
road_mask=road_mask_resized,
|
| 420 |
roof_mask=roof_mask_resized,
|
| 421 |
-
|
| 422 |
hazard_mask=hazard_mask,
|
| 423 |
)
|
| 424 |
try:
|
|
@@ -451,6 +531,20 @@ class SafetyAnalyzer:
|
|
| 451 |
elif roof_mask_resized is None:
|
| 452 |
warnings.append("Roof segmentation unavailable; continuing without mask.")
|
| 453 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 454 |
summary = AnalysisSummary(
|
| 455 |
model_id=request.model_id,
|
| 456 |
process_resolution=process_res,
|
|
|
|
import torch
from PIL import Image

+from .config import DEFAULT_MODEL_ID, IMAGE_EXTS
from .depth_pipeline import DepthEngine, compute_roof_mask_depth, crop_nonblack, pick_flat_patch, smooth_depth
+from .segmentation import SegmenterRequest, SegmenterService, get_global_segmenter
from .visualization import build_result_layers

    use_water_mask: bool
    use_road_mask: bool
    use_roof_mask: bool
+   use_tree_mask: bool
    water_prompt: str
    road_prompt: str
+   tree_prompt: str
    altitude_m: float
    fov_deg: float
    clearance_factor: float
    process_res_cap: int
    depth_smoothing_base: float
    segmentation_max_side: int
+   segmentation_model_id: str
    segmentation_score_thresh: float
    segmentation_mask_thresh: float
    coverage_strictness: float

class SafetyAnalyzer:
    def __init__(self, depth_engine: DepthEngine | None = None, segmenter: SegmenterService | None = None):
        self.depth_engine = depth_engine or DepthEngine()
+       self.segmenter = segmenter or get_global_segmenter()
+       # Preload default depth model to avoid first-call latency spikes.
+       try:
+           self.depth_engine.get_model(DEFAULT_MODEL_ID)
+       except Exception as exc:
+           print(f"[WARN] Could not preload depth model {DEFAULT_MODEL_ID}: {exc}")

    @staticmethod
    def build_depth_roof_mask(
    def analyze_image(self, image: Image.Image, request: AnalysisRequest) -> AnalysisResult:
        t0 = time.perf_counter()
        rgb_np = np.array(image)
+       t_rgb = time.perf_counter()
+       depth_raw, depth, process_res, depth_times = self.depth_engine.predict_depth(
+           rgb_np, request.model_id, request.process_res_cap, "least_squares"
+       )
+       t_depth = time.perf_counter()
        res_scale = max(0.5, min(2.5, process_res / 1024))
        sigma = max(0.0, request.depth_smoothing_base) * res_scale
        depth = smooth_depth(depth, sigma)
+       # Keep all downstream processing at the depth resolution to avoid expensive full-res passes.
+       proc_size = (depth.shape[1], depth.shape[0])  # (W, H)
+       rgb_proc = cv2.resize(rgb_np, proc_size, interpolation=cv2.INTER_AREA) if rgb_np.shape[:2][::-1] != proc_size else rgb_np

        fov = max(10.0, min(170.0, float(request.fov_deg)))
        altitude = max(1.0, float(request.altitude_m))
        std_map_vis = np.sqrt(
            np.maximum(box_mean_np(depth_norm * depth_norm, vis_patch) - box_mean_np(depth_norm, vis_patch) ** 2, 0.0)
        )
+       t_depth_post = time.perf_counter()

+       gray = cv2.cvtColor(rgb_proc, cv2.COLOR_RGB2GRAY).astype(np.float32) / 255.0
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        texture = np.sqrt(gx * gx + gy * gy)

            texture_norm = (texture - texture.min()) / (np.ptp(texture) + 1e-6)
        else:
            texture_norm = np.zeros_like(texture)
+
+       dy_depth, dx_depth = np.gradient(depth_norm)
+       grad_mag = np.sqrt(dx_depth * dx_depth + dy_depth * dy_depth)
+       grad_ref = np.percentile(grad_mag, 95) + 1e-6
+       grad_norm = np.clip(grad_mag / grad_ref, 0.0, 1.0)
+       t_texture = time.perf_counter()

        water_mask_resized = None
        road_mask_resized = None
        roof_mask_resized = None
+       tree_mask_resized = None
        water_mask_block = None
        road_mask_block = None
        roof_mask_block = None
+       tree_mask_block = None

        def expand_mask_for_footprint(mask: np.ndarray | None) -> np.ndarray | None:
            if mask is None:
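The added gradient path normalizes the depth-gradient magnitude by its 95th percentile before thresholding, so the `grad_thresh` slider behaves consistently across scenes with different depth ranges. The same normalization in isolation (function name hypothetical):

```python
import numpy as np


def normalized_gradient(depth_norm: np.ndarray) -> np.ndarray:
    # Gradient magnitude of the normalized depth map, rescaled by its
    # 95th percentile so a fixed threshold is robust to depth outliers.
    dy, dx = np.gradient(depth_norm)
    grad_mag = np.sqrt(dx * dx + dy * dy)
    grad_ref = np.percentile(grad_mag, 95) + 1e-6
    return np.clip(grad_mag / grad_ref, 0.0, 1.0)
```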
                return mask.copy()
            expanded = cv2.dilate(mask.astype(np.uint8), kernel, iterations=1)
            return expanded.astype(bool)
+       if request.use_water_mask or request.use_road_mask or request.use_tree_mask:
            masks = self.segmenter.get_masks(
                SegmenterRequest(
+                   image=Image.fromarray(rgb_proc),
                    source_path=request.source_path,
                    want_water=request.use_water_mask,
                    want_road=request.use_road_mask,
+                   want_tree=request.use_tree_mask,
+                   max_side=int(max(128, min(request.segmentation_max_side, process_res))),
                    water_prompt=request.water_prompt,
                    road_prompt=request.road_prompt,
+                   tree_prompt=request.tree_prompt,
                    score_threshold=float(request.segmentation_score_thresh),
                    mask_threshold=float(request.segmentation_mask_thresh),
+               ),
+               model_id=request.segmentation_model_id,
            )
            if request.use_water_mask and masks.get("water") is not None:
                water_mask_resized = Image.fromarray(masks["water"].astype(np.uint8) * 255).resize(

                )
                road_mask_resized = np.array(road_mask_resized) > 0
                road_mask_block = expand_mask_for_footprint(road_mask_resized)
+           if request.use_tree_mask and masks.get("tree") is not None:
+               tree_mask_resized = Image.fromarray(masks["tree"].astype(np.uint8) * 255).resize(
+                   (depth.shape[1], depth.shape[0]), resample=Image.NEAREST
+               )
+               tree_mask_resized = np.array(tree_mask_resized) > 0
+               tree_mask_block = expand_mask_for_footprint(tree_mask_resized)
+       t_masks = time.perf_counter()

        # Autoscale sensitivity with resolution: stricter when resolution is low
        std_thresh_eff = max(1e-6, float(request.std_thresh)) * (res_scale ** -0.5)
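The segmentation masks come back at the (smaller) segmentation resolution and are upsampled to the depth map with `Image.NEAREST`, which keeps boolean edges hard instead of producing gray anti-aliased borders. The pattern in isolation (helper name hypothetical):

```python
import numpy as np
from PIL import Image


def resize_bool_mask(mask: np.ndarray, size_hw: tuple) -> np.ndarray:
    # Upscale a boolean mask with nearest-neighbour resampling so the
    # result stays strictly binary after thresholding.
    h, w = size_hw
    img = Image.fromarray(mask.astype(np.uint8) * 255)
    return np.array(img.resize((w, h), resample=Image.NEAREST)) > 0
```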
            grad_thresh=grad_thresh_eff,
            water_mask=water_mask_block if water_mask_block is not None else water_mask_resized,
        )
+       t_pick = time.perf_counter()
        if request.use_roof_mask:
            roof_mask_resized = self.build_depth_roof_mask(
                depth=depth,

            )
            roof_mask_block = expand_mask_for_footprint(roof_mask_resized)
        seg_block_mask = None
+       for mask in (water_mask_block, road_mask_block, tree_mask_block, roof_mask_block):
            if mask is None:
                continue
            if seg_block_mask is None:
                seg_block_mask = mask.copy()
            else:
                seg_block_mask |= mask
+       landing_mask_pre_interior = landing_mask.copy()
        if seg_block_mask is not None:
            landing_mask = landing_mask & (~seg_block_mask)
        if half_span > 0:
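`expand_mask_for_footprint` dilates each obstacle mask so that any candidate landing center whose footprint square would overlap the obstacle is also blocked. The app does this with `cv2.dilate` and a rectangular kernel; the effect is a plain square dilation, sketched here with NumPy only (names hypothetical):

```python
import numpy as np


def expand_for_footprint(mask: np.ndarray, half_span: int) -> np.ndarray:
    # Square dilation by the footprint half-span: a pixel is blocked if
    # any obstacle pixel falls inside its (2*half_span+1)^2 window.
    # Equivalent to cv2.dilate with a MORPH_RECT kernel of that size.
    padded = np.pad(mask, half_span)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(2 * half_span + 1):
        for dx in range(2 * half_span + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```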
        center_img = (cx_img, cy_img)
        center_depth = (cx, cy)

+       # Display mask without interior cropping so overlays are not clipped at borders.
+       safe_display_mask = (
+           (std_map < std_thresh_eff)
+           & (grad_norm < grad_thresh_eff)
+           & landing_mask_pre_interior
+           & texture_mask
+       )
+       if seg_block_mask is not None:
+           safe_display_mask = safe_display_mask & (~seg_block_mask)
+       try:
+           clearance_px = max(1, int(round(request.clearance_factor * patch_px)))
+           if clearance_px % 2 == 0:
+               clearance_px += 1
+           kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (clearance_px, clearance_px))
+           hazard_disp = ~safe_display_mask
+           if seg_block_mask is not None:
+               hazard_disp = hazard_disp & (~seg_block_mask)
+           buffered_disp = cv2.dilate(hazard_disp.astype(np.uint8), kernel, iterations=1).astype(bool)
+           safe_display_mask = safe_display_mask & (~buffered_disp)
+           if seg_block_mask is not None:
+               safe_display_mask = safe_display_mask & (~seg_block_mask)
+       except Exception:
+           pass
+       try:
+           coverage_disp = cv2.boxFilter(
+               safe_display_mask.astype(np.float32),
+               ddepth=-1,
+               ksize=(patch_px, patch_px),
+               normalize=True,
+               anchor=(patch_px // 2, patch_px // 2),
+           )
+           safe_display_mask = coverage_disp >= max(0.0, min(1.0, request.coverage_strictness))
+       except Exception:
+           pass
        try:
            footprint_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_px, patch_px))
+           safe_display_mask = cv2.dilate(safe_display_mask.astype(np.uint8), footprint_kernel, iterations=1).astype(
+               bool
+           )
        except Exception:
+           pass
        mask_union = None
+       overlay_union = None
+       for mask in (water_mask_resized, road_mask_resized, tree_mask_resized, roof_mask_resized):
            if mask is None:
                continue
            if mask_union is None:
                mask_union = mask.copy()
            else:
                mask_union |= mask
+       for mask in (water_mask_resized, road_mask_resized, tree_mask_resized):
+           if mask is None:
+               continue
+           if overlay_union is None:
+               overlay_union = mask.copy()
+           else:
+               overlay_union |= mask
        seg_mask_union = mask_union.copy() if mask_union is not None else None
        if mask_union is not None:
            safe_display_mask = safe_display_mask & (~mask_union)
        hazard_mask = ~safe_display_mask
+       if roof_mask_resized is not None:
+           hazard_mask = hazard_mask & (~roof_mask_resized)

        layers = build_result_layers(
            image=image,
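The coverage step added above keeps a pixel only if at least `coverage_strictness` of its footprint window is safe. The diff computes this with a normalized `cv2.boxFilter`; the same box mean can be written with an integral image, which makes the arithmetic explicit (names hypothetical; border handling here is zero padding, whereas OpenCV defaults to reflection):

```python
import numpy as np


def box_coverage(safe: np.ndarray, patch: int) -> np.ndarray:
    # Fraction of safe pixels inside the patch x patch window centred on
    # each pixel, via an integral image (O(1) per pixel after cumsum).
    pad = patch // 2
    f = np.pad(safe.astype(np.float64), pad)
    ii = np.pad(f.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    h, w = safe.shape
    y0 = np.arange(h)[:, None]
    x0 = np.arange(w)[None, :]
    # Standard 4-corner identity: window sum = D - B - C + A.
    sums = ii[y0 + patch, x0 + patch] - ii[y0, x0 + patch] - ii[y0 + patch, x0] + ii[y0, x0]
    return sums / float(patch * patch)
```

A center is then accepted with `box_coverage(safe, patch_px) >= coverage_strictness`, exactly the thresholding the diff applies to `coverage_disp`.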
            water_mask=water_mask_resized,
            road_mask=road_mask_resized,
            roof_mask=roof_mask_resized,
+           tree_mask=tree_mask_resized,
            hazard_mask=hazard_mask,
        )
        try:

        elif roof_mask_resized is None:
            warnings.append("Roof segmentation unavailable; continuing without mask.")

+       t_final = time.perf_counter()
+       print(
+           "[TIMING] rgb->np {:.0f}ms | depth_model {:.0f}ms | plane {:.0f}ms | depth_misc {:.0f}ms | texture {:.0f}ms | masks {:.0f}ms | pick {:.0f}ms | compose {:.0f}ms | total {:.0f}ms".format(
+               (t_rgb - t0) * 1000,
+               depth_times.get("model_ms", 0.0),
+               depth_times.get("plane_ms", 0.0),
+               depth_times.get("prep_ms", 0.0),
+               (t_texture - t_depth_post) * 1000,
+               (t_masks - t_texture) * 1000,
+               (t_pick - t_masks) * 1000,
+               (t_final - t_pick) * 1000,
+               (t_final - t0) * 1000,
+           )
+       )
        summary = AnalysisSummary(
            model_id=request.model_id,
            process_resolution=process_res,
app/segmentation.py
CHANGED
|
@@ -1,7 +1,6 @@
from __future__ import annotations

from dataclasses import dataclass
-from typing import Dict, Optional
import re

import numpy as np

@@ -15,6 +14,7 @@ from .config import (
    SEGMENTATION_MODEL_ID,
    SEGMENTATION_SCORE_THRESH,
    WATER_PROMPT,
)

@@ -23,6 +23,13 @@ class SemanticSegmenter:
    def __init__(self, model_id: str):
        import transformers  # type: ignore

        processor_cls = getattr(transformers, "Sam3Processor", None) or getattr(
            transformers, "AutoProcessor", None

@@ -113,9 +120,11 @@ class SegmenterRequest:
    source_path: Optional[str] = None
    want_water: bool = False
    want_road: bool = False
    max_side: int = SEGMENTATION_MAX_SIDE
    water_prompt: str = WATER_PROMPT
    road_prompt: str = ROAD_PROMPT
    score_threshold: float = SEGMENTATION_SCORE_THRESH
    mask_threshold: float = SEGMENTATION_MASK_THRESH

@@ -126,21 +135,28 @@ class SegmenterService:
    def __init__(self, model_id: str = SEGMENTATION_MODEL_ID):
        self.model_id = model_id
        self._segmenters: Dict[str, SemanticSegmenter] = {}

    def _get_segmenter(self, model_id: str) -> SemanticSegmenter:
        if model_id not in self._segmenters:
            self._segmenters[model_id] = SemanticSegmenter(model_id)
        return self._segmenters[model_id]

-   def get_masks(self, request: SegmenterRequest) -> dict[str, np.ndarray]:
-       if not (request.want_water or request.want_road):
            return {}
-       segmenter = self._get_segmenter(self.model_id)
        prompts: dict[str, str] = {}
        if request.want_water and request.water_prompt:
            prompts["water"] = request.water_prompt
        if request.want_road and request.road_prompt:
            prompts["road"] = request.road_prompt
        try:
            masks = segmenter.segment(
                request.image,

@@ -161,3 +177,13 @@

__all__ = ["SegmenterService", "SegmenterRequest", "SemanticSegmenter"]
from __future__ import annotations

from dataclasses import dataclass
import re

import numpy as np

    SEGMENTATION_MODEL_ID,
    SEGMENTATION_SCORE_THRESH,
    WATER_PROMPT,
+   TREE_PROMPT,
)

    def __init__(self, model_id: str):
        import transformers  # type: ignore
+       from transformers.utils import logging as hf_logging  # type: ignore
+
+       hf_logging.set_verbosity_error()
+       try:
+           hf_logging.disable_progress_bar()
+       except Exception:
+           pass

        processor_cls = getattr(transformers, "Sam3Processor", None) or getattr(
            transformers, "AutoProcessor", None

    source_path: Optional[str] = None
    want_water: bool = False
    want_road: bool = False
+   want_tree: bool = False
    max_side: int = SEGMENTATION_MAX_SIDE
    water_prompt: str = WATER_PROMPT
    road_prompt: str = ROAD_PROMPT
+   tree_prompt: str = TREE_PROMPT
    score_threshold: float = SEGMENTATION_SCORE_THRESH
    mask_threshold: float = SEGMENTATION_MASK_THRESH

    def __init__(self, model_id: str = SEGMENTATION_MODEL_ID):
        self.model_id = model_id
        self._segmenters: Dict[str, SemanticSegmenter] = {}
+       # Eagerly load the default model once to avoid repeated cold-starts.
+       try:
+           self._segmenters[model_id] = SemanticSegmenter(model_id)
+       except Exception as exc:
+           print(f"[WARN] Failed to preload segmentation model {model_id}: {exc}")

    def _get_segmenter(self, model_id: str) -> SemanticSegmenter:
        if model_id not in self._segmenters:
            self._segmenters[model_id] = SemanticSegmenter(model_id)
        return self._segmenters[model_id]

+   def get_masks(self, request: SegmenterRequest, model_id: str | None = None) -> dict[str, np.ndarray]:
+       if not (request.want_water or request.want_road or request.want_tree):
            return {}
+       segmenter = self._get_segmenter(model_id or self.model_id)
        prompts: dict[str, str] = {}
        if request.want_water and request.water_prompt:
            prompts["water"] = request.water_prompt
        if request.want_road and request.road_prompt:
            prompts["road"] = request.road_prompt
+       if request.want_tree and request.tree_prompt:
+           prompts["tree"] = request.tree_prompt
        try:
            masks = segmenter.segment(
                request.image,

__all__ = ["SegmenterService", "SegmenterRequest", "SemanticSegmenter"]
+
+# Shared singleton to avoid reloads across analyzer instances
+_GLOBAL_SEGMENTER: SegmenterService | None = None
+
+
+def get_global_segmenter(default_model_id: str = SEGMENTATION_MODEL_ID) -> SegmenterService:
+    global _GLOBAL_SEGMENTER
+    if _GLOBAL_SEGMENTER is None or _GLOBAL_SEGMENTER.model_id != default_model_id:
+        _GLOBAL_SEGMENTER = SegmenterService(default_model_id)
+    return _GLOBAL_SEGMENTER
app/ui.py
CHANGED
|
@@ -5,7 +5,7 @@ from typing import Dict

import gradio as gr

-from .config import
from .data_sources import list_all_data_inputs
from .safety import AnalysisRequest, AnalysisSummary, SafetyAnalyzer
from .visualization import compose_view

@@ -19,14 +19,16 @@ def _make_request(
    use_water_mask,
    use_road_mask,
    use_roof_mask,
    water_prompt,
    road_prompt,
    altitude_m,
    fov_deg,
    clearance_factor,
    process_res_cap,
-   depth_smoothing_base,
    segmentation_max_side,
    segmentation_score_thresh,
    segmentation_mask_thresh,
    coverage_strictness,

@@ -41,14 +43,17 @@ def _make_request(
    use_water_mask=use_water_mask,
    use_road_mask=use_road_mask,
    use_roof_mask=use_roof_mask,
    water_prompt=water_prompt,
    road_prompt=road_prompt,
    altitude_m=altitude_m,
    fov_deg=fov_deg,
    clearance_factor=clearance_factor,
    process_res_cap=process_res_cap,
-   depth_smoothing_base=
    segmentation_max_side=segmentation_max_side,
    segmentation_score_thresh=segmentation_score_thresh,
    segmentation_mask_thresh=segmentation_mask_thresh,
    coverage_strictness=coverage_strictness,

@@ -102,7 +107,7 @@ def _format_metrics(summary: AnalysisSummary | None) -> str:

def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
    analyzer = analyzer or SafetyAnalyzer()
-   defaults =
    data_inputs = list_all_data_inputs()

    with gr.Blocks(title="Landing Site Safety Analyzer (VISLOC)") as demo:
@@ -110,6 +115,30 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
        "## Landing Site Safety Analyzer\n"
        "Evaluate VISLOC imagery with DepthAnything3 to spot flat, obstacle-free landing sites."
    )
    images_state = gr.State({})

    with gr.Row(equal_height=False):

@@ -130,6 +159,14 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
        ],
        info="Select a pretrained checkpoint.",
    )
    footprint_m = gr.Slider(
        label="Landing footprint (meters)",
        value=defaults.footprint_m,

@@ -154,23 +191,67 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
        step=0.01,
        info="Lower suppresses slopes/edges; higher tolerates tilt.",
    )
-   use_water_mask = gr.Checkbox(
    water_prompt = gr.Textbox(
        label="Water prompt",
        value=defaults.water_prompt,
-       placeholder="e.g., water
    )
    use_road_mask = gr.Checkbox(label="Exclude roads (segmentation)", value=True)
    road_prompt = gr.Textbox(
        label="Road prompt",
        value=defaults.road_prompt,
-       placeholder="e.g., road
    )
-
    with gr.Row():
        run_btn = gr.Button("Run", variant="primary")
        stop_btn = gr.Button("Stop", variant="stop")
-   with gr.Accordion("
    gr.Markdown("Adjust detail levels and scoring.")
    clearance_factor = gr.Slider(
        label="Clearance factor",
@@ -180,22 +261,6 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
        step=0.05,
        info="Dilate unsafe regions relative to footprint size.",
    )
-   process_res_cap = gr.Slider(
-       label="Depth max side (px)",
-       value=defaults.process_res_cap,
-       minimum=512,
-       maximum=2048,
-       step=32,
-       info="Largest long-side resolution fed into the depth model.",
-   )
-   depth_smoothing_base = gr.Slider(
-       label="Depth smoothing base",
-       value=defaults.depth_smoothing_base,
-       minimum=0.0,
-       maximum=2.0,
-       step=0.05,
-       info="Base Gaussian sigma applied before flatness scoring.",
-   )
    coverage_strictness = gr.Slider(
        label="Coverage strictness",
        value=defaults.coverage_strictness,

@@ -220,49 +285,8 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
        step=0.05,
        info="Lower values avoid visually textured (high-contrast) regions like tracks or debris.",
    )
-
-
-   segmentation_max_side = gr.Slider(
-       label="Segmentation max side (px)",
-       value=defaults.segmentation_max_side,
-       minimum=256,
-       maximum=2048,
-       step=32,
-       info="Largest long-side resolution for running the segmentation model.",
-   )
-   segmentation_score_thresh = gr.Slider(
-       label="Segmentation score threshold",
-       value=defaults.segmentation_score_thresh,
-       minimum=0.1,
-       maximum=0.9,
-       step=0.05,
-       info="Minimum instance confidence from SAM3 to keep a mask.",
-   )
-   segmentation_mask_thresh = gr.Slider(
-       label="Segmentation mask threshold",
-       value=defaults.segmentation_mask_thresh,
-       minimum=0.1,
-       maximum=0.9,
-       step=0.05,
-       info="Pixel probability threshold when binarizing SAM3 masks.",
-   )
-   with gr.Accordion("Camera settings", open=False):
-       gr.Markdown("Configure capture assumptions for footprint sizing.")
-       altitude_m = gr.Slider(
-           label="Camera altitude (m)",
-           value=defaults.altitude_m,
-           minimum=10,
-           maximum=1500,
-           step=5,
-       )
-       fov_deg = gr.Slider(
-           label="Camera FOV (deg)",
-           value=defaults.fov_deg,
-           minimum=30,
-           maximum=150,
-           step=1,
-       )
-   with gr.Accordion("Layer overlays", open=False):
        gr.Markdown("Toggle visualization layers.")
        base_view = gr.Dropdown(
            label="Base view",
@@ -274,42 +298,54 @@ def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
            "Depth gradient",
            "Gradient mask",
            "Water mask",
            "Safety score",
            "Safety heatmap overlay",
        ],
    )
-
-       label="
-
-       info="Transparency for the safety highlight overlay.",
-   )
-   hazard_on = gr.Checkbox(
-       label="Hazard highlights",
-       value=True,
-       info="Show excluded segmentation masks (water/roads/roofs) in red.",
-   )
-   hazard_opacity = gr.Slider(
-       label="Hazard opacity",
-       value=0.2,
-       minimum=0.0,
-       maximum=1.0,
-       step=0.05,
-       info="Transparency for the hazard overlay.",
-   )
    grad_on = gr.Checkbox(label="Depth gradient", value=False, info="Gradient magnitude overlay.")
-   flat_on = gr.Checkbox(label="Flatness map", value=False, info="Flatness std map overlay.")
    spot_on = gr.Checkbox(label="Show landing spot", value=True)
-
    main_view = gr.Image(
        label="Preview",
        height=720,
        elem_id="main-preview",
        show_fullscreen_button=False,
    )
    status_card = gr.Markdown("**Status:** Awaiting analysis.")
    metrics_card = gr.Markdown("No metrics yet. Run the analyzer to get results.")
@@ -321,14 +357,16 @@
    use_water_mask,
    use_road_mask,
    use_roof_mask,
    water_prompt,
    road_prompt,
    altitude_m,
    fov_deg,
    clearance_factor,
    process_res_cap,
-   depth_smoothing_base,
    segmentation_max_side,
    segmentation_score_thresh,
    segmentation_mask_thresh,
    coverage_strictness,

@@ -337,11 +375,11 @@
    texture_threshold,
    base_view,
    heat_on,
-   heat_alpha,
    hazard_on,
-
    grad_on,
-   flat_on,
    spot_on,
):
    if not input_path:

@@ -358,14 +396,16 @@
    use_water_mask,
    use_road_mask,
    use_roof_mask,
    water_prompt,
    road_prompt,
    altitude_m,
    fov_deg,
    clearance_factor,
    process_res_cap,
-   depth_smoothing_base,
    segmentation_max_side,
    segmentation_score_thresh,
    segmentation_mask_thresh,
    coverage_strictness,

@@ -383,11 +423,15 @@
    imgs,
    base_view,
    heat_on,
-
    hazard_on,
-
    grad_on,
-
    spot_on=spot_on,
)
return imgs, composed, _format_status(summary), _format_metrics(summary)

@@ -400,14 +444,16 @@
    use_water_mask,
    use_road_mask,
    use_roof_mask,
    water_prompt,
    road_prompt,
    altitude_m,
    fov_deg,
    clearance_factor,
    process_res_cap,
-   depth_smoothing_base,
    segmentation_max_side,
    segmentation_score_thresh,
    segmentation_mask_thresh,
    coverage_strictness,

@@ -416,11 +462,11 @@
    texture_threshold,
    base_view,
    heat_on,
-   heat_opacity,
    hazard_on,
-
    grad_on,
-   flat_on,
    spot_on,
]

@@ -435,11 +481,11 @@
    images_state,
    base_view,
    heat_on,
-   heat_opacity,
    hazard_on,
-
    grad_on,
-   flat_on,
    spot_on,
]

@@ -447,11 +493,11 @@
    images_state_val,
    base_view_val,
    heat_on_val,
-   heat_opacity_val,
    hazard_on_val,
-
    grad_on_val,
-   flat_on_val,
    spot_on_val,
):
    if not images_state_val:

@@ -460,11 +506,15 @@
    images_state_val,
    base_view_val,
    heat_on_val,
-
    hazard_on_val,
-
    grad_on_val,
-
    spot_on_val,
)
return images_state_val, composed, gr.update(), gr.update()

@@ -477,8 +527,10 @@
overlay_toggle_controls = (
    heat_on,
    hazard_on,
    grad_on,
-   flat_on,
    spot_on,
)
for control in overlay_toggle_controls:

@@ -487,170 +539,7 @@
    inputs=overlay_inputs,
    outputs=[images_state, main_view, status_card, metrics_card],
)
-
-   slider.release(
-       fn=update_overlays_only,
-       inputs=overlay_inputs,
-       outputs=[images_state, main_view, status_card, metrics_card],
-   )
-
-   model_inputs = [
-       images_state,
-       input_path,
-       footprint_m,
-       std_thresh,
-       grad_thresh,
-       use_water_mask,
-       use_road_mask,
-       use_roof_mask,
-       water_prompt,
-       road_prompt,
-       altitude_m,
-       fov_deg,
-       clearance_factor,
-       process_res_cap,
-       depth_smoothing_base,
-       segmentation_max_side,
-       segmentation_score_thresh,
-       segmentation_mask_thresh,
-       coverage_strictness,
-       model_id,
-       base_view,
-       heat_on,
-       heat_opacity,
-       hazard_on,
-       hazard_opacity,
-       grad_on,
-       flat_on,
-       spot_on,
-       openness_weight,
|
| 527 |
-
texture_threshold,
|
| 528 |
-
]
|
| 529 |
-
|
| 530 |
-
def update_preview_ui(*vals):
|
| 531 |
-
(
|
| 532 |
-
images_state_val,
|
| 533 |
-
input_path_val,
|
| 534 |
-
footprint_m_val,
|
| 535 |
-
std_thresh_val,
|
| 536 |
-
grad_thresh_val,
|
| 537 |
-
use_water_mask_val,
|
| 538 |
-
use_road_mask_val,
|
| 539 |
-
use_roof_mask_val,
|
| 540 |
-
water_prompt_val,
|
| 541 |
-
road_prompt_val,
|
| 542 |
-
altitude_m_val,
|
| 543 |
-
fov_deg_val,
|
| 544 |
-
clearance_factor_val,
|
| 545 |
-
process_res_cap_val,
|
| 546 |
-
depth_smoothing_base_val,
|
| 547 |
-
segmentation_max_side_val,
|
| 548 |
-
segmentation_score_thresh_val,
|
| 549 |
-
segmentation_mask_thresh_val,
|
| 550 |
-
coverage_strictness_val,
|
| 551 |
-
model_id_val,
|
| 552 |
-
base_view_val,
|
| 553 |
-
heat_on_val,
|
| 554 |
-
heat_opacity_val,
|
| 555 |
-
hazard_on_val,
|
| 556 |
-
hazard_opacity_val,
|
| 557 |
-
grad_on_val,
|
| 558 |
-
flat_on_val,
|
| 559 |
-
spot_on_val,
|
| 560 |
-
openness_weight_val,
|
| 561 |
-
texture_threshold_val,
|
| 562 |
-
) = vals
|
| 563 |
-
path = Path(str(input_path_val))
|
| 564 |
-
imgs_val = images_state_val
|
| 565 |
-
summary_val: AnalysisSummary | None = None
|
| 566 |
-
if path.exists() and path.suffix.lower() in IMAGE_EXTS:
|
| 567 |
-
request = _make_request(
|
| 568 |
-
footprint_m_val,
|
| 569 |
-
std_thresh_val,
|
| 570 |
-
grad_thresh_val,
|
| 571 |
-
use_water_mask_val,
|
| 572 |
-
use_road_mask_val,
|
| 573 |
-
use_roof_mask_val,
|
| 574 |
-
water_prompt_val,
|
| 575 |
-
road_prompt_val,
|
| 576 |
-
altitude_m_val,
|
| 577 |
-
fov_deg_val,
|
| 578 |
-
clearance_factor_val,
|
| 579 |
-
process_res_cap_val,
|
| 580 |
-
depth_smoothing_base_val,
|
| 581 |
-
segmentation_max_side_val,
|
| 582 |
-
segmentation_score_thresh_val,
|
| 583 |
-
segmentation_mask_thresh_val,
|
| 584 |
-
coverage_strictness_val,
|
| 585 |
-
model_id_val,
|
| 586 |
-
openness_weight_val,
|
| 587 |
-
texture_threshold_val,
|
| 588 |
-
)
|
| 589 |
-
try:
|
| 590 |
-
result = analyzer.process_path(path, request)
|
| 591 |
-
imgs_val = result.images
|
| 592 |
-
summary_val = result.summary
|
| 593 |
-
except Exception:
|
| 594 |
-
imgs_val = images_state_val
|
| 595 |
-
summary_val = None
|
| 596 |
-
if not imgs_val:
|
| 597 |
-
return images_state_val, gr.update(), gr.update(), gr.update()
|
| 598 |
-
composed = compose_view(
|
| 599 |
-
imgs_val,
|
| 600 |
-
base_view_val,
|
| 601 |
-
heat_on_val,
|
| 602 |
-
heat_opacity_val,
|
| 603 |
-
hazard_on_val,
|
| 604 |
-
hazard_opacity_val,
|
| 605 |
-
grad_on_val,
|
| 606 |
-
flat_on_val,
|
| 607 |
-
spot_on_val,
|
| 608 |
-
)
|
| 609 |
-
if summary_val is None:
|
| 610 |
-
status_txt = gr.update()
|
| 611 |
-
metrics_txt = gr.update()
|
| 612 |
-
else:
|
| 613 |
-
status_txt = _format_status(summary_val)
|
| 614 |
-
metrics_txt = _format_metrics(summary_val)
|
| 615 |
-
return imgs_val, composed, status_txt, metrics_txt
|
| 616 |
-
|
| 617 |
-
for control in (
|
| 618 |
-
input_path,
|
| 619 |
-
footprint_m,
|
| 620 |
-
std_thresh,
|
| 621 |
-
grad_thresh,
|
| 622 |
-
use_water_mask,
|
| 623 |
-
use_road_mask,
|
| 624 |
-
use_roof_mask,
|
| 625 |
-
altitude_m,
|
| 626 |
-
fov_deg,
|
| 627 |
-
clearance_factor,
|
| 628 |
-
model_id,
|
| 629 |
-
openness_weight,
|
| 630 |
-
texture_threshold,
|
| 631 |
-
segmentation_max_side,
|
| 632 |
-
segmentation_score_thresh,
|
| 633 |
-
segmentation_mask_thresh,
|
| 634 |
-
):
|
| 635 |
-
control.change(
|
| 636 |
-
fn=update_preview_ui, inputs=model_inputs, outputs=[images_state, main_view, status_card, metrics_card]
|
| 637 |
-
)
|
| 638 |
-
for prompt_control in (water_prompt, road_prompt):
|
| 639 |
-
prompt_control.submit(
|
| 640 |
-
fn=update_preview_ui,
|
| 641 |
-
inputs=model_inputs,
|
| 642 |
-
outputs=[images_state, main_view, status_card, metrics_card],
|
| 643 |
-
)
|
| 644 |
-
coverage_strictness.release(
|
| 645 |
-
fn=update_preview_ui,
|
| 646 |
-
inputs=model_inputs,
|
| 647 |
-
outputs=[images_state, main_view, status_card, metrics_card],
|
| 648 |
-
)
|
| 649 |
-
depth_smoothing_base.release(
|
| 650 |
-
fn=update_preview_ui,
|
| 651 |
-
inputs=model_inputs,
|
| 652 |
-
outputs=[images_state, main_view, status_card, metrics_card],
|
| 653 |
-
)
|
| 654 |
|
| 655 |
return demo
|
| 656 |
|
|
|
|
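After this change the call sites pass fixed literals (0.2 for overlay alpha, False for the dropped flatness toggles) positionally, so the meaning of each argument depends entirely on its position. A hedged sketch of how such a growing argument list could be bundled to avoid misalignment — `OverlayOptions` is a hypothetical helper for illustration, not part of the app:

```python
from dataclasses import dataclass


@dataclass
class OverlayOptions:
    """Hypothetical bundle for compose_view's toggle/alpha arguments,
    so literals like 0.2 or False cannot be misaligned positionally."""

    heat_on: bool = True
    heat_alpha: float = 0.2
    hazard_on: bool = False
    hazard_alpha: float = 0.2
    water_on: bool = True
    road_on: bool = True
    tree_on: bool = True
    grad_on: bool = False
    flat_on: bool = False
    flat_heat_on: bool = False
    spot_on: bool = True


# Only the flags that differ from the defaults need naming at the call site.
opts = OverlayOptions(hazard_on=True)
assert opts.heat_alpha == 0.2 and opts.hazard_on
```

Keyword-only parameters on `compose_view` itself would achieve the same safety without a wrapper type.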
 import gradio as gr
 
+from .config import DEFAULT_ANALYZER_SETTINGS, IMAGE_EXTS
 from .data_sources import list_all_data_inputs
 from .safety import AnalysisRequest, AnalysisSummary, SafetyAnalyzer
 from .visualization import compose_view

     use_water_mask,
     use_road_mask,
     use_roof_mask,
+    use_tree_mask,
     water_prompt,
     road_prompt,
+    tree_prompt,
     altitude_m,
     fov_deg,
     clearance_factor,
     process_res_cap,
     segmentation_max_side,
+    segmentation_model_id,
     segmentation_score_thresh,
     segmentation_mask_thresh,
     coverage_strictness,

         use_water_mask=use_water_mask,
         use_road_mask=use_road_mask,
         use_roof_mask=use_roof_mask,
+        use_tree_mask=use_tree_mask,
         water_prompt=water_prompt,
         road_prompt=road_prompt,
+        tree_prompt=tree_prompt,
         altitude_m=altitude_m,
         fov_deg=fov_deg,
         clearance_factor=clearance_factor,
         process_res_cap=process_res_cap,
+        depth_smoothing_base=0.0,
         segmentation_max_side=segmentation_max_side,
+        segmentation_model_id=segmentation_model_id,
         segmentation_score_thresh=segmentation_score_thresh,
         segmentation_mask_thresh=segmentation_mask_thresh,
         coverage_strictness=coverage_strictness,

 def build_ui(analyzer: SafetyAnalyzer | None = None) -> gr.Blocks:
     analyzer = analyzer or SafetyAnalyzer()
+    defaults = DEFAULT_ANALYZER_SETTINGS
     data_inputs = list_all_data_inputs()
 
     with gr.Blocks(title="Landing Site Safety Analyzer (VISLOC)") as demo:

         "## Landing Site Safety Analyzer\n"
         "Evaluate VISLOC imagery with DepthAnything3 to spot flat, obstacle-free landing sites."
     )
+    gr.HTML(
+        """
+        <style>
+        #preview-wrap { position: relative; }
+        #preview-wrap .hover-legend {
+            position: absolute;
+            right: 12px;
+            bottom: 12px;
+            background: rgba(0, 0, 0, 0.65);
+            color: #fff;
+            padding: 8px 10px;
+            border-radius: 10px;
+            font-size: 12px;
+            line-height: 1.4;
+            opacity: 0;
+            transition: opacity 0.2s ease;
+            pointer-events: none;
+        }
+        #preview-wrap:hover .hover-legend { opacity: 1; }
+        #preview-wrap .hover-legend .row { display: flex; align-items: center; gap: 6px; margin-bottom: 4px; }
+        #preview-wrap .hover-legend .swatch { width: 12px; height: 12px; border-radius: 3px; display: inline-block; }
+        </style>
+        """
+    )
     images_state = gr.State({})
 
     with gr.Row(equal_height=False):

             ],
             info="Select a pretrained checkpoint.",
         )
+        process_res_cap = gr.Slider(
+            label="Processing max side (px)",
+            value=defaults.process_res_cap,
+            minimum=512,
+            maximum=2048,
+            step=32,
+            info="Global resolution cap for depth/analysis steps.",
+        )
         footprint_m = gr.Slider(
             label="Landing footprint (meters)",
             value=defaults.footprint_m,

             step=0.01,
             info="Lower suppresses slopes/edges; higher tolerates tilt.",
         )
+        use_water_mask = gr.Checkbox(
+            label="Exclude water (segmentation)",
+            value=True,
+            info="Runs SAM3 segmentation with water prompts.",
+        )
         water_prompt = gr.Textbox(
             label="Water prompt",
             value=defaults.water_prompt,
+            placeholder="e.g., water",
         )
         use_road_mask = gr.Checkbox(label="Exclude roads (segmentation)", value=True)
         road_prompt = gr.Textbox(
             label="Road prompt",
             value=defaults.road_prompt,
+            placeholder="e.g., road",
         )
+        use_tree_mask = gr.Checkbox(label="Exclude trees (segmentation)", value=True)
+        tree_prompt = gr.Textbox(
+            label="Tree prompt",
+            value=defaults.tree_prompt,
+            placeholder="e.g., tree",
+        )
+        use_roof_mask = gr.Checkbox(label="Exclude rooftops (depth-based)", value=True)
         with gr.Row():
             run_btn = gr.Button("Run", variant="primary")
             stop_btn = gr.Button("Stop", variant="stop")
+        with gr.Accordion("Segmentation settings", open=False):
+            gr.Markdown("Control SAM3/Mask2Former and thresholds.")
+            segmentation_model_id = gr.Dropdown(
+                label="Segmentation model",
+                value=defaults.segmentation_model_id,
+                choices=[
+                    ("SAM3", "facebook/sam3"),
+                ],
+                info="Choose segmentation backbone for water/road masks (SAM3 only).",
+            )
+            segmentation_max_side = gr.Slider(
+                label="Segmentation max side (px)",
+                value=defaults.segmentation_max_side,
+                minimum=256,
+                maximum=2048,
+                step=32,
+                info="Largest long-side resolution for running the segmentation model.",
+            )
+            segmentation_score_thresh = gr.Slider(
+                label="Segmentation score threshold",
+                value=defaults.segmentation_score_thresh,
+                minimum=0.1,
+                maximum=0.9,
+                step=0.05,
+                info="Minimum instance confidence from SAM3 to keep a mask.",
+            )
+            segmentation_mask_thresh = gr.Slider(
+                label="Segmentation mask threshold",
+                value=defaults.segmentation_mask_thresh,
+                minimum=0.1,
+                maximum=0.9,
+                step=0.05,
+                info="Pixel probability threshold when binarizing SAM3 masks.",
+            )
+        with gr.Accordion("Advanced settings", open=False):
             gr.Markdown("Adjust detail levels and scoring.")
             clearance_factor = gr.Slider(
                 label="Clearance factor",

                 step=0.05,
                 info="Dilate unsafe regions relative to footprint size.",
             )
             coverage_strictness = gr.Slider(
                 label="Coverage strictness",
                 value=defaults.coverage_strictness,

                 step=0.05,
                 info="Lower values avoid visually textured (high-contrast) regions like tracks or debris.",
             )
+            # Plane removal fixed to least squares; toggle removed.
+        with gr.Accordion("Overlay settings", open=False):
             gr.Markdown("Toggle visualization layers.")
             base_view = gr.Dropdown(
                 label="Base view",

                 "Depth gradient",
                 "Gradient mask",
                 "Water mask",
+                "Road mask",
+                "Tree mask",
                 "Safety score",
                 "Safety heatmap overlay",
             ],
         )
+        with gr.Row():
+            heat_on = gr.Checkbox(label="Safety highlight", value=True, info="Show safe landing mask (green).")
+            hazard_on = gr.Checkbox(label="Hazard highlight", value=False, info="Show hazard mask (red).")
+        with gr.Row():
+            water_on = gr.Checkbox(label="Water overlay", value=True, info="Color water masks blue.")
+            road_on = gr.Checkbox(label="Road overlay", value=True, info="Color road masks orange.")
+            tree_on = gr.Checkbox(label="Tree overlay", value=True, info="Color tree masks green.")
         grad_on = gr.Checkbox(label="Depth gradient", value=False, info="Gradient magnitude overlay.")
         spot_on = gr.Checkbox(label="Show landing spot", value=True)
+        with gr.Accordion("Camera settings", open=False):
+            gr.Markdown("Configure capture assumptions for footprint sizing.")
+            altitude_m = gr.Slider(
+                label="Camera altitude (m)",
+                value=defaults.altitude_m,
+                minimum=10,
+                maximum=1500,
+                step=5,
+            )
+            fov_deg = gr.Slider(
+                label="Camera FOV (deg)",
+                value=defaults.fov_deg,
+                minimum=30,
+                maximum=150,
+                step=1,
+            )
+        with gr.Column(scale=2, min_width=520, elem_id="preview-wrap"):
             main_view = gr.Image(
                 label="Preview",
                 height=720,
                 elem_id="main-preview",
                 show_fullscreen_button=False,
             )
+            gr.HTML(
+                """
+                <div class="hover-legend">
+                    <div class="row"><span class="swatch" style="background: linear-gradient(90deg, #1e0a3c 0%, #fcfecf 100%);"></span><span>Continuous safety gradient</span></div>
+                    <div class="row"><span class="swatch" style="background: #00ff00;"></span><span>Safe outline</span></div>
+                    <div class="row"><span class="swatch" style="background: #007aff;"></span><span>Water hazards</span></div>
+                    <div class="row"><span class="swatch" style="background: #ff6200;"></span><span>Road hazards</span></div>
+                </div>
+                """
+            )
             status_card = gr.Markdown("**Status:** Awaiting analysis.")
             metrics_card = gr.Markdown("No metrics yet. Run the analyzer to get results.")
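The camera accordion's defaults (450 m altitude, 90° FOV, matching ARCHITECTURE.md) determine how many ground meters a frame covers, which is what the landing-footprint slider is sized against. Assuming a nadir view and a pinhole camera (an assumption for illustration; this helper is not code from the app), the ground width is 2·h·tan(FOV/2):

```python
import math


def ground_footprint_m(altitude_m: float, fov_deg: float) -> float:
    # Width of ground visible to a straight-down camera: 2 * h * tan(FOV / 2).
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)


# With the defaults (450 m, 90 deg) one frame spans roughly 900 m of ground,
# so a 20 m landing footprint is only a few percent of the image width.
assert abs(ground_footprint_m(450.0, 90.0) - 900.0) < 1e-6
```

Dividing that width by the image width in pixels gives the meters-per-pixel scale used to convert a footprint in meters into a pixel window.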
     use_water_mask,
     use_road_mask,
     use_roof_mask,
+    use_tree_mask,
     water_prompt,
     road_prompt,
+    tree_prompt,
     altitude_m,
     fov_deg,
     clearance_factor,
     process_res_cap,
     segmentation_max_side,
+    segmentation_model_id,
     segmentation_score_thresh,
     segmentation_mask_thresh,
     coverage_strictness,

     texture_threshold,
     base_view,
     heat_on,
     hazard_on,
+    water_on,
+    road_on,
+    tree_on,
     grad_on,
     spot_on,
 ):
     if not input_path:

         use_water_mask,
         use_road_mask,
         use_roof_mask,
+        use_tree_mask,
         water_prompt,
         road_prompt,
+        tree_prompt,
         altitude_m,
         fov_deg,
         clearance_factor,
         process_res_cap,
         segmentation_max_side,
+        segmentation_model_id,
         segmentation_score_thresh,
         segmentation_mask_thresh,
         coverage_strictness,

         imgs,
         base_view,
         heat_on,
+        0.2,
         hazard_on,
+        0.2,
+        water_on,
+        road_on,
+        tree_on,
         grad_on,
+        False,
+        False,
         spot_on=spot_on,
     )
     return imgs, composed, _format_status(summary), _format_metrics(summary)

     use_water_mask,
     use_road_mask,
     use_roof_mask,
+    use_tree_mask,
     water_prompt,
     road_prompt,
+    tree_prompt,
     altitude_m,
     fov_deg,
     clearance_factor,
     process_res_cap,
     segmentation_max_side,
+    segmentation_model_id,
     segmentation_score_thresh,
     segmentation_mask_thresh,
     coverage_strictness,

     texture_threshold,
     base_view,
     heat_on,
     hazard_on,
+    water_on,
+    road_on,
+    tree_on,
     grad_on,
     spot_on,
 ]

     images_state,
     base_view,
     heat_on,
     hazard_on,
+    water_on,
+    road_on,
+    tree_on,
     grad_on,
     spot_on,
 ]

     images_state_val,
     base_view_val,
     heat_on_val,
     hazard_on_val,
+    water_on_val,
+    road_on_val,
+    tree_on_val,
     grad_on_val,
     spot_on_val,
 ):
     if not images_state_val:

         images_state_val,
         base_view_val,
         heat_on_val,
+        0.2,
         hazard_on_val,
+        0.2,
+        water_on_val,
+        road_on_val,
+        tree_on_val,
         grad_on_val,
+        False,
+        False,
         spot_on_val,
     )
     return images_state_val, composed, gr.update(), gr.update()

 overlay_toggle_controls = (
     heat_on,
     hazard_on,
+    water_on,
+    road_on,
+    tree_on,
     grad_on,
     spot_on,
 )
 for control in overlay_toggle_controls:

         inputs=overlay_inputs,
         outputs=[images_state, main_view, status_card, metrics_card],
     )
+    # Opacity sliders removed; overlays now use fixed alpha.
 
     return demo
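The diff wires every overlay toggle to the same `update_overlays_only` callback with shared inputs and outputs, instead of the removed per-control preview wiring. The fan-out shape can be sketched without Gradio — `Control` below is a hypothetical stand-in for a Gradio component, not the real API:

```python
# Minimal sketch of "one callback, many trigger controls" wiring.
class Control:
    """Hypothetical stand-in for a Gradio component with a .change() hook."""

    def __init__(self) -> None:
        self._handlers = []

    def change(self, fn) -> None:
        # Register fn to run whenever this control's value changes.
        self._handlers.append(fn)

    def fire(self, value):
        # Simulate a user interaction.
        return [fn(value) for fn in self._handlers]


calls = []


def update_overlays_only(value):
    calls.append(value)
    return f"composed:{value}"


toggles = [Control() for _ in range(3)]
for control in toggles:  # same loop shape as the overlay_toggle_controls wiring
    control.change(update_overlays_only)

toggles[0].fire(True)
toggles[2].fire(False)
assert calls == [True, False]
```

Because every toggle shares one handler and one output list, adding a new overlay checkbox only requires appending it to the tuple and to the handler's inputs.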
app/visualization.py
CHANGED
|
@@ -41,6 +41,30 @@ def make_safety_heatmap(
     return safe_img, hazard_img, score_gray
 
 
 def build_result_layers(
     image: Image.Image,
     depth_raw: np.ndarray,

@@ -54,7 +78,7 @@
     water_mask: np.ndarray | None,
     road_mask: np.ndarray | None,
     roof_mask: np.ndarray | None,
-
     hazard_mask: np.ndarray,
 ) -> Dict[str, Image.Image]:
     depth_vis = Image.fromarray(visualize_depth(depth_raw, cmap="Spectral")).resize(

@@ -76,9 +100,21 @@
     water_mask_img = _mask_to_image(water_mask)
     road_mask_img = _mask_to_image(road_mask)
     roof_mask_img = _mask_to_image(roof_mask)
-
 
     safe_overlay, hazard_overlay, heat_gray = make_safety_heatmap(image, safe_mask, hazard_mask, risk_map)
 
     spot_overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
     draw = ImageDraw.Draw(spot_overlay)

@@ -123,6 +159,17 @@
     box_draw = ImageDraw.Draw(overlay_box)
     fill = (0, 102, 255, 60)
     outline = (0, 102, 255, 255)
     box_draw.rectangle((bx0, by0, bx1, by1), fill=fill, outline=outline, width=4)
     box_draw.line((cx_draw, by0, cx_draw, by1), fill=outline, width=2)
     box_draw.line((bx0, cy_draw, bx1, cy_draw), fill=outline, width=2)

@@ -138,9 +185,13 @@
     "Water mask": water_mask_img,
     "Road mask": road_mask_img,
     "Roof mask": roof_mask_img,
-    "
     "Safety heatmap overlay": safe_overlay,
     "Hazard overlay": hazard_overlay,
     "Safety score": heat_gray,
     "Landing spot overlay": Image.alpha_composite(spot_overlay, overlay_box),
     }

@@ -153,8 +204,12 @@
     heat_alpha: float,
     hazard_on: bool,
     hazard_alpha: float,
     grad_on: bool,
     flat_on: bool,
     spot_on: bool,
 ) -> Image.Image:
     import gradio as gr

@@ -170,24 +225,54 @@
     out = base.convert("RGBA")
 
     if heat_on and "Safety heatmap overlay" in images_dict:
-
-        if
-
         alpha_factor = max(0.0, min(1.0, heat_alpha))
-        alpha_channel = np.array(
         alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
-
-        out = Image.alpha_composite(out,
 
-    if hazard_on and "
-        hazard = images_dict
         if hazard is not None:
-
-
-
-
-
-            out = Image.alpha_composite(out,
 
     if grad_on and "Depth gradient" in images_dict:
         grad_img = images_dict["Depth gradient"]

@@ -203,6 +288,12 @@
     flat_rgba.putalpha(int(FLAT_ALPHA * 255))
     out = Image.alpha_composite(out, flat_rgba)
 
     if spot_on and "Landing spot overlay" in images_dict:
         spot = images_dict["Landing spot overlay"]
         if spot is not None:
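`compose_view` repeats one alpha-scaling pattern per overlay: convert the layer to RGBA, clamp the user factor to [0, 1], multiply the alpha channel by it, and alpha-composite onto the output. A minimal self-contained sketch of that pattern — `scaled_alpha` is a hypothetical helper name, not a function in the app:

```python
from PIL import Image


def scaled_alpha(overlay: Image.Image, factor: float) -> Image.Image:
    # Clamp the factor, then multiply the overlay's alpha channel by it.
    factor = max(0.0, min(1.0, factor))
    rgba = overlay.convert("RGBA")
    alpha = rgba.getchannel("A").point(lambda a: int(a * factor))
    rgba.putalpha(alpha)
    return rgba


# A fully opaque-ish red layer becomes half as opaque; RGB is untouched.
layer = Image.new("RGBA", (2, 2), (255, 0, 0, 200))
half = scaled_alpha(layer, 0.5)
assert half.getpixel((0, 0)) == (255, 0, 0, 100)
```

Factoring the pattern out like this would collapse the five near-identical branches (safe, hazard, water, road, tree) into one loop over (toggle, layer-name, factor) triples.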
     return safe_img, hazard_img, score_gray
 
 
+def make_flatness_heatmap(std_map_vis: np.ndarray, target_size: tuple[int, int]) -> Image.Image:
+    # Normalize and map to a simple turbo-like palette
+    std_norm = std_map_vis
+    if std_norm.max() > std_norm.min():
+        std_norm = (std_norm - std_norm.min()) / (np.ptp(std_norm) + 1e-6)
+    cmap = np.array(
+        [
+            [48, 18, 59],
+            [65, 68, 135],
+            [42, 120, 142],
+            [34, 168, 132],
+            [122, 209, 81],
+            [253, 231, 36],
+        ],
+        dtype=np.float32,
+    )
+    idx = np.clip((std_norm * (len(cmap) - 1)).astype(np.int32), 0, len(cmap) - 1)
+    heat_rgb = cmap[idx]
+    heat_overlay = np.zeros((std_norm.shape[0], std_norm.shape[1], 4), dtype=np.uint8)
+    heat_overlay[..., :3] = heat_rgb.astype(np.uint8)
+    heat_overlay[..., 3] = (np.clip(std_norm, 0.0, 1.0) * 160).astype(np.uint8)
+    return Image.fromarray(heat_overlay, mode="RGBA").resize(target_size, resample=Image.BILINEAR)
+
+
 def build_result_layers(
     image: Image.Image,
     depth_raw: np.ndarray,

     water_mask: np.ndarray | None,
     road_mask: np.ndarray | None,
     roof_mask: np.ndarray | None,
+    tree_mask: np.ndarray | None,
     hazard_mask: np.ndarray,
 ) -> Dict[str, Image.Image]:
     depth_vis = Image.fromarray(visualize_depth(depth_raw, cmap="Spectral")).resize(

     water_mask_img = _mask_to_image(water_mask)
     road_mask_img = _mask_to_image(road_mask)
     roof_mask_img = _mask_to_image(roof_mask)
+    tree_mask_img = _mask_to_image(tree_mask)
+
+    def _color_overlay(mask: np.ndarray | None, color: tuple[int, int, int]) -> Image.Image:
+        if mask is None:
+            return Image.new("RGBA", image.size, (0, 0, 0, 0))
+        m = Image.fromarray((mask.astype(np.uint8) * 255)).resize(image.size, resample=Image.NEAREST)
+        rgba = Image.new("RGBA", image.size, color + (255,))
+        return Image.composite(rgba, Image.new("RGBA", image.size, (0, 0, 0, 0)), m)
+
+    water_hazard_overlay = _color_overlay(water_mask, (0, 122, 255))  # blue
+    road_hazard_overlay = _color_overlay(road_mask, (255, 98, 0))  # orange/red
+    tree_hazard_overlay = _color_overlay(tree_mask, (34, 139, 34))  # forest green
 
     safe_overlay, hazard_overlay, heat_gray = make_safety_heatmap(image, safe_mask, hazard_mask, risk_map)
+    flat_heat_overlay = make_flatness_heatmap(std_map_vis, image.size)
 
     spot_overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
     draw = ImageDraw.Draw(spot_overlay)

     box_draw = ImageDraw.Draw(overlay_box)
     fill = (0, 102, 255, 60)
     outline = (0, 102, 255, 255)
+
+    # Crosshair sized 3x the landing box for clearer focus.
+    cross_half = int(round(side_img * 1.5))
+    hx0 = max(0, cx_draw - cross_half)
+    hx1 = min(image.width - 1, cx_draw + cross_half)
+    hy0 = max(0, cy_draw - cross_half)
+    hy1 = min(image.height - 1, cy_draw + cross_half)
+    cross_width = 4
+    draw.line((hx0, cy_draw, hx1, cy_draw), fill=outline, width=cross_width)
+    draw.line((cx_draw, hy0, cx_draw, hy1), fill=outline, width=cross_width)
+
     box_draw.rectangle((bx0, by0, bx1, by1), fill=fill, outline=outline, width=4)
     box_draw.line((cx_draw, by0, cx_draw, by1), fill=outline, width=2)
     box_draw.line((bx0, cy_draw, bx1, cy_draw), fill=outline, width=2)

     "Water mask": water_mask_img,
     "Road mask": road_mask_img,
     "Roof mask": roof_mask_img,
+    "Tree mask": tree_mask_img,
     "Safety heatmap overlay": safe_overlay,
     "Hazard overlay": hazard_overlay,
+    "Water hazard overlay": water_hazard_overlay,
+    "Road hazard overlay": road_hazard_overlay,
+    "Tree hazard overlay": tree_hazard_overlay,
+    "Flatness heatmap overlay": flat_heat_overlay,
     "Safety score": heat_gray,
     "Landing spot overlay": Image.alpha_composite(spot_overlay, overlay_box),
     }

     heat_alpha: float,
     hazard_on: bool,
     hazard_alpha: float,
+    water_on: bool,
+    road_on: bool,
+    tree_on: bool,
     grad_on: bool,
     flat_on: bool,
+    flat_heat_on: bool,
     spot_on: bool,
 ) -> Image.Image:
     import gradio as gr

     out = base.convert("RGBA")
 
     if heat_on and "Safety heatmap overlay" in images_dict:
+        safe_overlay = images_dict["Safety heatmap overlay"]
+        if safe_overlay is not None:
+            safe_rgba = safe_overlay.convert("RGBA")
             alpha_factor = max(0.0, min(1.0, heat_alpha))
+            alpha_channel = np.array(safe_rgba.getchannel("A"), dtype=np.uint8)
             alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
+            safe_rgba.putalpha(Image.fromarray(alpha_channel, mode="L"))
+            out = Image.alpha_composite(out, safe_rgba)
 
+    if hazard_on and "Hazard overlay" in images_dict:
+        hazard = images_dict.get("Hazard overlay")
         if hazard is not None:
+            hazard_rgba = hazard.convert("RGBA")
+            alpha_factor = max(0.0, min(1.0, hazard_alpha))
+            alpha_channel = np.array(hazard_rgba.getchannel("A"), dtype=np.uint8)
+            alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
+            hazard_rgba.putalpha(Image.fromarray(alpha_channel, mode="L"))
+            out = Image.alpha_composite(out, hazard_rgba)
+
+    if water_on and "Water hazard overlay" in images_dict:
+        water = images_dict.get("Water hazard overlay")
+        if water is not None:
+            water_rgba = water.convert("RGBA")
+            alpha_factor = max(0.0, min(1.0, hazard_alpha))
+            alpha_channel = np.array(water_rgba.getchannel("A"), dtype=np.uint8)
+            alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
+            water_rgba.putalpha(Image.fromarray(alpha_channel, mode="L"))
+            out = Image.alpha_composite(out, water_rgba)
+
+    if road_on and "Road hazard overlay" in images_dict:
+        road = images_dict.get("Road hazard overlay")
+        if road is not None:
+            road_rgba = road.convert("RGBA")
+            alpha_factor = max(0.0, min(1.0, hazard_alpha))
+            alpha_channel = np.array(road_rgba.getchannel("A"), dtype=np.uint8)
+            alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
+            road_rgba.putalpha(Image.fromarray(alpha_channel, mode="L"))
+            out = Image.alpha_composite(out, road_rgba)
+
+    if tree_on and "Tree hazard overlay" in images_dict:
+        tree = images_dict.get("Tree hazard overlay")
+        if tree is not None:
+            tree_rgba = tree.convert("RGBA")
+            alpha_factor = max(0.0, min(1.0, hazard_alpha))
+            alpha_channel = np.array(tree_rgba.getchannel("A"), dtype=np.uint8)
+            alpha_channel = (alpha_channel.astype(np.float32) * alpha_factor).astype(np.uint8)
+            tree_rgba.putalpha(Image.fromarray(alpha_channel, mode="L"))
+            out = Image.alpha_composite(out, tree_rgba)
 
     if grad_on and "Depth gradient" in images_dict:
         grad_img = images_dict["Depth gradient"]

     flat_rgba.putalpha(int(FLAT_ALPHA * 255))
     out = Image.alpha_composite(out, flat_rgba)
 
+    if flat_heat_on and "Flatness heatmap overlay" in images_dict:
+        flat_heat = images_dict["Flatness heatmap overlay"]
+        if flat_heat is not None:
+            flat_heat_rgba = flat_heat.convert("RGBA")
+            out = Image.alpha_composite(out, flat_heat_rgba)
+
     if spot_on and "Landing spot overlay" in images_dict:
         spot = images_dict["Landing spot overlay"]
         if spot is not None:
|
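Each hazard layer in the compositing code above follows the same pattern: scale the overlay's alpha channel by a user-set opacity, write it back with `putalpha`, then `Image.alpha_composite` onto the running canvas. A minimal standalone sketch of that pattern, using synthetic 8×8 images rather than the app's real overlays:

```python
import numpy as np
from PIL import Image

base = Image.new("RGBA", (8, 8), (0, 0, 0, 255))        # opaque black canvas
overlay = Image.new("RGBA", (8, 8), (255, 0, 0, 255))   # fully opaque red hazard

# Scale the overlay's alpha channel, as each hazard branch does.
alpha_factor = max(0.0, min(1.0, 0.5))
alpha = np.array(overlay.getchannel("A"), dtype=np.uint8)
alpha = (alpha.astype(np.float32) * alpha_factor).astype(np.uint8)
overlay.putalpha(Image.fromarray(alpha, mode="L"))

out = Image.alpha_composite(base, overlay)  # red blended at ~50% over black
```

Scaling the alpha channel (rather than passing an opacity to `alpha_composite`, which takes none) preserves any per-pixel transparency the overlay already has.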
app/water.py
ADDED
@@ -0,0 +1,76 @@

from __future__ import annotations

import cv2
import numpy as np


def heuristic_water_mask(
    rgb_np: np.ndarray,
    grad_norm: np.ndarray,
    texture_norm: np.ndarray | None = None,
    max_side: int = 512,
) -> np.ndarray | None:
    """Cheap water detector using color, low texture, and flat depth cues."""
    if rgb_np is None or grad_norm is None:
        return None
    h, w = rgb_np.shape[:2]
    if h == 0 or w == 0:
        return None

    scale = min(1.0, float(max_side) / float(max(h, w)))
    if scale < 1.0:
        new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
        rgb_small = cv2.resize(rgb_np, new_size, interpolation=cv2.INTER_AREA)
        grad_small = cv2.resize(grad_norm, new_size, interpolation=cv2.INTER_LINEAR)
        tex_small = (
            cv2.resize(texture_norm, new_size, interpolation=cv2.INTER_LINEAR) if texture_norm is not None else None
        )
    else:
        rgb_small = rgb_np
        grad_small = grad_norm
        tex_small = texture_norm

    hsv = cv2.cvtColor(rgb_small, cv2.COLOR_RGB2HSV)
    h_ch, s_ch, v_ch = cv2.split(hsv)

    hue_mask = (h_ch >= 80) & (h_ch <= 140)  # blues/cyans
    sat_mask = s_ch > 40
    val_mask = v_ch > 40
    color_mask = hue_mask & sat_mask & val_mask

    gray = cv2.cvtColor(rgb_small, cv2.COLOR_RGB2GRAY).astype(np.float32) / 255.0
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    texture = np.sqrt(gx * gx + gy * gy)
    texture = cv2.GaussianBlur(texture, (0, 0), sigmaX=1.2, sigmaY=1.2)
    if tex_small is None:
        if texture.max() > texture.min():
            tex_norm = (texture - texture.min()) / (np.ptp(texture) + 1e-6)
        else:
            tex_norm = np.zeros_like(texture)
    else:
        tex_norm = np.clip(tex_small.astype(np.float32), 0.0, 1.0)
    low_texture = tex_norm < 0.35

    depth_flat = grad_small < 0.08

    mask = color_mask & low_texture & depth_flat

    mask_uint = mask.astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    try:
        mask_uint = cv2.morphologyEx(mask_uint, cv2.MORPH_OPEN, kernel)
        mask_uint = cv2.morphologyEx(mask_uint, cv2.MORPH_CLOSE, kernel)
        mask_uint = cv2.dilate(mask_uint, kernel, iterations=1)
    except Exception:
        pass

    if mask_uint.max() == 0:
        return None

    if (rgb_small.shape[0], rgb_small.shape[1]) != (h, w):
        mask_uint = cv2.resize(mask_uint, (w, h), interpolation=cv2.INTER_NEAREST)
    return mask_uint.astype(bool)


__all__ = ["heuristic_water_mask"]
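`heuristic_water_mask` labels a pixel as water only when three cheap cues all agree: bluish HSV color, low normalized texture (`< 0.35`), and a flat depth gradient (`< 0.08`). A toy numpy-only sketch of the texture normalization and cue combination, on synthetic 1-D arrays rather than real imagery:

```python
import numpy as np

# Normalize a toy texture response to [0, 1], as heuristic_water_mask
# does when no precomputed texture map is supplied.
texture = np.array([0.0, 0.2, 0.4, 0.8], dtype=np.float32)
tex_norm = (texture - texture.min()) / (np.ptp(texture) + 1e-6)
low_texture = tex_norm < 0.35          # [True, True, False, False]

# Synthetic color and depth-flatness cues for the same four "pixels".
color_mask = np.array([True, True, False, False])
depth_flat = np.array([True, True, True, False])

water = color_mask & low_texture & depth_flat  # all three cues must agree
```

Requiring the conjunction keeps the heuristic conservative: a bright blue roof (high texture) or a shadowed field (flat but not blue) fails at least one gate.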
curated_gradio_app.py
ADDED
@@ -0,0 +1,49 @@

#!/usr/bin/env python3
"""Launch the curated (precomputed) Landing Site Safety Gradio demo."""

from __future__ import annotations

import os

from app.curated_ui import build_curated_ui


def main() -> None:
    index_override = os.getenv("CURATED_INDEX_PATH")
    demo = build_curated_ui(index_override)

    use_queue = os.getenv("DA_USE_QUEUE")
    use_queue_flag = False if use_queue is None else use_queue.lower() not in {"0", "false", "no"}
    share = os.getenv("DA_SHARE")
    share_flag = False if share is None else share.lower() not in {"0", "false", "no"}
    server_port_str = os.getenv("GRADIO_SERVER_PORT")
    server_port = int(server_port_str) if server_port_str else None
    server_port_range = None
    range_env = os.getenv("GRADIO_SERVER_PORT_RANGE")
    if range_env:
        try:
            start_str, end_str = range_env.split(",", 1)
            server_port_range = (int(start_str), int(end_str))
        except ValueError:
            server_port_range = None
    launch_kwargs = {"share": share_flag}
    if server_port is not None:
        launch_kwargs["server_port"] = server_port
    if server_port_range is not None:
        launch_kwargs["server_port_range"] = server_port_range
    if use_queue_flag:
        try:
            demo.queue().launch(**launch_kwargs)
        except TypeError:
            launch_kwargs.pop("server_port_range", None)
            demo.queue().launch(**launch_kwargs)
    else:
        try:
            demo.launch(**launch_kwargs)
        except TypeError:
            launch_kwargs.pop("server_port_range", None)
            demo.launch(**launch_kwargs)


if __name__ == "__main__":
    main()
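The launcher reads boolean flags from the environment with one convention: unset means the default (off), and any value other than `0`/`false`/`no` (case-insensitive) means on. A small sketch extracting that into a helper — `env_flag` is not in the repo, it just restates the inline expressions used for `DA_USE_QUEUE` and `DA_SHARE`:

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Unset -> default; '0'/'false'/'no' (any case) -> False; anything else -> True."""
    val = os.getenv(name)
    if val is None:
        return default
    return val.lower() not in {"0", "false", "no"}
```

So `DA_SHARE=yes python curated_gradio_app.py` enables sharing, while leaving the variable unset keeps the safer default of `False`.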
requirements.txt
ADDED
@@ -0,0 +1,7 @@

gradio>=3.50
numpy
opencv-python
Pillow
pyyaml
torch
transformers>=4.39
scripts/precompute_curated.py
ADDED
@@ -0,0 +1,270 @@

#!/usr/bin/env python3
"""Precompute curated demo outputs for the gallery mode.

Given a sample manifest, this script runs the Landing Site Safety Analyzer on each
image, saves the composed preview and RGB thumbnail, and writes an index.json that
the curated Gradio app can serve instantly (CPU-friendly).
"""

from __future__ import annotations

import argparse
import dataclasses
import json
import sys
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List

import numpy as np

try:
    import yaml  # type: ignore
except ImportError as exc:  # pragma: no cover - dependency shim
    raise SystemExit("pyyaml is required for curated manifest parsing (pip install pyyaml).") from exc
from PIL import Image

# Ensure repository root is on the path so `app` imports work when running the script directly
ROOT = Path(__file__).resolve().parents[1]
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))

from app.config import DEFAULT_ANALYZER_SETTINGS, AnalyzerSettings, IMAGE_EXTS  # type: ignore  # noqa: E402
from app.safety import AnalysisRequest, SafetyAnalyzer  # type: ignore  # noqa: E402
from app.visualization import compose_view  # type: ignore  # noqa: E402


def _load_manifest(path: Path) -> List[Dict[str, Any]]:
    if not path.exists():
        raise FileNotFoundError(f"Manifest not found: {path}")
    with path.open("r") as f:
        data = yaml.safe_load(f)
    samples = data.get("samples") if isinstance(data, dict) else None
    if not samples:
        raise ValueError(f"No samples found in manifest: {path}")
    entries: List[Dict[str, Any]] = []
    for item in samples:
        if not isinstance(item, dict):
            continue
        if "id" not in item or "path" not in item:
            continue
        entries.append(item)
    if not entries:
        raise ValueError(f"Manifest contained no usable entries: {path}")
    return entries


def _analysis_request_from_args(args: argparse.Namespace, source_path: Path) -> AnalysisRequest:
    defaults = DEFAULT_ANALYZER_SETTINGS
    resolve = lambda value, default: default if value is None else value  # noqa: E731

    process_res_cap = int(resolve(args.process_res_cap, defaults.process_res_cap))
    segmentation_max_side = int(resolve(args.segmentation_max_side, defaults.segmentation_max_side))
    return AnalysisRequest(
        footprint_m=float(resolve(args.footprint_m, defaults.footprint_m)),
        std_thresh=float(resolve(args.std_thresh, defaults.std_thresh)),
        grad_thresh=float(resolve(args.grad_thresh, defaults.grad_thresh)),
        use_water_mask=bool(args.use_water_mask),
        use_road_mask=bool(args.use_road_mask),
        use_roof_mask=bool(args.use_roof_mask),
        use_tree_mask=True,
        water_prompt=resolve(args.water_prompt, defaults.water_prompt),
        road_prompt=resolve(args.road_prompt, defaults.road_prompt),
        tree_prompt=resolve(getattr(args, "tree_prompt", None), defaults.tree_prompt),
        altitude_m=float(resolve(args.altitude_m, defaults.altitude_m)),
        fov_deg=float(resolve(args.fov_deg, defaults.fov_deg)),
        clearance_factor=float(resolve(args.clearance_factor, defaults.clearance_factor)),
        process_res_cap=process_res_cap,
        depth_smoothing_base=float(resolve(args.depth_smoothing_base, defaults.depth_smoothing_base)),
        segmentation_model_id=resolve(args.segmentation_model_id, defaults.segmentation_model_id),
        segmentation_max_side=segmentation_max_side,
        segmentation_score_thresh=float(resolve(args.segmentation_score_thresh, defaults.segmentation_score_thresh)),
        segmentation_mask_thresh=float(resolve(args.segmentation_mask_thresh, defaults.segmentation_mask_thresh)),
        coverage_strictness=float(resolve(args.coverage_strictness, defaults.coverage_strictness)),
        model_id=resolve(args.model_id, defaults.model_id),
        openness_weight=float(resolve(args.openness_weight, defaults.openness_weight)),
        texture_threshold=float(resolve(args.texture_threshold, defaults.texture_threshold)),
        source_path=str(source_path),
    )


def _ensure_image(path: Path) -> Path:
    if not path.exists():
        raise FileNotFoundError(f"Sample image missing: {path}")
    if path.suffix.lower() not in IMAGE_EXTS:
        raise ValueError(f"Unsupported image type for curated sample: {path.name}")
    return path


def _save_image(img: Image.Image, path: Path, quality: int = 95) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    save_kwargs: Dict[str, Any] = {}
    if path.suffix.lower() in (".jpg", ".jpeg"):
        save_kwargs["quality"] = quality
        save_kwargs["optimize"] = True
    img.save(path, **save_kwargs)


def _relative_to_base(path: Path, base: Path) -> str:
    try:
        return path.relative_to(base).as_posix()
    except ValueError:
        return path.as_posix()


def _to_builtin(obj: Any) -> Any:
    """Recursively convert numpy/scalar types to JSON-friendly Python types."""
    if isinstance(obj, np.generic):
        return obj.item()
    if isinstance(obj, dict):
        return {k: _to_builtin(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [_to_builtin(v) for v in obj]
    return obj


def precompute_curated(
    manifest_path: Path,
    output_root: Path,
    args: argparse.Namespace,
    base_view: str = "RGB",
    heat_opacity: float = 0.2,
    hazard_opacity: float = 0.2,
) -> Path:
    manifest_entries = _load_manifest(manifest_path)
    output_root.mkdir(parents=True, exist_ok=True)
    analyzer = SafetyAnalyzer()
    index_entries: List[Dict[str, Any]] = []

    for item in manifest_entries:
        sample_id = item.get("id")
        source_path = _ensure_image(Path(item.get("path")))
        title = item.get("title") or sample_id
        description = item.get("description") or ""
        tags = item.get("tags") or []

        request = _analysis_request_from_args(args, source_path)
        print(f"[INFO] Processing {sample_id} -> {source_path}")
        result = analyzer.process_path(source_path, request)
        composed = compose_view(
            result.images,
            base_view=base_view,
            heat_on=True,
            heat_alpha=float(heat_opacity),
            hazard_on=True,
            hazard_alpha=float(hazard_opacity),
            water_on=True,
            road_on=True,
            tree_on=True,
            grad_on=False,
            flat_on=False,
            flat_heat_on=False,
            spot_on=True,
        )

        sample_dir = output_root / sample_id
        rgb_path = sample_dir / "rgb.jpg"
        composed_path = sample_dir / "composed.png"
        summary_path = sample_dir / "summary.json"

        summary_dict = _to_builtin(dataclasses.asdict(result.summary))

        _save_image(result.images["RGB"], rgb_path)
        _save_image(composed, composed_path, quality=98)
        with summary_path.open("w") as f:
            json.dump(summary_dict, f, indent=2)

        entry = {
            "id": sample_id,
            "title": title,
            "description": description,
            "tags": tags,
            "source_path": str(source_path),
            "artifacts": {
                "rgb": _relative_to_base(rgb_path, output_root),
                "composed": _relative_to_base(composed_path, output_root),
                "summary": _relative_to_base(summary_path, output_root),
            },
            "summary": summary_dict,
            "request": _to_builtin(dataclasses.asdict(request)),
        }
        index_entries.append(entry)

    index = {
        "generated_at": datetime.utcnow().isoformat(timespec="seconds") + "Z",
        "num_samples": len(index_entries),
        "output_root": output_root.as_posix(),
        "manifest": manifest_path.as_posix(),
        "samples": index_entries,
    }
    index_path = output_root / "index.json"
    with index_path.open("w") as f:
        json.dump(index, f, indent=2)
    print(f"[DONE] Wrote curated index: {index_path}")
    return index_path


def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Precompute curated demo outputs.")
    p.add_argument(
        "--manifest",
        type=Path,
        default=Path("app/demo_assets/curated/samples.yaml"),
        help="YAML manifest with curated sample definitions.",
    )
    p.add_argument(
        "--output-dir",
        type=Path,
        default=Path("app/demo_assets/curated/build"),
        help="Directory to store curated outputs and index.json.",
    )
    # Analysis controls
    p.add_argument("--model-id", type=str, help="DepthAnything3 model id to use.")
    p.add_argument("--footprint-m", type=float, help="Landing footprint size in meters.")
    p.add_argument("--std-thresh", type=float, help="Flatness threshold.")
    p.add_argument("--grad-thresh", type=float, help="Gradient threshold.")
    p.add_argument("--coverage-strictness", type=float, help="Coverage strictness for safe areas.")
    p.add_argument("--openness-weight", type=float, help="Weight for distance-from-hazards when scoring.")
    p.add_argument("--texture-threshold", type=float, help="Texture tolerance.")
    p.add_argument("--clearance-factor", type=float, help="Clearance dilation multiplier.")
    p.add_argument("--process-res-cap", type=int, help="Depth max resolution (long side).")
    p.add_argument("--depth-smoothing-base", type=float, help="Base sigma for depth smoothing.")
    p.add_argument("--segmentation-max-side", type=int, help="Segmentation max side.")
    p.add_argument("--segmentation-model-id", type=str, help="Segmentation model id (e.g., facebook/sam3 or maskformer).")
    p.add_argument("--segmentation-score-thresh", type=float, help="Segmentation score threshold.")
    p.add_argument("--segmentation-mask-thresh", type=float, help="Segmentation mask threshold.")
    p.add_argument("--altitude-m", type=float, help="Camera altitude in meters.")
    p.add_argument("--fov-deg", type=float, help="Camera FOV in degrees.")
    p.add_argument("--water-prompt", type=str, help="Water segmentation prompt.")
    p.add_argument("--road-prompt", type=str, help="Road segmentation prompt.")
    p.add_argument("--tree-prompt", type=str, help="Tree segmentation prompt.")
    p.add_argument("--use-water-mask", action="store_true", dest="use_water_mask", help="Enable water mask.")
    p.add_argument("--no-water-mask", action="store_false", dest="use_water_mask", help="Disable water mask.")
    p.add_argument("--use-road-mask", action="store_true", dest="use_road_mask", help="Enable road mask.")
    p.add_argument("--no-road-mask", action="store_false", dest="use_road_mask", help="Disable road mask.")
    p.add_argument("--use-roof-mask", action="store_true", dest="use_roof_mask", default=True, help="Enable roof mask.")
    p.add_argument("--no-roof-mask", action="store_false", dest="use_roof_mask", help="Disable roof mask.")
    p.set_defaults(use_water_mask=True, use_road_mask=True)
    p.add_argument("--cpu", action="store_true", help="Force CPU inference to avoid CUDA OOM.")
    # View controls
    p.add_argument("--base-view", type=str, default="RGB", help="Base view to compose (RGB/Depth/etc).")
    p.add_argument("--heat-opacity", type=float, default=0.2, help="Safety overlay opacity.")
    p.add_argument("--hazard-opacity", type=float, default=0.2, help="Hazard overlay opacity.")
    return p


if __name__ == "__main__":
    parser = build_parser()
    args = parser.parse_args()
    if args.cpu:
        import os

        os.environ["CUDA_VISIBLE_DEVICES"] = ""
    precompute_curated(
        manifest_path=Path(args.manifest),
        output_root=Path(args.output_dir),
        args=args,
        base_view=args.base_view,
        heat_opacity=args.heat_opacity,
        hazard_opacity=args.hazard_opacity,
    )
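The `_to_builtin` helper in the precompute script exists because `json.dump` rejects numpy scalars (`np.bool_`, `np.int64`, `np.float32`, …) that leak out of the analysis summary. A standalone sketch of the same recursion; the `summary` dict here is made up for illustration:

```python
import json

import numpy as np


def to_builtin(obj):
    """Recursively convert numpy scalars to JSON-friendly Python types."""
    if isinstance(obj, np.generic):
        return obj.item()          # np.int64 -> int, np.bool_ -> bool, etc.
    if isinstance(obj, dict):
        return {k: to_builtin(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_builtin(v) for v in obj]
    return obj


summary = {"safe": np.bool_(True), "dims": (np.int64(512), np.int64(384))}
decoded = json.loads(json.dumps(to_builtin(summary)))  # round-trips cleanly
```

Tuples come back as JSON arrays (Python lists) after the round trip, which is fine for the curated index since nothing downstream relies on tuple identity.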