import importlib

import streamlit as st

st.set_page_config(page_title="Perception Benchmark", layout="wide", page_icon="🦅")
# ===================================================================
# Routing - Sidebar Navigation
# ===================================================================
PIPELINES = {
    "🏠 Home": None,
    "📐 Stereo + Depth": {
        "🧪 Data Lab": "tabs.stereo.data_lab",
        "🔬 Feature Lab": "tabs.stereo.feature_lab",
        "⚙️ Model Tuning": "tabs.stereo.model_tuning",
        "📍 Localization Lab": "tabs.stereo.localization",
        "🎯 Real-Time Detection": "tabs.stereo.detection",
        "📊 Evaluation": "tabs.stereo.evaluation",
        "📐 Stereo Geometry": "tabs.stereo.stereo_depth",
    },
    "🌍 Generalisation": {
        "🧪 Data Lab": "tabs.generalisation.data_lab",
        "🔬 Feature Lab": "tabs.generalisation.feature_lab",
        "⚙️ Model Tuning": "tabs.generalisation.model_tuning",
        "📍 Localization Lab": "tabs.generalisation.localization",
        "🎯 Real-Time Detection": "tabs.generalisation.detection",
        "📊 Evaluation": "tabs.generalisation.evaluation",
    },
}
st.sidebar.title("🦅 Recognition BenchMark")
pipeline_choice = st.sidebar.radio("Pipeline", list(PIPELINES.keys()), key="nav_pipeline")

stage_module = None
if PIPELINES[pipeline_choice] is not None:
    stages_map = PIPELINES[pipeline_choice]
    stage_choice = st.sidebar.radio("Stage", list(stages_map.keys()), key="nav_stage")
    module_path = stages_map[stage_choice]
    # Dynamic import: only the selected stage module is loaded on each rerun.
    stage_module = importlib.import_module(module_path)
# Session status widget (always visible in sidebar)
st.sidebar.divider()
st.sidebar.subheader("📋 Session Status")
for pipe_label, pipe_key in [("Stereo", "stereo_pipeline"), ("General", "gen_pipeline")]:
    pipe = st.session_state.get(pipe_key, {})
    checks = {
        "Data locked": "train_image" in pipe,
        "Crop defined": "crop" in pipe,
        "Modules set": "active_modules" in pipe,
        "RCE trained": "rce_head" in pipe,
        "CNN trained": any(f"cnn_head_{n}" in pipe
                           for n in ["ResNet-18", "MobileNetV3", "MobileViT-XXS"]),
        "Dets ready": "rce_dets" in pipe or "cnn_dets" in pipe,
    }
    with st.sidebar.expander(f"**{pipe_label}** – {sum(checks.values())}/{len(checks)}"):
        for label, done in checks.items():
            st.markdown(f"{'✅' if done else '⬜'} {label}")
# ===================================================================
# Home Page
# ===================================================================
def render_home():
    st.title("🦅 Recognition BenchMark")
    st.subheader("A stereo-vision pipeline for object recognition & depth estimation")
    st.caption("Compare classical feature engineering (RCE) against modern deep learning backbones, end-to-end, in your browser.")

    st.divider()
    # -------------------------------------------------------------------
    # Two Pipelines
    # -------------------------------------------------------------------
    st.header("🗺️ Two Pipelines")
    st.markdown("""
Choose a pipeline from the **sidebar**:

- **📐 Stereo + Depth** – 7 stages. Uses a stereo image pair (LEFT = train, RIGHT = test)
  with calibration data and ground-truth disparities. Ends with metric depth estimation.
- **🌍 Generalisation** – 6 stages. Uses different scene *variants* from the Middlebury dataset
  (train on one variant, test on another). Tests how well models generalise across viewpoints.
""")
    col_s, col_g = st.columns(2)
    with col_s:
        st.markdown("### 📐 Stereo + Depth (7 stages)")
        stereo_stages = [
            ("🧪", "Data Lab", "Load stereo pair, calib, GT depth. Define ROIs."),
            ("🔬", "Feature Lab", "Toggle RCE modules, compare CNN activations."),
            ("⚙️", "Model Tuning", "Train RCE / CNN / ORB heads."),
            ("📍", "Localization Lab", "Compare 5 localization strategies."),
            ("🎯", "Real-Time Detection", "Sliding window on the TEST image."),
            ("📊", "Evaluation", "Confusion matrices, PR curves, F1."),
            ("📐", "Stereo Geometry", "StereoSGBM disparity → metric depth."),
        ]
        for icon, title, desc in stereo_stages:
            st.markdown(f"{icon} **{title}** – {desc}")
    with col_g:
        st.markdown("### 🌍 Generalisation (6 stages)")
        gen_stages = [
            ("🧪", "Data Lab", "Pick scene group & variants (train → test)."),
            ("🔬", "Feature Lab", "Toggle RCE modules, compare CNN activations."),
            ("⚙️", "Model Tuning", "Train RCE / CNN / ORB heads."),
            ("📍", "Localization Lab", "Compare 5 localization strategies."),
            ("🎯", "Real-Time Detection", "Sliding window on a different variant."),
            ("📊", "Evaluation", "Confusion matrices, PR curves, F1."),
        ]
        for icon, title, desc in gen_stages:
            st.markdown(f"{icon} **{title}** – {desc}")
    st.divider()

    # -------------------------------------------------------------------
    # Models
    # -------------------------------------------------------------------
    st.header("🧠 Models Used")
    tab_rce, tab_resnet, tab_mobilenet, tab_mobilevit, tab_yolo = st.tabs(
        ["RCE Engine", "ResNet-18", "MobileNetV3-Small", "MobileViT-XXS", "YOLOv8n"])
    with tab_rce:
        st.markdown("### 🧬 RCE – Relative Contextual Encoding")
        st.markdown("""
**Type:** Modular hand-crafted feature extractor
**Architecture:** Seven physics-inspired modules, each producing a 10-bin histogram:

| Module | Input | Operation |
|--------|-------|-----------|
| **Intensity** | Grayscale | Pixel-value histogram (global appearance) |
| **Sobel** | Gradient magnitude | Edge strength distribution (texture) |
| **Spectral** | FFT log-magnitude | Frequency content (pattern / structure) |
| **Laplacian** | Laplacian response | Second-derivative focus / sharpness |
| **Gradient Orientation** | Sobel angles | Edge direction histogram |
| **Gabor** | Multi-kernel response | Texture at multiple orientations / scales |
| **LBP** | Local Binary Patterns | Micro-texture descriptor |

Max feature vector = **70D** (7 modules × 10 bins).
""")
    with tab_resnet:
        st.markdown("### 🏛️ ResNet-18")
        st.markdown("""
**Source:** PyTorch Hub (`torchvision.models.ResNet18_Weights.DEFAULT`)
**Pre-training:** ImageNet-1k (1.28 M images, 1,000 classes)
**Backbone output:** 512-dimensional embedding (after `avgpool`)
**Head:** LogisticRegression trained on your session data
**In this app:** The entire backbone is **frozen** (`requires_grad=False`).
Only the lightweight head adapts to your specific object.
""")
    with tab_mobilenet:
        st.markdown("### 📱 MobileNetV3-Small")
        st.markdown("""
**Source:** PyTorch Hub (`torchvision.models.MobileNet_V3_Small_Weights.DEFAULT`)
**Pre-training:** ImageNet-1k
**Backbone output:** 576-dimensional embedding
**Head:** LogisticRegression trained on your session data
**In this app:** Typically 3–5× faster than ResNet-18.
""")
    with tab_mobilevit:
        st.markdown("### 🤖 MobileViT-XXS")
        st.markdown("""
**Source:** timm – `mobilevit_xxs.cvnets_in1k` (Apple Research, 2022)
**Pre-training:** ImageNet-1k
**Backbone output:** 320-dimensional embedding (`num_classes=0`)
**Head:** LogisticRegression trained on your session data
**In this app:** Hybrid CNN + Vision Transformer. Only ~1.3 M parameters.
""")
    with tab_yolo:
        st.markdown("### 🎯 YOLOv8-Nano (Backbone)")
        st.markdown("""
**Source:** Ultralytics YOLOv8n (`models/yolov8n.pt`)
**Pre-training:** COCO (80 classes)
**Backbone output:** 256-dimensional embedding (after SPPF + GAP)
**Head:** LogisticRegression trained on your session data
**In this app:** Only the backbone (layers 0–9) is used as a frozen
feature extractor; the detection head is discarded. Smallest backbone
in the benchmark at 256D.
""")
    st.divider()

    # -------------------------------------------------------------------
    # Depth Estimation
    # -------------------------------------------------------------------
    st.header("📐 Stereo Depth Estimation")
    tab_sgbm, tab_dav2, tab_epi = st.tabs(
        ["StereoSGBM (Classical)", "Depth Anything V2 (NN)", "Epipolar Geometry (Sparse)"])
    with tab_sgbm:
        st.markdown("### 📐 StereoSGBM – Semi-Global Block Matching")
        col_d1, col_d2 = st.columns(2)
        with col_d1:
            st.markdown("""
**Algorithm:** `cv2.StereoSGBM`

SGBM minimises a global energy function combining:

- Data cost (pixel intensity difference)
- Smoothness penalty (P1, P2 regularisation)

It aggregates matching costs along multiple horizontal and diagonal scan-line passes,
making it significantly more accurate than basic block matching.
""")
        with col_d2:
            st.markdown("**Depth formula (Middlebury convention):**")
            st.latex(r"Z = \frac{f \times B}{d + d_{\text{offs}}}")
            st.markdown(r"""
- $f$ – focal length (pixels)
- $B$ – baseline (mm, from calibration file)
- $d$ – disparity (pixels)
- $d_{\text{offs}}$ – optical-center offset between cameras
""")
    with tab_dav2:
        st.markdown("### 🤖 Depth Anything V2 Small – Monocular Depth NN")
        col_n1, col_n2 = st.columns(2)
        with col_n1:
            st.markdown("""
**Source:** HuggingFace – `depth-anything/Depth-Anything-V2-Small-hf`
**Pre-training:** 62 M synthetic + real images (DA-2 dataset)
**Architecture:** ViT-Small encoder + DPT decode head
**Output:** Relative inverse-depth map (not metric)
**Parameters:** ~24 M | **Weights:** ~100 MB
**Inference:** CPU-only, ~300–500 ms at Middlebury resolution
**In this app:** Used as a comparison baseline against StereoSGBM.
Because the NN output is scale-agnostic, a **least-squares affine
alignment** is applied before computing error metrics:
""")
            st.latex(r"\hat{d} = \alpha \cdot d_{\text{NN}} + \beta")
            st.markdown(r"where $\alpha, \beta$ are fitted over mutually valid pixels.")
        with col_n2:
            st.markdown("""
**Why compare these?**

| | StereoSGBM | Depth Anything V2 |
|---|---|---|
| **Input** | Stereo pair | Single image |
| **Output** | Metric disparity | Relative depth |
| **Speed** | ~50 ms | ~400 ms |
| **Needs calibration** | ✅ Yes | ❌ No |
| **Generalises to new scenes** | Limited | ✅ Strong |
| **Error metric** | Direct MAE/RMSE | After alignment |

The Stereo stage shows both side-by-side with MAE, RMSE,
and Bad-2.0 pixel error against the Middlebury ground truth.
""")
    with tab_epi:
        st.markdown("### 📐 Epipolar Geometry – Sparse Stereo Matching")
        col_e1, col_e2 = st.columns(2)
        with col_e1:
            st.markdown("""
**What it is:** The classical, principled way to find correspondences between a stereo pair.
Unlike StereoSGBM, which searches every pixel on the same row, the epipolar
approach works **point by point** on detected objects (a sketch of steps 2–3 follows the list):

1. **Detect key-points** (ORB) inside the bounding box in the **left** image.
2. **Compute the fundamental matrix F** from the camera calibration:
""")
            st.latex(r"F = K_R^{-T} \; [t]_\times \; K_L^{-1}")
            st.markdown(r"""
3. **Project each key-point** through F: this produces an **epipolar line** in the right image.
4. **Template-match** a patch around the key-point *along* that line (NCC).
5. The x-offset between the two matches gives the **disparity** $d = x_L - x_R$.
6. Recover metric depth:
""")
            st.latex(r"Z = \frac{f \times B}{d + d_{\text{offs}}}")
        with col_e2:
            st.markdown("""
**Why epipolar?**

For a rectified stereo pair the epipolar lines are horizontal, so the search
collapses to 1D, and you only pay the cost for key-points you actually care about,
not the whole image.

| | StereoSGBM | Epipolar (sparse) |
|---|---|---|
| **Scope** | All pixels | Key-points inside detections |
| **Search space** | Full row | Along epipolar line (1D) |
| **F matrix used** | ❌ Implicit | ✅ Explicit |
| **Output** | Dense depth map | Depth per key-point |
| **Best for** | Full-scene depth | Object-level depth queries |

**In this app (Step 6 – Stereo Geometry tab):**
- ORB key-points are extracted from each detection bounding-box.
- F is built from the `cam0` / `cam1` matrices in the Middlebury `calib.txt`.
- For rectified Middlebury pairs the epipolar lines are verified horizontal
  (row 0 of F ≈ 0).
- Results are shown alongside the dense SGBM depth in a comparison table.
""")
    st.divider()
    st.caption("Select a pipeline from the **sidebar** to begin.")
# ===================================================================
# Dispatch
# ===================================================================
if stage_module is not None:
    stage_module.render()
else:
    render_home()