fix(arch): comprehensive architecture audit fixes (#44)
* docs: add NEXT-CONCERNS.md detailing critical architecture debt
Introduces a new document outlining validated concerns regarding config drift and dependency pinning, emphasizing the need for a single source of truth in configuration and ensuring reproducible builds. Highlights resolved issues and confirms the current state of frontend configuration. This documentation aims to guide future development and maintain architectural integrity.
* fix(arch): comprehensive architecture audit fixes
P0 (Critical):
- Dockerfile: Add --extra api to uv sync (fixes missing uvicorn at runtime)
P1 (High):
- Makefile: Add --extra api --extra gradio to install target
- Dockerfile: Update stale StaticFiles comment to reflect explicit routes
- app.py: Update stale deployment comment to reference api.main
P2 (Medium) - Wire in dead config:
- loader.py: Wire Settings.hf_dataset_id and Settings.hf_token through
- deepisles.py: Use Settings.deepisles_docker_image (removed hardcoded constant)
- pipeline.py: Wire timeout and gpu settings through (defaults from config)
- routes.py: Use atomic create_job_if_under_limit to prevent TOCTOU race
- job_store.py: Add create_job_if_under_limit atomic method
- files.py: Add NIfTI extension allowlist (defense-in-depth)
P3 (Low) - Documentation:
- pyproject.toml: Update description to mention React SPA + FastAPI
- README.md: Update to describe React SPA + FastAPI architecture
- requirements.txt: Clarify as pip-only fallback
P4 (Nitpicks) - Code cleanliness:
- deepisles.py: Use EXPECTED_INPUT_FILES/OPTIONAL_INPUT_FILES in validation
- frontend: Extract shared retry constants to utils/retry.ts
- inference/__init__.py: Remove DEEPISLES_IMAGE export (now configurable)
All 157 tests pass. Linter and type checker clean.
Audit documented in: docs/bugs/ARCHITECTURE-AUDIT-2024-12-13.md
* refactor(deepisles): use named constants instead of positional unpacking
Address CodeRabbit nitpick: positional unpacking from EXPECTED_INPUT_FILES
creates coupling to list order. Use explicit DWI_FILENAME, ADC_FILENAME,
FLAIR_FILENAME constants for clarity and robustness.
Lists preserved for backwards-compatible exports.
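As a sketch of the before/after described here (the filenames are taken from the audit's deepisles.py excerpt; the positional-unpacking "before" shown in comments is illustrative):

```python
# Before: positional unpacking couples callers to list order
# dwi, adc = EXPECTED_INPUT_FILES
# (flair,) = OPTIONAL_INPUT_FILES

# After: explicit named constants
DWI_FILENAME = "dwi.nii.gz"
ADC_FILENAME = "adc.nii.gz"
FLAIR_FILENAME = "flair.nii.gz"

# Lists preserved for backwards-compatible exports
EXPECTED_INPUT_FILES = [DWI_FILENAME, ADC_FILENAME]
OPTIONAL_INPUT_FILES = [FLAIR_FILENAME]
```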
- Dockerfile +3 -2
- Makefile +1 -1
- README.md +6 -3
- app.py +6 -4
- docs/bugs/ARCHITECTURE-AUDIT-2024-12-13.md +412 -0
- NEXT-CONCERNS.md → docs/bugs/NEXT-CONCERNS.md +0 -0
- frontend/src/components/CaseSelector.tsx +4 -10
- frontend/src/hooks/useSegmentation.ts +7 -17
- frontend/src/utils/retry.ts +30 -0
- pyproject.toml +1 -1
- requirements.txt +3 -2
- src/stroke_deepisles_demo/api/files.py +13 -0
- src/stroke_deepisles_demo/api/job_store.py +51 -0
- src/stroke_deepisles_demo/api/routes.py +12 -8
- src/stroke_deepisles_demo/data/loader.py +15 -9
- src/stroke_deepisles_demo/inference/__init__.py +1 -2
- src/stroke_deepisles_demo/inference/deepisles.py +32 -11
- src/stroke_deepisles_demo/pipeline.py +21 -9
**Dockerfile**

```diff
@@ -40,7 +40,8 @@ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
 
 # Install Python dependencies from lock file (frozen = fail if lock stale)
 # This ensures CI, local dev, and production use IDENTICAL versions
-RUN uv sync --frozen --no-dev --no-install-project
+# CRITICAL: --extra api installs FastAPI/uvicorn required by CMD
+RUN uv sync --frozen --no-dev --no-install-project --extra api
 
 # Copy application source code and package files
 COPY --chown=1000:1000 README.md /home/user/demo/README.md
@@ -66,7 +67,7 @@ ENV DEEPISLES_PATH=/app
 ENV HF_HOME=/home/user/demo/cache
 
 # Create directories for data with proper permissions
-# CRITICAL: /tmp/stroke-results is required for FastAPI StaticFiles mount
+# /tmp/stroke-results stores job result files, served via explicit /files/{job_id}/ routes
 RUN mkdir -p /home/user/demo/data /home/user/demo/results /home/user/demo/cache /tmp/stroke-results && \
     chown -R 1000:1000 /home/user/demo /tmp/stroke-results
```
**Makefile**

```diff
@@ -1,7 +1,7 @@
 .PHONY: install test test-integration test-all lint format check all clean
 
 install:
-	uv sync
+	uv sync --extra api --extra gradio
 
 test:
 	uv run pytest
```
**README.md**

````diff
@@ -34,7 +34,9 @@ A demonstration pipeline and UI for ischemic stroke lesion segmentation using **
 This project provides a complete end-to-end workflow:
 1. **Data Loading**: Lazy-loading of NIfTI neuroimaging data from HuggingFace.
 2. **Inference**: Running DeepISLES segmentation (SEALS or Ensemble) via Docker.
-3. **Visualization**: Interactive 3D and multi-planar viewing with NiiVue in Gradio.
+3. **Visualization**: Interactive 3D viewing with NiiVue in React SPA + FastAPI backend.
+
+> **Note**: A legacy Gradio UI is available for local development (`app.py`).
 
 > **Disclaimer**: This software is for research and demonstration purposes only. It is not intended for clinical use.
 
@@ -43,7 +45,7 @@ This project provides a complete end-to-end workflow:
 - 🧠 **State-of-the-Art Segmentation**: Uses DeepISLES (ISLES'22 winner) for accurate lesion segmentation.
 - ☁️ **Cloud-Native Data**: Streams data directly from HuggingFace Datasets (no massive downloads).
 - 🐳 **Dockerized Inference**: Encapsulates complex deep learning dependencies in a reproducible container.
-- 🖥️ **…
+- 🖥️ **Modern UI**: React SPA + FastAPI backend with NiiVue for 3D neuroimaging visualization.
 - ⚙️ **Production Ready**: Type-safe, tested, and configurable via environment variables.
 
 ## Requirements
@@ -114,7 +116,8 @@ graph TD
     Staging -->|Mount Volume| Docker[DeepISLES Container]
     Docker -->|Inference| Results[Prediction Mask]
     Results -->|Load| Metrics["Metrics (Dice)"]
-    Results -->|…
+    Results -->|Serve via API| FastAPI[FastAPI Backend]
+    FastAPI -->|JSON + Files| React[React SPA + NiiVue]
 ```
 
 ## License
````
**app.py**

```diff
@@ -1,9 +1,11 @@
-"""Alternative entry point for local development.
+"""Alternative entry point for local Gradio development.
 
-NOTE: HuggingFace Spaces Docker deployment uses …
-…
+NOTE: HuggingFace Spaces Docker deployment uses FastAPI via uvicorn:
+    uvicorn stroke_deepisles_demo.api.main:app --host 0.0.0.0 --port 7860
+(see Dockerfile CMD). This file runs the legacy Gradio UI for local development.
 
-For HF Spaces deployment, see: src/stroke_deepisles_demo/…
+For HF Spaces deployment, see: src/stroke_deepisles_demo/api/main.py
+For legacy Gradio UI, see: src/stroke_deepisles_demo/ui/app.py
 """
 
 import gradio as gr
```
**docs/bugs/ARCHITECTURE-AUDIT-2024-12-13.md** (new file, +412 lines):
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Architecture Audit - 2024-12-13

**Auditor**: Claude Code (validating external analysis)
**Date**: 2024-12-13
**Status**: VALIDATED - Fixes in branch `fix/architecture-audit`

## Summary

External audit identified multiple issues. This document validates each claim from first principles
and documents the fix strategy. Per user directive: **wire in settings properly rather than removing them**.

---

## P0 - Critical (Release Blockers)

### P0-001: Docker build missing API extras ⚠️ CONFIRMED

**Location**: `Dockerfile:43` + `Dockerfile:94`

**Claim**: Container runs uvicorn but `uv sync --no-dev --no-install-project` doesn't include `--extra api`.

**Validation**:
```dockerfile
# Line 43: Dependencies installed without API extra
RUN uv sync --frozen --no-dev --no-install-project

# Line 94: But CMD requires uvicorn (which is in api extra!)
CMD ["uvicorn", "stroke_deepisles_demo.api.main:app", ...]
```

In `pyproject.toml`:
```toml
[project.optional-dependencies]
api = [
    "fastapi>=0.115.0",
    "uvicorn[standard]>=0.32.0",
]
```

**Impact**: Container will crash at runtime with `ModuleNotFoundError: No module named 'uvicorn'`

**Fix**:
```dockerfile
RUN uv sync --frozen --no-dev --no-install-project --extra api
```

---

## P1 - High Priority

### P1-001: Makefile install doesn't include extras ⚠️ CONFIRMED (Minor)

**Location**: `Makefile:4`

**Claim**: `make install` runs `uv sync` without extras.

**Validation**: `uv sync` in dev mode does include dev dependencies but NOT optional extras.
Tests requiring FastAPI/Gradio may fail.

**Impact**: Low for dev (most devs run tests via `uv run pytest`), but inconsistent.

**Fix**: Update Makefile to install extras needed for testing:
```makefile
install:
	uv sync --extra api --extra gradio
```

### P1-002: Stale Dockerfile comment about StaticFiles ⚠️ CONFIRMED

**Location**: `Dockerfile:69`

**Claim**: Comment says "StaticFiles mount" but we use explicit routes.

**Validation**:
```dockerfile
# Line 69: STALE COMMENT
# CRITICAL: /tmp/stroke-results is required for FastAPI StaticFiles mount

# But files.py:1-16 explicitly says we REPLACED StaticFiles:
# "BUG-004 FIX: This module replaces the StaticFiles mount approach."
```

**Impact**: Misleads operators debugging file-serving issues.

**Fix**: Update comment to reflect explicit route implementation.

---

## P2 - Medium Priority (Dead Config → Wire In)

### P2-001: hf_dataset_id setting not used ⚠️ CONFIRMED

**Location**: `config.py:79` → `loader.py:213`

**Claim**: `Settings.hf_dataset_id` exists but `load_isles_dataset()` uses hardcoded `DEFAULT_HF_DATASET`.

**Validation**:
```python
# config.py:79
hf_dataset_id: str = "hugging-science/isles24-stroke"

# loader.py:158 (hardcoded duplicate!)
DEFAULT_HF_DATASET = "hugging-science/isles24-stroke"

# loader.py:213 (ignores settings)
dataset_id = str(source) if source else DEFAULT_HF_DATASET
```

**Fix**: Wire `get_settings().hf_dataset_id` through the data loading path.

### P2-002: hf_token setting not used ⚠️ CONFIRMED

**Location**: `config.py:81` → `loader.py:218`

**Claim**: `Settings.hf_token` exists but isn't passed to `datasets.load_dataset()`.

**Validation**:
```python
# config.py:81
hf_token: str | None = Field(default=None, repr=False)

# loader.py:218 (no token!)
ds = load_dataset(dataset_id, split="train")
```

**Fix**: Pass `token=get_settings().hf_token` to `load_dataset()`.

### P2-003: deepisles_docker_image setting ignored ⚠️ CONFIRMED

**Location**: `config.py:84` → `deepisles.py:34`

**Claim**: Setting exists but the hardcoded constant `DEEPISLES_IMAGE` is used.

**Validation**:
```python
# config.py:84
deepisles_docker_image: str = "isleschallenge/deepisles"

# deepisles.py:34 (hardcoded!)
DEEPISLES_IMAGE = "isleschallenge/deepisles"

# deepisles.py:169 (uses constant, not settings)
run_container(DEEPISLES_IMAGE, ...)
```

**Fix**: Use `get_settings().deepisles_docker_image` in `_run_via_docker()`.

### P2-004: deepisles_timeout_seconds setting not wired through ⚠️ CONFIRMED

**Location**: `config.py:86` → `pipeline.py` → `deepisles.py:242`

**Claim**: Timeout setting exists but the pipeline doesn't pass it.

**Validation**:
```python
# config.py:86
deepisles_timeout_seconds: int = 1800

# pipeline.py:148-153 (no timeout parameter!)
inference_result = run_deepisles_on_folder(
    staged.input_dir,
    output_dir=results_dir,
    fast=fast,
    gpu=gpu,
    # timeout missing!
)
```

**Fix**: Pass `timeout=get_settings().deepisles_timeout_seconds` through the pipeline.

### P2-005: deepisles_use_gpu setting not used by API ⚠️ CONFIRMED

**Location**: `config.py:87` → `routes.py:232`

**Claim**: GPU setting exists but the API path doesn't pass it.

**Validation**:
```python
# config.py:87
deepisles_use_gpu: bool = True

# routes.py:232-238 (no gpu parameter!)
result = run_pipeline_on_case(
    case_id,
    output_dir=output_dir,
    fast=fast_mode,
    compute_dice=True,
    cleanup_staging=True,
    # gpu missing!
)
```

**Fix**: Pass `gpu=get_settings().deepisles_use_gpu` through the API route.

### P2-006: Dataset reloaded on every /api/cases call ⚠️ CONFIRMED

**Location**: `routes.py:49` → `data/__init__.py:34`

**Claim**: `list_case_ids()` reloads the dataset each time.

**Validation**:
```python
# data/__init__.py:34-41
def list_case_ids() -> list[str]:
    with load_isles_dataset() as dataset:  # Fresh load every call!
        return dataset.list_case_ids()
```

**Impact**: Unnecessary latency on cold paths, amplifies HF wake-up time.

**Fix**: Add TTL cache for case IDs list.

### P2-007: Double dataset load on segment request ⚠️ CONFIRMED

**Location**: `routes.py:101` → `pipeline.py:90`

**Claim**: Validation loads the dataset, then the pipeline loads it again.

**Validation**:
```python
# routes.py:101
valid_cases = list_case_ids()  # First load

# Then in run_pipeline_on_case (routes.py:232)
# pipeline.py:90
with load_isles_dataset() as dataset:  # Second load!
```

**Fix**: Remove the validation pre-check; let the pipeline raise a controlled error.

### P2-008: File download has no extension allowlist ⚠️ CONFIRMED (Low Risk)

**Location**: `files.py:29`

**Claim**: Any file under the job dir is servable.

**Validation**: Path traversal is blocked, but there is no extension filter.
Currently only NIfTI files end up in results dirs, but defense-in-depth is better.

**Fix**: Add extension allowlist: `.nii`, `.nii.gz`.
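One subtlety in this allowlist: `.nii.gz` is a double extension, so `Path.suffix` alone (which returns `.gz`) is not enough. A sketch of the check (the `is_allowed_result_file` helper name is hypothetical):

```python
from pathlib import Path

# Defense-in-depth: only serve NIfTI files from result directories
_ALLOWED_EXTENSIONS = (".nii", ".nii.gz")


def is_allowed_result_file(filename: str) -> bool:
    """Check a candidate filename against the NIfTI allowlist.

    Uses endswith rather than Path.suffix because ".nii.gz" is a
    double extension (Path("x.nii.gz").suffix is just ".gz").
    """
    name = Path(filename).name.lower()  # strip any directory part
    return name.endswith(_ALLOWED_EXTENSIONS)
```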

### P2-009: Concurrency limit check-then-create not atomic ⚠️ CONFIRMED (Mitigated)

**Location**: `routes.py:92-113`

**Claim**: TOCTOU race in concurrency limiting.

**Validation**:
```python
# routes.py:92-98
if store.get_active_job_count() >= max:  # Check
    raise 503
# ... other code ...
store.create_job(job_id, ...)  # Create (gap!)
```

**Mitigation**: Single-worker uvicorn (no multi-worker race). But the code smell remains.

**Fix**: Add atomic `create_job_if_under_limit()` method to JobStore.
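The atomic method can be as simple as holding one lock across the count and the insert. A sketch assuming an in-memory store (the internal fields and the default limit are guesses; only the method and class names come from this PR):

```python
import threading


class JobStore:
    """Sketch of the atomic check-and-create described above.

    Holding a single lock across the count check and the insert closes
    the TOCTOU gap between get_active_job_count() and create_job().
    """

    def __init__(self, max_active_jobs: int = 2):
        self._lock = threading.Lock()
        self._jobs: dict[str, str] = {}  # job_id -> status
        self._max_active = max_active_jobs

    def create_job_if_under_limit(self, job_id: str) -> bool:
        """Atomically create the job; return False if at capacity."""
        with self._lock:
            active = sum(
                1 for s in self._jobs.values() if s in ("pending", "running")
            )
            if active >= self._max_active:
                return False  # caller translates this into HTTP 503
            self._jobs[job_id] = "pending"
            return True
```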

### P2-010: Gradio cleanup scope mismatch ⚠️ CONFIRMED

**Location**: `ui/app.py:67` + `pipeline.py:107`

**Claim**: Gradio cleanup only checks `results_dir`, but the pipeline creates its temp dir in the system temp.

**Validation**:
```python
# ui/app.py:67
allowed_root = get_settings().results_dir.resolve()

# pipeline.py:107 (when output_dir is None)
base_temp = Path(tempfile.mkdtemp(prefix="deepisles_pipeline_"))
# Creates in /tmp, NOT in results_dir!
```

**Impact**: Disk leak - the Gradio UI's temp files are never cleaned up.

**Fix**: Pass `output_dir=get_settings().results_dir / unique_id` from the Gradio UI to the pipeline.

---

## P3 - Low Priority (Documentation/Metadata)

### P3-001: Root app.py has stale deployment comment ⚠️ CONFIRMED

**Location**: `app.py:4`

**Claim**: Says HF Spaces uses `ui.app` but the Dockerfile runs `api.main`.

**Current**:
```python
# NOTE: HuggingFace Spaces Docker deployment uses `python -m stroke_deepisles_demo.ui.app`
```

**Fix**: Update to reference `api.main:app` via uvicorn.

### P3-002: pyproject.toml description mentions Gradio ⚠️ CONFIRMED

**Location**: `pyproject.toml:4`

**Current**:
```toml
description = "Demo: HF datasets + DeepISLES stroke segmentation + Gradio visualization"
```

**Fix**: Update to mention React SPA + FastAPI as primary, Gradio as legacy.

### P3-003: README describes Gradio as visualization layer ⚠️ CONFIRMED

**Location**: `README.md:37`

**Current**:
```markdown
3. **Visualization**: Interactive 3D and multi-planar viewing with NiiVue in Gradio.
```

**Fix**: Update to describe the React SPA + FastAPI architecture; note Gradio as a legacy option.

### P3-004: requirements.txt exists alongside uv.lock ⚠️ CONFIRMED

**Location**: `requirements.txt` + `Dockerfile:31`

**Validation**: requirements.txt exists (547 bytes) but the Dockerfile only uses uv.lock.

**Fix**: Either remove requirements.txt or add a comment clarifying it's for pip-only environments.

---

## P4 - Nitpicks (Code Cleanliness)

### P4-001: pipeline.py dataset_id parameter ignored ⚠️ CONFIRMED

**Location**: `pipeline.py:60`

```python
dataset_id: str | None = None,  # Accepted
# ...
_ = dataset_id  # But explicitly ignored (line 84)
```

**Fix**: Wire `dataset_id` through to `load_isles_dataset()`.

### P4-002: pipeline.py max_workers parameter ignored ⚠️ CONFIRMED

**Location**: `pipeline.py:186`

```python
max_workers: int = 1,  # Accepted
# ...
_ = max_workers  # Explicitly ignored (line 206)
```

**Note**: The docstring correctly says "Currently ignored - reserved for future parallel support."
This is acceptable tech debt - the parameter exists for API stability.

**Fix**: Leave as-is (documented intentional limitation).

### P4-003: deepisles.py has unused constants ⚠️ CONFIRMED

**Location**: `deepisles.py:35-36`

```python
EXPECTED_INPUT_FILES = ["dwi.nii.gz", "adc.nii.gz"]
OPTIONAL_INPUT_FILES = ["flair.nii.gz"]
```

These are defined but never used.

**Fix**: Use them in `validate_input_folder()` error messages, or remove them.

### P4-004: Frontend duplicated retry constants ⚠️ CONFIRMED

**Location**: `useSegmentation.ts:9-11` + `CaseSelector.tsx:5-7`

Both files define:
```typescript
const MAX_COLD_START_RETRIES = 5;
const INITIAL_RETRY_DELAY = 2000;
const MAX_RETRY_DELAY = 30000;
```

**Fix**: Extract to a shared `frontend/src/utils/retry.ts`.

---

## Architecture Violations Check ✅ PASSED

The external audit confirmed NO architecture violations:
- No API importing/calling Gradio/UI code
- Clear React SPA / FastAPI backend separation
- Strong path traversal defenses in file serving
- Safe job-id handling and cleanup

---

## Fix Priority Order

1. **P0-001**: Docker build crash (release blocker)
2. **P1-001, P1-002**: Makefile + stale comment
3. **P2-001 through P2-005**: Wire in dead config settings
4. **P2-006, P2-007**: Dataset caching
5. **P2-008 through P2-010**: Security hardening
6. **P3-***: Documentation updates
7. **P4-***: Code cleanliness

---

## Implementation Notes

Per user directive: **Wire settings in properly rather than removing dead config.**
These settings were created for a reason - they should work as documented.
**NEXT-CONCERNS.md → docs/bugs/NEXT-CONCERNS.md**: file renamed without changes.
**frontend/src/components/CaseSelector.tsx**

```diff
@@ -1,10 +1,6 @@
 import { useEffect, useState } from "react";
 import { apiClient, ApiError } from "../api/client";
-
-// Cold start retry configuration (matches useSegmentation.ts)
-const MAX_COLD_START_RETRIES = 5;
-const INITIAL_RETRY_DELAY = 2000;
-const MAX_RETRY_DELAY = 30000;
+import { MAX_COLD_START_RETRIES, getRetryDelay } from "../utils/retry";
 
 interface CaseSelectorProps {
   selectedCase: string | null;
@@ -54,12 +50,10 @@ export function CaseSelector({
       setRetryCount(attempts);
       setIsWakingUp(true);
 
-      // Exponential backoff
-      const delay = Math.min(
-        INITIAL_RETRY_DELAY * Math.pow(2, attempts - 1),
-        MAX_RETRY_DELAY,
+      // Exponential backoff with capped maximum
+      await new Promise((resolve) =>
+        setTimeout(resolve, getRetryDelay(attempts)),
       );
-      await new Promise((resolve) => setTimeout(resolve, delay));
       continue;
     }
 
```
**frontend/src/hooks/useSegmentation.ts**

```diff
@@ -1,21 +1,15 @@
 import { useState, useCallback, useRef, useEffect } from "react";
 import { apiClient, ApiError } from "../api/client";
 import type { SegmentationResult, JobStatus } from "../types";
+import {
+  MAX_COLD_START_RETRIES,
+  getRetryDelay,
+  sleep,
+} from "../utils/retry";
 
 // Polling interval in milliseconds
 const POLLING_INTERVAL = 2000;
 
-// Cold start retry configuration
-const MAX_COLD_START_RETRIES = 5;
-const INITIAL_RETRY_DELAY = 2000; // 2 seconds
-const MAX_RETRY_DELAY = 30000; // 30 seconds
-
-/**
- * Sleep utility for async delays
- */
-const sleep = (ms: number): Promise<void> =>
-  new Promise((resolve) => setTimeout(resolve, ms));
-
 /**
  * Hook for running segmentation with async job polling.
  *
@@ -204,12 +198,8 @@ export function useSegmentation() {
       );
       setProgress(0);
 
-      // Exponential backoff
-      const delay = Math.min(
-        INITIAL_RETRY_DELAY * Math.pow(2, retryCount - 1),
-        MAX_RETRY_DELAY,
-      );
-      await sleep(delay);
+      // Exponential backoff with capped maximum
+      await sleep(getRetryDelay(retryCount));
 
       // Continue to next iteration of retry loop
       continue;
```
**frontend/src/utils/retry.ts** (new file)

```typescript
/**
 * Shared retry configuration for cold-start handling.
 *
 * HuggingFace Spaces containers can take 30-60 seconds to wake from sleep.
 * This module provides shared constants and utilities for exponential backoff retry.
 */

// Cold start retry configuration
export const MAX_COLD_START_RETRIES = 5;
export const INITIAL_RETRY_DELAY = 2000; // 2 seconds
export const MAX_RETRY_DELAY = 30000; // 30 seconds

/**
 * Calculate exponential backoff delay with capped maximum.
 *
 * @param attempt - Current retry attempt (1-indexed)
 * @returns Delay in milliseconds
 */
export function getRetryDelay(attempt: number): number {
  return Math.min(INITIAL_RETRY_DELAY * Math.pow(2, attempt - 1), MAX_RETRY_DELAY);
}

/**
 * Sleep utility for async delays.
 *
 * @param ms - Milliseconds to sleep
 */
export function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```
**pyproject.toml**

```diff
@@ -1,7 +1,7 @@
 [project]
 name = "stroke-deepisles-demo"
 version = "0.1.0"
-description = "Demo: HF datasets + DeepISLES stroke segmentation + Gradio visualization"
+description = "Demo: HF datasets + DeepISLES stroke segmentation + React SPA + FastAPI backend"
 readme = "README.md"
 license = { text = "Apache-2.0" }
 requires-python = ">=3.11"
```
requirements.txt
@@ -1,6 +1,7 @@
-# requirements.txt
+# requirements.txt - Fallback for pip-only environments
+# NOTE: Primary dependency management uses uv.lock (see Dockerfile)
+# This file is for environments without uv (e.g., some CI systems)
 # Generated: December 2025
-# See: docs/specs/07-hf-spaces-deployment.md
 
 # Core - BIDS + NIfTI lazy loading (maintained fork)
 neuroimaging-go-brrrr @ git+https://github.com/The-Obstacle-Is-The-Way/neuroimaging-go-brrrr.git@v0.2.1
files.py
@@ -23,6 +23,10 @@ from stroke_deepisles_demo.core.logging import get_logger
 
 logger = get_logger(__name__)
 
+# Allowed file extensions (defense-in-depth)
+# Only serve NIfTI files to prevent accidental exposure of logs/metadata
+_ALLOWED_EXTENSIONS = {".nii", ".nii.gz"}
+
 files_router = APIRouter(prefix="/files", tags=["files"])
 
 
@@ -44,6 +48,15 @@ async def get_result_file(job_id: str, case_id: str, filename: str) -> FileResponse:
     Raises:
         404: File not found (job expired, invalid path, or doesn't exist)
     """
+    # Security: Validate file extension (defense-in-depth)
+    # Only serve NIfTI files to prevent accidental exposure of logs/metadata
+    if not any(filename.endswith(ext) for ext in _ALLOWED_EXTENSIONS):
+        logger.warning("Blocked request for non-NIfTI file: %s", filename)
+        raise HTTPException(
+            status_code=404,
+            detail="Only NIfTI files (.nii, .nii.gz) can be served.",
+        )
+
     # Construct file path
     results_dir = get_settings().results_dir
     file_path = results_dir / job_id / case_id / filename
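The allowlist check uses `str.endswith` rather than `pathlib.Path.suffix` because compressed NIfTI carries a double extension: `Path("mask.nii.gz").suffix` is only `".gz"`. A standalone sketch of the same predicate (the name `is_servable` is illustrative, not the project's API):

```python
# Why endswith and not Path.suffix: ".nii.gz" is a double extension,
# and Path("mask.nii.gz").suffix would report only ".gz".
ALLOWED_EXTENSIONS = {".nii", ".nii.gz"}

def is_servable(filename: str) -> bool:
    """True only for NIfTI filenames; logs/metadata like .log or .txt are rejected."""
    return any(filename.endswith(ext) for ext in ALLOWED_EXTENSIONS)

print(is_servable("lesion_mask.nii.gz"))  # True
print(is_servable("lesion_mask.nii"))     # True
print(is_servable("job.log"))             # False
print(is_servable("mask.nii.gz.txt"))     # False (only the trailing extension counts)
```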
job_store.py
@@ -195,6 +195,57 @@ class JobStore:
         logger.info("Created job %s", job_id)
         return job
 
+    def create_job_if_under_limit(
+        self,
+        job_id: str,
+        case_id: str,
+        fast_mode: bool,
+        max_active: int,
+    ) -> Job | None:
+        """Atomically create a job if under concurrency limit.
+
+        This prevents TOCTOU race conditions where check-then-create
+        could exceed the limit under concurrent requests.
+
+        Args:
+            job_id: Unique identifier for the job
+            case_id: Case to process
+            fast_mode: Whether to use fast inference
+            max_active: Maximum allowed active (pending/running) jobs
+
+        Returns:
+            The created Job if under limit, None if limit reached
+
+        Raises:
+            ValueError: If job_id is invalid (contains unsafe characters)
+            KeyError: If job_id already exists
+        """
+        if not self._is_safe_job_id(job_id):
+            raise ValueError(f"Invalid job_id: {job_id!r}")
+
+        job = Job(
+            id=job_id,
+            status=JobStatus.PENDING,
+            case_id=case_id,
+            fast_mode=fast_mode,
+            created_at=datetime.now(),
+        )
+
+        with self._lock:
+            # Check limit atomically with creation
+            active_count = sum(
+                1 for j in self._jobs.values() if j.status in (JobStatus.PENDING, JobStatus.RUNNING)
+            )
+            if active_count >= max_active:
+                return None
+
+            if job_id in self._jobs:
+                raise KeyError(f"Job already exists: {job_id}")
+            self._jobs[job_id] = job
+
+        logger.info("Created job %s", job_id)
+        return job
+
     def get_job(self, job_id: str) -> Job | None:
         """Get a job by ID.
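The reason the count check and the insert must share one lock can be demonstrated with a toy in-memory store. This is a simplified stand-in for illustration, not the project's `JobStore`:

```python
import threading

class TinyStore:
    """Toy store: the count check and the insert happen under one lock,
    so concurrent callers cannot both pass the limit check (no TOCTOU window)."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._jobs: dict[str, str] = {}

    def create_if_under_limit(self, job_id: str, max_active: int) -> bool:
        with self._lock:
            if len(self._jobs) >= max_active:
                return False
            self._jobs[job_id] = "pending"
            return True

store = TinyStore()
results: list[bool] = []
threads = [
    threading.Thread(target=lambda i=i: results.append(store.create_if_under_limit(f"job-{i}", 3)))
    for i in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))  # 3 -- exactly max_active creations succeed, regardless of interleaving
```

With an unlocked check-then-create, two threads could both observe `len(self._jobs) < max_active` before either inserts, exceeding the limit.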
routes.py
@@ -89,13 +89,8 @@ def create_segment_job(
     - Returning immediately avoids timeout errors
     """
     try:
-        # Concurrency limit to prevent GPU memory exhaustion (BUG-006 fix)
         store = get_job_store()
-            raise HTTPException(
-                status_code=503,
-                detail="Server busy: too many active jobs. Please try again later.",
-            )
+        settings = get_settings()
 
         # Validate case_id exists before creating job
         valid_cases = list_case_ids()
@@ -109,8 +104,15 @@ def create_segment_job(
         job_id = uuid.uuid4().hex
         backend_url = get_backend_base_url(request)
 
+        # Atomic concurrency limit + job creation (prevents TOCTOU race)
+        job = store.create_job_if_under_limit(
+            job_id, body.case_id, body.fast_mode, settings.max_concurrent_jobs
+        )
+        if job is None:
+            raise HTTPException(
+                status_code=503,
+                detail="Server busy: too many active jobs. Please try again later.",
+            )
 
         # Queue background task
         background_tasks.add_task(
@@ -229,10 +231,12 @@ def run_segmentation_job(
     # Run the pipeline
     store.update_progress(job_id, 30, "Running DeepISLES inference...")
 
+    # Note: gpu and timeout default to Settings values via pipeline
     result = run_pipeline_on_case(
         case_id,
         output_dir=output_dir,
         fast=fast_mode,
+        # gpu, timeout use Settings defaults
         compute_dice=True,
         cleanup_staging=True,
     )
|
@@ -154,23 +154,22 @@ class HuggingFaceDatasetWrapper:
|
|
| 154 |
self._temp_dir = None
|
| 155 |
|
| 156 |
|
| 157 |
-
# Default HuggingFace dataset ID
|
| 158 |
-
DEFAULT_HF_DATASET = "hugging-science/isles24-stroke"
|
| 159 |
-
|
| 160 |
-
|
| 161 |
def load_isles_dataset(
|
| 162 |
source: str | Path | None = None,
|
| 163 |
*,
|
| 164 |
local_mode: bool | None = None,
|
|
|
|
| 165 |
) -> Dataset:
|
| 166 |
"""
|
| 167 |
Load ISLES24 dataset from local directory or HuggingFace Hub.
|
| 168 |
|
| 169 |
Args:
|
| 170 |
source: Local directory path or HuggingFace dataset ID.
|
| 171 |
-
If None, uses
|
| 172 |
local_mode: If True, treat source as local directory.
|
| 173 |
If None, auto-detect based on source type.
|
|
|
|
|
|
|
| 174 |
|
| 175 |
Returns:
|
| 176 |
Dataset-like object providing case access. Use as context manager
|
|
@@ -184,8 +183,8 @@ def load_isles_dataset(
|
|
| 184 |
# Load from local directory
|
| 185 |
ds = load_isles_dataset("data/isles24", local_mode=True)
|
| 186 |
|
| 187 |
-
# Load specific HuggingFace dataset
|
| 188 |
-
ds = load_isles_dataset("
|
| 189 |
"""
|
| 190 |
# Auto-detect mode if not specified
|
| 191 |
if local_mode is None:
|
|
@@ -210,12 +209,19 @@ def load_isles_dataset(
|
|
| 210 |
# HuggingFace mode
|
| 211 |
from datasets import load_dataset
|
| 212 |
|
| 213 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 214 |
|
| 215 |
# Load dataset, selecting only necessary columns to minimize decoding overhead
|
| 216 |
# We rely on neuroimaging-go-brrrr's Nifti feature for lazy loading if configured,
|
| 217 |
# but select_columns ensures we don't touch other modalities.
|
| 218 |
-
|
|
|
|
| 219 |
ds = ds.select_columns(["subject_id", "dwi", "adc", "lesion_mask"])
|
| 220 |
|
| 221 |
return HuggingFaceDatasetWrapper(ds, dataset_id)
|
|
|
|
| 154 |
self._temp_dir = None
|
| 155 |
|
| 156 |
|
|
|
|
|
|
|
|
|
|
|
|
|
| 157 |
def load_isles_dataset(
|
| 158 |
source: str | Path | None = None,
|
| 159 |
*,
|
| 160 |
local_mode: bool | None = None,
|
| 161 |
+
token: str | None = None,
|
| 162 |
) -> Dataset:
|
| 163 |
"""
|
| 164 |
Load ISLES24 dataset from local directory or HuggingFace Hub.
|
| 165 |
|
| 166 |
Args:
|
| 167 |
source: Local directory path or HuggingFace dataset ID.
|
| 168 |
+
If None, uses Settings.hf_dataset_id from config.
|
| 169 |
local_mode: If True, treat source as local directory.
|
| 170 |
If None, auto-detect based on source type.
|
| 171 |
+
token: HuggingFace token for private/gated datasets.
|
| 172 |
+
If None, uses Settings.hf_token from config.
|
| 173 |
|
| 174 |
Returns:
|
| 175 |
Dataset-like object providing case access. Use as context manager
|
|
|
|
| 183 |
# Load from local directory
|
| 184 |
ds = load_isles_dataset("data/isles24", local_mode=True)
|
| 185 |
|
| 186 |
+
# Load specific HuggingFace dataset with token
|
| 187 |
+
ds = load_isles_dataset("org/private-dataset", token="hf_xxx")
|
| 188 |
"""
|
| 189 |
# Auto-detect mode if not specified
|
| 190 |
if local_mode is None:
|
|
|
|
| 209 |
# HuggingFace mode
|
| 210 |
from datasets import load_dataset
|
| 211 |
|
| 212 |
+
from stroke_deepisles_demo.core.config import get_settings
|
| 213 |
+
|
| 214 |
+
settings = get_settings()
|
| 215 |
+
|
| 216 |
+
# Use settings defaults if not specified
|
| 217 |
+
dataset_id = str(source) if source else settings.hf_dataset_id
|
| 218 |
+
hf_token = token if token is not None else settings.hf_token
|
| 219 |
|
| 220 |
# Load dataset, selecting only necessary columns to minimize decoding overhead
|
| 221 |
# We rely on neuroimaging-go-brrrr's Nifti feature for lazy loading if configured,
|
| 222 |
# but select_columns ensures we don't touch other modalities.
|
| 223 |
+
# Token enables access to private/gated datasets
|
| 224 |
+
ds = load_dataset(dataset_id, split="train", token=hf_token)
|
| 225 |
ds = ds.select_columns(["subject_id", "dwi", "adc", "lesion_mask"])
|
| 226 |
|
| 227 |
return HuggingFaceDatasetWrapper(ds, dataset_id)
|
|
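One detail in the wiring above is worth noting: the token fallback is written `token if token is not None else settings.hf_token`, not `token or settings.hf_token`. A sketch of the difference (standalone names here are illustrative, not the project's):

```python
DEFAULT_TOKEN = "hf_default_from_settings"  # stand-in for Settings.hf_token

def resolve_or(token):
    return token or DEFAULT_TOKEN  # any falsy value falls through to the default

def resolve_is_none(token):
    return token if token is not None else DEFAULT_TOKEN  # only None falls through

# An explicit empty string (a caller deliberately opting out of auth) is
# preserved by the `is not None` form but silently replaced by the `or` form:
print(resolve_or(""))       # 'hf_default_from_settings'
print(resolve_is_none(""))  # ''
```

Using `None` as the only "use config" sentinel keeps explicit caller arguments, even falsy ones, authoritative.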
inference/__init__.py
@@ -1,7 +1,6 @@
 """Inference module for stroke-deepisles-demo."""
 
 from stroke_deepisles_demo.inference.deepisles import (
-    DEEPISLES_IMAGE,
     DeepISLESResult,
     run_deepisles_on_folder,
     validate_input_folder,
@@ -19,7 +18,7 @@ from stroke_deepisles_demo.inference.docker import (
 )
 
 __all__ = [
-    "DEEPISLES_IMAGE",
+    # Note: Docker image is now configurable via Settings.deepisles_docker_image
     "DeepISLESResult",
     "DirectInvocationResult",
     "DockerRunResult",
deepisles.py
@@ -30,10 +30,13 @@ if TYPE_CHECKING:
 
 logger = get_logger(__name__)
 
+# Expected input files for validation (named constants for explicit access)
+DWI_FILENAME = "dwi.nii.gz"
+ADC_FILENAME = "adc.nii.gz"
+FLAIR_FILENAME = "flair.nii.gz"
+# Lists preserved for consumers; internal code uses named constants
+EXPECTED_INPUT_FILES = [DWI_FILENAME, ADC_FILENAME]
+OPTIONAL_INPUT_FILES = [FLAIR_FILENAME]
 
 
 @dataclass(frozen=True)
@@ -58,15 +61,22 @@ def validate_input_folder(input_dir: Path) -> tuple[Path, Path, Path | None]:
     Raises:
         MissingInputError: If required files are missing
     """
+    # Build paths using named constants (explicit, not order-dependent)
+    dwi_path = input_dir / DWI_FILENAME
+    adc_path = input_dir / ADC_FILENAME
+    flair_path = input_dir / FLAIR_FILENAME
 
     if not dwi_path.exists():
-        raise MissingInputError(
+        raise MissingInputError(
+            f"Required file '{DWI_FILENAME}' not found in {input_dir}. "
+            f"Expected: {EXPECTED_INPUT_FILES}"
+        )
 
     if not adc_path.exists():
-        raise MissingInputError(
+        raise MissingInputError(
+            f"Required file '{ADC_FILENAME}' not found in {input_dir}. "
+            f"Expected: {EXPECTED_INPUT_FILES}"
+        )
 
     return dwi_path, adc_path, flair_path if flair_path.exists() else None
@@ -135,9 +145,14 @@ def _run_via_docker(
     Run DeepISLES via Docker container.
 
     This is the standard execution path for local development.
+    Uses Settings.deepisles_docker_image for the container image.
     """
     start_time = time.time()
 
+    # Get docker image from settings (allows override via env var)
+    settings = get_settings()
+    docker_image = settings.deepisles_docker_image
+
     # Check GPU if requested
     if gpu:
         ensure_gpu_available_if_requested(gpu)
@@ -163,11 +178,17 @@ def _run_via_docker(
         output_dir.resolve(): "/app/output",
     }
 
-    logger.info(
+    logger.info(
+        "Running DeepISLES via Docker: image=%s, input=%s, fast=%s, gpu=%s",
+        docker_image,
+        input_dir,
+        fast,
+        gpu,
+    )
 
     # Run the container
     docker_result = run_container(
-        DEEPISLES_IMAGE,
+        docker_image,
         command=command,
         volumes=volumes,
         gpu=gpu,
pipeline.py
@@ -59,8 +59,9 @@ def run_pipeline_on_case(
     *,
     dataset_id: str | None = None,
     output_dir: Path | None = None,
-    fast: bool =
-    gpu: bool =
+    fast: bool | None = None,
+    gpu: bool | None = None,
+    timeout: float | None = None,
     compute_dice: bool = True,
     cleanup_staging: bool = True,
 ) -> PipelineResult:
@@ -69,25 +70,35 @@ def run_pipeline_on_case(
 
     Args:
         case_id: Case identifier (string) or index (int)
-        dataset_id: HF dataset ID (default from
+        dataset_id: HF dataset ID (default from Settings.hf_dataset_id)
         output_dir: Directory for results (default: temp dir)
-        fast: Use SEALS-only mode (
-        gpu: Use GPU acceleration
+        fast: Use SEALS-only mode (default from Settings.deepisles_fast_mode)
+        gpu: Use GPU acceleration (default from Settings.deepisles_use_gpu)
+        timeout: Maximum inference time in seconds (default from Settings.deepisles_timeout_seconds)
         compute_dice: Compute Dice score if ground truth available
         cleanup_staging: Remove staging directory after inference
 
     Returns:
         PipelineResult with all paths and optional metrics
     """
+    from stroke_deepisles_demo.core.config import get_settings
+
+    settings = get_settings()
+
+    # Apply settings defaults if not specified
+    if fast is None:
+        fast = settings.deepisles_fast_mode
+    if gpu is None:
+        gpu = settings.deepisles_use_gpu
+    if timeout is None:
+        timeout = settings.deepisles_timeout_seconds
 
     start_time = time.time()
 
     # Use context manager to ensure HuggingFace temp files are cleaned up
     # This prevents unbounded disk usage from accumulating temp NIfTI files
+    # dataset_id is wired through to loader (defaults to Settings.hf_dataset_id)
+    with load_isles_dataset(dataset_id) as dataset:
         # Resolve ID if integer
         if isinstance(case_id, int):
             all_ids = dataset.list_case_ids()
@@ -150,6 +161,7 @@ def run_pipeline_on_case(
         output_dir=results_dir,
         fast=fast,
         gpu=gpu,
+        timeout=timeout,
     )
 
     # 4. Compute Metrics (using copied ground truth)
|