# MedGemma StructCore Demo App

This is the implementation-focused demo app for:

**MedGemma StructCore: Local-First Clinical Structuring Engine for EHR**

## Run

```bash
python3 apps/challenge_demo/app_challenge.py
```

Open: `http://localhost:7863`

## Deploy to Hugging Face Spaces Zero

Prepare a minimal Space bundle:

```bash
bash scripts/prepare_hf_zero_challenge_space.sh
```

Bundle output:

```text
.dist/hf_zero_challenge_demo_space/
```

Push that directory to your HF Space repository. The bundle includes:

- Space entrypoint `app.py`
- minimal `requirements.txt`
- demo code (`apps/challenge_demo`)
- parser/risk dependencies (`kvt_utils.py`, `Analysis_Readmission/readmission_risk_engine.py`, config JSONs)

Note: in an HF Space, the default mode should remain `mock`. `pipeline` mode requires external Stage1/Stage2 servers that are reachable from the Space.

### Two-stage model artifacts on HF

Model repo (source-of-truth artifacts):

- `https://huggingface.co/DocUA/medgemma-1.5-4b-it-gguf-q5-k-m-two-stage`

Upload/update artifacts:

```bash
python3 scripts/hf_upload_two_stage_models.py \
  --repo-id DocUA/medgemma-1.5-4b-it-gguf-q5-k-m-two-stage \
  --stage1-file /absolute/path/to/stage1.gguf \
  --stage2-file /absolute/path/to/stage2.gguf
```

The Space should be configured via environment variables:

- `STRUCTCORE_STAGE1_URL`, `STRUCTCORE_STAGE1_MODEL`
- `STRUCTCORE_STAGE2_URL`, `STRUCTCORE_STAGE2_MODEL`
- optional: `STRUCTCORE_BACKEND_MODE=mock|pipeline`
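
As a minimal sketch, this is how the demo might read that configuration at startup. The variable names match the README; the `mock` default and the validation step are assumptions, not the actual service code.

```python
import os

# Hypothetical startup configuration read; names follow the README,
# the default mode and the check below are illustrative assumptions.
MODE = os.environ.get("STRUCTCORE_BACKEND_MODE", "mock")
STAGE1_URL = os.environ.get("STRUCTCORE_STAGE1_URL")
STAGE1_MODEL = os.environ.get("STRUCTCORE_STAGE1_MODEL")
STAGE2_URL = os.environ.get("STRUCTCORE_STAGE2_URL")
STAGE2_MODEL = os.environ.get("STRUCTCORE_STAGE2_MODEL")

# pipeline mode is only usable when both stage servers are configured
if MODE == "pipeline" and not (STAGE1_URL and STAGE2_URL):
    raise RuntimeError(
        "pipeline mode requires STRUCTCORE_STAGE1_URL and STRUCTCORE_STAGE2_URL"
    )
```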

## Modes

- `mock`:
  - offline, deterministic extraction (fast, no model server required);
  - useful for demo recording and UI development.

- `pipeline`:
  - runs the real Stage1/Stage2 pipeline using the existing runners;
  - requires local OpenAI-compatible model servers.

If pipeline mode fails and fallback is enabled, the app automatically falls back to mock mode.
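
The fallback behavior can be sketched as a simple dispatch. The function and stub names below are illustrative, not the actual API of `services/structcore_service.py`.

```python
# Hypothetical sketch of the mock-fallback dispatch described above;
# run_mock / run_pipeline are stand-ins for the real service functions.
def run_mock(note: str) -> dict:
    # deterministic offline extraction stub
    return {"mode": "mock", "chars": len(note)}

def run_pipeline(note: str) -> dict:
    # in the real app this would call the Stage1/Stage2 servers;
    # here it always fails to demonstrate the fallback path
    raise ConnectionError("no model server in this sketch")

def run_extraction(note: str, mode: str = "pipeline", fallback: bool = True) -> dict:
    if mode == "mock":
        return run_mock(note)
    try:
        return run_pipeline(note)
    except Exception:
        if not fallback:
            raise
        # degrade gracefully to the deterministic mock backend
        return run_mock(note)
```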

## Architecture

- `app_challenge.py`: Gradio UI and orchestration glue.
- `services/structcore_service.py`: execution modes, normalization, risk scoring.
- `services/case_library.py`: synthetic demo cases.
- `services/evidence_service.py`: claim/evidence board data.
- `config/evidence_claims.json`: status-labeled claims.
- `data/synthetic_cases.json`: synthetic note samples.

## Notes

- This demo is extraction-first.
- Readmission risk is presented as a downstream use case.
- Public demos should use synthetic notes only.