---
title: MedGemma StructCore Demo
emoji: 🩺
colorFrom: blue
colorTo: indigo
sdk: gradio
python_version: "3.10"
app_file: app.py
pinned: false
---
# MedGemma StructCore Demo (HF Spaces Zero)
This directory contains deployment assets for Hugging Face Spaces Zero.
## What is included
- `app.py`: Space entrypoint for the StructCore demo UI.
- `requirements.txt`: minimal dependencies for this demo.
## Recommended deployment flow
Use the packaging script from the repository root:
```bash
bash scripts/prepare_hf_zero_challenge_space.sh
```
It creates a ready-to-push bundle in:
```text
.dist/hf_zero_challenge_demo_space/
```
Then push that bundle to your HF Space repository.
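The push itself is an ordinary git push to the Space repo. A minimal sketch of that step, assuming plain git (the Space repo URL below is a placeholder; `HF_TOKEN`-based auth is configured separately):

```python
# Sketch: publish the prepared bundle directory to a Space repo with plain git.
# The commands are built first so the sequence is easy to inspect or dry-run.
import subprocess

BUNDLE_DIR = ".dist/hf_zero_challenge_demo_space"


def git_push_commands(space_repo_url: str) -> list[list[str]]:
    """Build the git commands that would publish the bundle directory."""
    return [
        ["git", "-C", BUNDLE_DIR, "init"],
        ["git", "-C", BUNDLE_DIR, "add", "-A"],
        ["git", "-C", BUNDLE_DIR, "commit", "-m", "Deploy StructCore demo"],
        ["git", "-C", BUNDLE_DIR, "push", "--force", space_repo_url, "HEAD:main"],
    ]


def push(space_repo_url: str) -> None:
    for cmd in git_push_commands(space_repo_url):
        subprocess.run(cmd, check=True)
```

`--force` overwrites the Space history with the freshly generated bundle, which is usually what you want for a build artifact; drop it if you track the Space history.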
## Model repository (two-stage)
Target model repo:
- `https://huggingface.co/DocUA/medgemma-1.5-4b-it-gguf-q5-k-m-two-stage`
Upload/update Stage1 and Stage2 artifacts from this project repo:
```bash
python3 scripts/hf_upload_two_stage_models.py \
--repo-id DocUA/medgemma-1.5-4b-it-gguf-q5-k-m-two-stage \
--stage1-file /absolute/path/to/stage1.gguf \
--stage2-file /absolute/path/to/stage2.gguf \
--stage1-path-in-repo stage1/medgemma-stage1-q5_k_m.gguf \
--stage2-path-in-repo stage2/medgemma-stage2-q5_k_m.gguf
```
Requires `HF_TOKEN` with write access to the model repo.
## Space runtime configuration
Set these variables/secrets in the HF Space settings:
- `STRUCTCORE_BACKEND_MODE=pipeline` (or `mock` as safe default)
- `STRUCTCORE_STAGE1_URL=<your_openai_compat_stage1_url>`
- `STRUCTCORE_STAGE1_MODEL=<model_alias_from_stage1_/v1/models>`
- `STRUCTCORE_STAGE2_URL=<your_openai_compat_stage2_url>`
- `STRUCTCORE_STAGE2_MODEL=<model_alias_from_stage2_/v1/models>`
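A sketch of how the app might consume these settings, with the mock fallback mirroring the "safe default" above (the variable names come from this doc; the loading logic is an assumption about `app.py`, not its actual code):

```python
# Read the Space runtime configuration from the environment, falling back to
# "mock" mode when pipeline mode is requested without usable endpoints.
import os
from dataclasses import dataclass


@dataclass
class StageEndpoint:
    url: str
    model: str


def load_backend_config() -> tuple[str, StageEndpoint, StageEndpoint]:
    mode = os.environ.get("STRUCTCORE_BACKEND_MODE", "mock")
    stage1 = StageEndpoint(
        url=os.environ.get("STRUCTCORE_STAGE1_URL", ""),
        model=os.environ.get("STRUCTCORE_STAGE1_MODEL", ""),
    )
    stage2 = StageEndpoint(
        url=os.environ.get("STRUCTCORE_STAGE2_URL", ""),
        model=os.environ.get("STRUCTCORE_STAGE2_MODEL", ""),
    )
    if mode == "pipeline" and not (stage1.url and stage2.url):
        # Degrade to mock instead of failing at request time.
        mode = "mock"
    return mode, stage1, stage2
```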
Important:
- The Space itself does not automatically serve GGUF files from the model repo.
- The GGUF files in the HF model repo are the source-of-truth artifacts.
- Actual inference in `pipeline` mode requires reachable OpenAI-compatible endpoints running those artifacts.
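A minimal sketch of the OpenAI-compatible chat-completions request that `pipeline` mode implies. The endpoint URL and model alias below are placeholders; only the request shape follows the standard OpenAI API convention:

```python
# Build a POST request against an OpenAI-compatible /v1/chat/completions
# endpoint using only the standard library.
import json
import urllib.request


def chat_completion_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble the request for one stage of the two-stage pipeline."""
    payload = {
        "model": model,  # alias listed by the server's /v1/models
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request (`urllib.request.urlopen(...)`) only succeeds once the stage endpoints are reachable, which is exactly the requirement stated above.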