# Japhari/medgemma-stg-cds-lora
LoRA adapter for google/medgemma-4b-it, tuned for converting clinical narratives into structured payloads that support Tanzania STG-aligned clinical decision support (CDS) workflows.
## Model Details

### Model Description
- Developed by: Japhari
- Model type: PEFT LoRA adapter (not a standalone base model)
- Base model: google/medgemma-4b-it
- Primary task: Clinical information extraction and CDS-oriented JSON structuring
- Language(s): Primarily English clinical text
- Domain: Clinical medicine / decision support
## Uses

### Direct Use
This adapter is intended to be loaded on top of google/medgemma-4b-it and used in a controlled backend or Space application to:
- transform patient stories into structured request payloads for CDS APIs
- produce clinician-facing explanatory summaries of CDS outputs
- support prototyping and workflow acceleration in supervised settings
### Downstream Use
Typical integration pattern:
- Narrative input -> adapter-powered extraction
- Structured JSON -> deterministic rules engine (/cds/evaluate)
- Returned alerts/interventions -> adapter-powered explanation
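The three steps above can be sketched end-to-end in plain Python. This is an illustrative stub only: the function names, payload fields, and rule logic are assumptions for demonstration, with the adapter calls and the /cds/evaluate engine replaced by stand-ins.

```python
# Hypothetical sketch of the integration pattern; names and fields are
# illustrative, not the actual API of the CDS service or this adapter.

def extract_payload(narrative: str) -> dict:
    """Stand-in for adapter-powered extraction, which would return
    strict JSON parsed from the model's output."""
    # In practice this calls the LoRA-adapted model; here we stub it.
    return {"age_years": 4, "symptoms": ["fever", "cough"], "duration_days": 3}

def evaluate_cds(payload: dict) -> dict:
    """Stand-in for the deterministic rules engine (the /cds/evaluate
    step), which remains the source of truth for recommendations."""
    alerts = []
    if "fever" in payload.get("symptoms", []) and payload.get("duration_days", 0) >= 3:
        alerts.append({"id": "FEVER_3D", "severity": "moderate"})
    return {"alerts": alerts}

def explain(result: dict) -> str:
    """Stand-in for adapter-powered, clinician-facing explanation."""
    n = len(result["alerts"])
    return f"{n} alert(s) raised; see structured output for details."

payload = extract_payload("4-year-old with fever and cough for 3 days")
result = evaluate_cds(payload)
print(explain(result))  # → "1 alert(s) raised; see structured output for details."
```

The key design point is that only `evaluate_cds` decides what is recommended; the model is confined to the extraction and explanation edges of the pipeline.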
The deterministic CDS engine should remain the source of truth for recommendations.
### Out-of-Scope Use
- Autonomous diagnosis or treatment decisions without clinician oversight
- Emergency triage as sole decision authority
- Any use where hallucinated values could directly trigger unsupervised patient-facing actions
- Use in legal/compliance-critical pathways without validation and governance
## Bias, Risks, and Limitations
- May hallucinate fields, values, or confidence if prompts are underspecified
- Clinical language and practice patterns can vary by region and facility
- Performance depends strongly on input quality and prompt constraints
- Not validated as a medical device
### Recommendations
- Keep strict output schemas and validation in downstream systems
- Require clinician review for all high-risk outputs
- Log prompts/outputs and run periodic audit evaluation
- Use deterministic guideline logic as final arbiter
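A minimal sketch of the clinician-review gate recommended above; the severity levels and alert fields are assumptions for illustration, not part of the adapter's contract.

```python
# Illustrative safety gate: route high-risk (or malformed) alerts to
# mandatory clinician review. Severity values are assumed, not specified.
HIGH_RISK = {"high", "critical"}

def needs_clinician_review(alert: dict) -> bool:
    """Flag any high-risk alert, or any alert missing a severity field,
    for mandatory clinician review."""
    return "severity" not in alert or alert["severity"] in HIGH_RISK

alerts = [{"id": "A1", "severity": "low"}, {"id": "A2", "severity": "critical"}]
review_queue = [a for a in alerts if needs_clinician_review(a)]
print([a["id"] for a in review_queue])  # → ['A2']
```

Treating a missing severity as high-risk is a fail-closed choice: an output the gate cannot classify is escalated rather than passed through.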
## How to Get Started with the Model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "google/medgemma-4b-it"
adapter_id = "Japhari/medgemma-stg-cds-lora"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto" if torch.cuda.is_available() else None,
)

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```
Note: google/medgemma-4b-it is a gated model. Make sure your Hugging Face account has been granted access before loading it.
## Prompting Guidance
- Ask for strict JSON output with explicit key requirements
- Include only evidence present in the story
- Disallow invented lab values and unsupported claims
- Validate output with a schema before sending to downstream APIs
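One way to apply the last point is to validate model output against an explicit schema before it reaches any downstream API. The required keys below are hypothetical (this card does not publish a schema), and the check uses only the standard library:

```python
import json

# Hypothetical schema: required key names and types are illustrative
# assumptions, not a published contract for this adapter.
REQUIRED_KEYS = {"age_years": int, "symptoms": list, "duration_days": int}

def validate_output(raw: str) -> dict:
    """Parse model output as strict JSON and verify required keys and
    types before forwarding it to a downstream API."""
    data = json.loads(raw)  # raises ValueError on non-JSON output
    for key, expected in REQUIRED_KEYS.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], expected):
            raise ValueError(f"wrong type for {key}")
    return data

raw = '{"age_years": 4, "symptoms": ["fever"], "duration_days": 3}'
print(validate_output(raw))
```

Rejecting unparseable or incomplete output at this boundary keeps hallucinated or malformed fields from propagating into the rules engine.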
## Training Details
This adapter was fine-tuned for CDS-focused structured extraction and explanation tasks.
Training artifacts in this repository are PEFT adapter weights and metadata.
## Evaluation Notes
No formal benchmark results are published in this card yet.
For production use, evaluate on local clinical scenarios, monitor error types, and implement safety gates.
## Clinical Safety Notice
This model is for decision support and workflow assistance only.
It must not replace clinical judgment, local protocols, or emergency escalation procedures.
## Model Card Contact
- Hugging Face: Japhari
## Framework versions
- PEFT 0.13.0