# gemma-4-e2b-asha-it
A LoRA fine-tune of `google/gemma-4-e2b-it` specialized for ASHA-Saathi, an offline, voice-first AI co-pilot in Hindi and Marathi for India's ~1 million ASHA frontline community-health workers.
Submitted to the Gemma 4 Good Hackathon. Repo: github.com/ombhojane/asha-saathi. A demo APK and a 3-minute video are linked from the repo README.
## Intended use

Decision support for ASHA workers in the field: offline, on a low-end Android phone (≤4 GB RAM, Snapdragon 4 Gen / Dimensity 6020 class). Specifically:
- Maternal & child-health protocol Q&A (ANC/PNC, ORS, vaccinations, anemia, malnutrition)
- Native function calling for `dosage_calculator`, `vaccine_schedule`, `danger_sign_check`, and `nearest_phc_referral` (see the sketch below)
- Out-of-scope refusal (refers up when asked about cancer, antibiotics, surgical decisions, etc.)
- Danger-sign triage (IMNCI matrix)
Not for: primary clinical decision-making, replacing doctors, English-only deployments, languages outside Hindi/Marathi.
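The exact tool-call wire format is defined by the app's router and training data; the snippet below is a hypothetical illustration of the pattern, using the tool names above and the schema-in-prompt convention the evaluation section refers to. The `tool`/`args` field layout is an assumption, not the trained format.

```python
import json

# Hypothetical tool schema passed inline in the prompt (tool names from the
# list above; the field layout is illustrative, not the trained format).
TOOLS = [{
    "name": "dosage_calculator",
    "description": "Protocol-safe dose for a drug/ORS given weight and age.",
    "parameters": {"weight_kg": "number", "age_months": "number", "item": "string"},
}]

# A well-formed call the model should emit for a query like
# "8 किलो के बच्चे को ORS कितना दें?" ("How much ORS for an 8 kg child?").
raw = '{"tool": "dosage_calculator", "args": {"weight_kg": 8, "item": "ORS"}}'
call = json.loads(raw)
assert call["tool"] in {t["name"] for t in TOOLS}  # the eval's validity check
```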
## How it was trained
- Base: `google/gemma-4-e2b-it`
- Method: QLoRA (4-bit) via Unsloth, rank 64, alpha 64, all linear modules
- Dataset: `ombhojane/asha-instructions-hi-mr-v1` (5–8k examples; 60% protocol Q&A / 25% function-call / 10% refusal / 5% danger-sign)
- Hyperparams: lr 2e-4 cosine, warmup 5%, weight decay 0.01, 3 epochs, packing on, max_seq_length 2048, train_on_responses_only
- Compute: final run on a Colab A100 (~1 hr); dev loop on a MacBook Air M5 with MLX-LM
- Repro: see `train/unsloth_e2b_lora.py` plus the pinned `train/requirements-train.txt`; a hedged sketch of the recipe follows below
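For orientation, here is a minimal sketch of that recipe, assuming Unsloth's `FastLanguageModel` API and TRL's `SFTTrainer`; the authoritative, version-pinned script is `train/unsloth_e2b_lora.py`.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# 4-bit base weights for QLoRA.
model, tokenizer = FastLanguageModel.from_pretrained(
    "google/gemma-4-e2b-it",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Rank-64 / alpha-64 adapters on all linear modules, per the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("ombhojane/asha-instructions-hi-mr-v1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # `processing_class=` in newer TRL versions
    train_dataset=dataset,  # assumed pre-formatted with the Gemma chat template
    args=SFTConfig(
        learning_rate=2e-4,
        lr_scheduler_type="cosine",
        warmup_ratio=0.05,
        weight_decay=0.01,
        num_train_epochs=3,
        packing=True,
        max_seq_length=2048,
        output_dir="outputs",
    ),
)
# The real run also masks the loss to assistant turns only
# (Unsloth's train_on_responses_only helper), omitted here for brevity.
trainer.train()
```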
## Evaluation
Held-out gold sets (per-metric n in the table below), entirely outside the training corpus.
| Metric | n | Base E2B | E2B-ASHA | Δ |
|---|---|---|---|---|
| Protocol accuracy (Hindi) | 25 | 24.0% | 20.0% | -4.0 pp |
| Protocol accuracy (Marathi) | 25 | 16.0% | 12.0% | -4.0 pp |
| Function-call validity (tool schema in prompt) | 15 | 100.0% | 100.0% | 0.0 pp |
| Refusal precision | 20 | 85.0% | 90.0% | +5.0 pp |
Both models received the same Gemma-4-IT chat template plus an inline ASHA-Saathi system prompt. The numbers reflect a deliberately safety-tuned model: it defers to deterministic Tier-1 tools for dose, vaccine, and triage questions, and cleanly refuses out-of-scope clinical queries. The protocol-accuracy regression on a substring-match gold set is the trade-off for a model that hallucinates less.

See `eval/results_v1.md` for the latest run; on-device latency numbers will land in `eval/latency_v1.md` once measured on a target Android device.
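Protocol accuracy is scored by substring match against gold answers. A minimal sketch of that style of scorer, assuming a JSONL gold file with hypothetical `question`/`gold` fields (the actual harness lives under `eval/` in the repo):

```python
import json

def substring_match_accuracy(gold_path: str, generate) -> float:
    """Fraction of rows whose gold answer appears verbatim in the generation."""
    with open(gold_path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]
    hits = sum(1 for r in rows if r["gold"] in generate(r["question"]))
    return hits / len(rows)
```

Note that substring matching penalizes correct paraphrases, which is one reason a safety-tuned model that defers to tools can score lower on this metric.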
## How to use

### Transformers (server / desktop)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ombhojane/gemma-4-e2b-asha-it")
mdl = AutoModelForCausalLM.from_pretrained(
    "ombhojane/gemma-4-e2b-asha-it",
    torch_dtype="bfloat16",
    device_map="auto",
)

# "How much ORS should an 8 kg child be given?"
msgs = [{"role": "user", "content": "8 किलो के बच्चे को ORS कितना दें?"}]
ids = tok.apply_chat_template(msgs, return_tensors="pt", add_generation_prompt=True).to(mdl.device)
out = mdl.generate(ids, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```
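The function-call eval supplies the tool schema inline in the prompt. A hypothetical sketch of that pattern, reusing `tok` and `mdl` from the block above (the prompt wording and JSON convention here are assumptions; the app's actual router prompt ships in the APK):

```python
import json

schema = '[{"name": "vaccine_schedule", "parameters": {"age_months": "number"}}]'
prompt = (
    'Tools (answer with JSON {"tool": ..., "args": {...}} when one applies): '
    + schema
    # "Which vaccine is next for a 9-month-old child?"
    + "\n\n9 महीने के बच्चे का अगला टीका कौन सा है?"
)
msgs = [{"role": "user", "content": prompt}]
ids = tok.apply_chat_template(msgs, return_tensors="pt", add_generation_prompt=True).to(mdl.device)
out = mdl.generate(ids, max_new_tokens=128, do_sample=False)
reply = tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
call = json.loads(reply)  # the eval's validity check: output must parse as a tool call
```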
### Ollama (local CPU/GPU)
```bash
ollama pull ombhojane/gemma-4-e2b-asha-it
# "Which vaccine is next for a 9-month-old child?"
ollama run ombhojane/gemma-4-e2b-asha-it "9 महीने के बच्चे का अगला टीका कौन सा है?"
```
### On Android (the intended deployment)

Use the ASHA-Saathi APK, which bundles the model, MediaPipe LLM Inference, the Tier-0 router, and the Dart tool implementations.
## Limitations & risks

- Synthesis-derived training data. Despite 100% manual review of the refusal and danger-sign slices, residual hallucinations are possible. We recommend deploying with the deterministic Tier-1 tools handling all dosage/schedule answers, never relying on the LLM alone for those (see the routing sketch after this list).
- Hindi/Marathi only. Generalization to other Indic languages is untested.
- Not safety-certified. Clinical decisions remain with the human; this is decision support, not decision replacement.
- Inherits Gemma 4 base limitations: hallucination, prompt-injection susceptibility, etc.
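A minimal sketch of the Tier-1-first routing recommended above. Every name here is hypothetical; the production Tier-0 router is Dart code bundled in the APK.

```python
REFUSAL_TEXT = "This is outside ASHA scope; please refer to the PHC doctor."  # hypothetical canned referral

def classify_intent(query: str) -> str:
    """Toy keyword router standing in for the real Tier-0 classifier."""
    q = query.lower()
    if "ors" in q or "dose" in q or "खुराक" in query:  # "khuraak" = dose
        return "dosage_calculator"
    if "cancer" in q or "antibiotic" in q:
        return "out_of_scope"
    return "qa"

def answer(query: str, llm, tier1_tools: dict) -> str:
    intent = classify_intent(query)
    if intent in tier1_tools:
        # Dosage/schedule/triage answers come from deterministic tools,
        # never from free-form LLM generation.
        return tier1_tools[intent](query)
    if intent == "out_of_scope":
        return REFUSAL_TEXT
    return llm(query)  # protocol Q&A falls through to the fine-tuned model
```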
## License
Inherits Gemma's license terms (see Gemma usage policy).
## Citation
```bibtex
@misc{gemma_4_e2b_asha_it_2026,
  author = {Bhojane, Om},
  title  = {gemma-4-e2b-asha-it: an offline ASHA co-pilot},
  year   = {2026},
  url    = {https://huggingface.co/ombhojane/gemma-4-e2b-asha-it}
}
```
## Acknowledgements
Built on Gemma 4 by Google DeepMind. Trained with Unsloth. Deployed via MediaPipe LLM Inference / LiteRT. Submitted to the Gemma 4 Good Hackathon.