# MedGemma-27B-Text-IT GPTQ (4-bit)
This is a 4-bit GPTQ quantization of google/medgemma-27b-text-it, the text-only variant of Google's MedGemma optimized for medical text reasoning.
## Created For

This quantized model was generated by Ben Barnard, Ph.D., and Oladimeji Adaramewa for MPART, the Medical Policy Applied Research Team at Innovate Springfield and the University of Illinois Springfield. MPART focuses on applied research in healthcare policy, Medicaid concerns, and health system funding analysis.
## Why this model?
The text-only MedGemma 27B scores higher on medical text benchmarks than the multimodal variant (89.8 vs. 87.0 on MedQA, 74.2 vs. 70.2 on MedMCQA) while being simpler to deploy. This quantized version cuts the memory requirement from roughly 55 GB to roughly 15 GB, so the model fits on a single GPU with 24 GB+ of VRAM. In our testing it also handled healthcare policy and finance questions well.
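The ~55 GB to ~15 GB figure follows from simple weight arithmetic. A back-of-envelope sketch (weights only; the actual runtime footprint also includes activations and the KV cache):

```python
# Back-of-envelope weight memory for a 27B-parameter model.
params = 27e9

bf16_gb = params * 2 / 1e9    # 16-bit weights: 2 bytes each -> 54.0 GB
int4_gb = params * 0.5 / 1e9  # 4-bit weights: 0.5 bytes each -> 13.5 GB

# Per-group quantization scales and zero-points (group size 128) add a
# few percent of overhead, which is why the quantized checkpoint lands
# closer to ~15 GB.
print(f"bf16: ~{bf16_gb:.1f} GB, 4-bit: ~{int4_gb:.1f} GB")
```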
## Quantization Details
- Method: GPTQ via llmcompressor (Neural Magic)
- Bits: 4 (W4A16 — 4-bit weights, 16-bit activations)
- Group size: 128
- Calibration data: 256 samples from C4 (English)
- Ignored layers: lm_head (kept at full precision)
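The recipe above can be reproduced roughly as follows with llmcompressor. This is a sketch, not the exact script used: module paths and argument names vary across llmcompressor releases, and `max_seq_length` is an assumption not stated in this card.

```python
# Sketch of the GPTQ quantization recipe (assumption: argument names
# and import paths differ between llmcompressor releases).
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

recipe = GPTQModifier(
    targets="Linear",    # quantize the Linear layers...
    scheme="W4A16",      # 4-bit weights, 16-bit activations
    ignore=["lm_head"],  # ...except the output head, kept at full precision
)

oneshot(
    model="google/medgemma-27b-text-it",
    dataset="c4",                   # English C4 calibration data
    recipe=recipe,
    num_calibration_samples=256,
    max_seq_length=2048,            # assumption: not stated in this card
    output_dir="medgemma-27b-text-it-GPTQ",
)
```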
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "bbarn4/medgemma-27b-text-it-GPTQ",
    device_map="auto",
    dtype="auto",  # requires a recent transformers release; older versions use torch_dtype
)
tokenizer = AutoTokenizer.from_pretrained("bbarn4/medgemma-27b-text-it-GPTQ")

prompt = "What are the key differences between Type 1 and Type 2 diabetes?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=500, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
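Because this is an instruction-tuned ("-it") checkpoint, wrapping the prompt in the model's chat template usually produces better answers than a raw string. A sketch, continuing from the snippet above:

```python
# Format the prompt with the tokenizer's chat template before generating
# (recommended for instruction-tuned checkpoints).
messages = [
    {"role": "user", "content": "What are the key differences between Type 1 and Type 2 diabetes?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=500, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```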
## Base Model Performance (pre-quantization)
These are the published scores for the full-precision base model:
| Benchmark | MedGemma 27B Text | Gemma 3 27B |
|---|---|---|
| MedQA (4-option) | 89.8 | 74.9 |
| MedMCQA | 74.2 | 62.6 |
| PubMedQA | 76.8 | 73.4 |
| MMLU Med | 87.0 | 83.3 |
| MedXpertQA | 25.7 | 15.7 |
## Intended Use
This model is intended as a starting point for developers and researchers building healthcare applications involving medical text. It is NOT intended for direct clinical use. All outputs require independent verification by qualified professionals.
## License
Use is governed by the Health AI Developer Foundations terms of use.