Apertus-8B-MeditronFO

Apertus-8B-MeditronFO is an 8B-parameter medical specialist LLM, produced by supervised fine-tuning of Apertus-8B-Instruct on the Fully Open Meditron Corpus.

This model is part of the Fully Open Meditron family — the first end-to-end auditable pipeline for clinical LLMs, with open weights, open data, open training recipe, and clinician-vetted corpus construction.

Apertus-8B-MeditronFO improves by +13.35 points over its base model on aggregate medical benchmarks, the largest gain in the MeditronFO family.

Performance

Accuracy (%) on standard medical benchmarks. See the paper for full evaluation details, confidence intervals, and open-ended Auto-MOOVE results.

| Benchmark | Apertus-8B-Instruct | Apertus-8B-MeditronFO | Δ |
|---|---:|---:|---:|
| MedMCQA | 45.80 | 48.74 | +2.94 |
| MedQA | 51.14 | 58.44 | +7.30 |
| PubMedQA | 37.60 | 75.60 | +38.00 |
| MedXpertQA | 11.71 | 13.67 | +1.96 |
| HealthBench Hard | 21.55 | 38.11 | +16.56 |
| Average | 33.56 | 46.91 | +13.35 |
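
For multiple-choice benchmarks such as MedMCQA and MedQA, accuracy is typically computed by scoring each answer option under the model and taking the argmax. The sketch below is an illustration of that general technique, not the evaluation harness used in the paper; the prompt format and the `score_option` helper are assumptions for demonstration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative MCQA scoring sketch; NOT the paper's evaluation code.
model_id = "EPFLiGHT/Apertus-8B-MeditronFO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

@torch.no_grad()
def score_option(question: str, option: str) -> float:
    """Summed log-probability of the option tokens, conditioned on the question."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids.to(model.device)
    option_ids = tokenizer(
        " " + option, add_special_tokens=False, return_tensors="pt"
    ).input_ids.to(model.device)
    input_ids = torch.cat([prompt_ids, option_ids], dim=-1)
    logprobs = model(input_ids).logits.log_softmax(dim=-1)
    start = prompt_ids.shape[-1]
    # Logits at position i predict token i + 1, so shift the slice by one.
    targets = input_ids[0, start:]
    return logprobs[0, start - 1:-1].gather(-1, targets.unsqueeze(-1)).sum().item()

question = "Deficiency of which vitamin causes scurvy? Answer:"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]
print(max(options, key=lambda o: score_option(question, o)))  # expected: Vitamin C
```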

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "EPFLiGHT/Apertus-8B-MeditronFO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in bfloat16 and shard automatically across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "A 62-year-old woman presents with a three-day history of dyspnea on exertion and a productive cough. What is the differential diagnosis?"},
]
# Format the conversation with the model's chat template before tokenizing.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding; decode only the newly generated tokens.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
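
For less deterministic outputs, sampling can be enabled on the same inputs. The temperature and top-p values below are illustrative defaults, not settings recommended by the model authors.

```python
# Sampling-based generation (illustrative settings, not official recommendations).
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```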

Training

  • Base model: Apertus-8B-Instruct
  • Corpus: Fully Open Meditron Corpus, 601k examples (150M tokens), aggregating eight public medical QA datasets with three clinician-vetted synthetic components: exam-style QA, guideline-grounded QA from 46,469 clinical practice guidelines, and open-ended clinical vignettes
  • Hardware: NVIDIA GH200 nodes
  • Framework: Axolotl with FSDP v2 / DeepSpeed ZeRO-3, Flash Attention 2, bf16 mixed precision
  • Decontamination: System-wide two-stage n-gram and token-alignment decontamination against all evaluation benchmarks; a minimal n-gram sketch follows this list
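
To make the n-gram stage concrete, here is a minimal sketch of overlap-based flagging. The 8-gram window, the whitespace tokenization, and the `is_contaminated` helper are assumptions for illustration; the token-alignment stage of the actual pipeline is not shown.

```python
# Minimal n-gram decontamination sketch (illustrative only; the released
# pipeline is two-stage and also performs token-alignment matching).
def ngrams(text: str, n: int = 8) -> set:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(example: str, benchmark_texts: list[str], n: int = 8) -> bool:
    # Flag a training example if it shares any n-gram with a benchmark item.
    grams = ngrams(example, n)
    return any(grams & ngrams(b, n) for b in benchmark_texts)

benchmark_items = ["A 62-year-old woman presents with a three-day history of dyspnea ..."]
print(is_contaminated("Some candidate training example ...", benchmark_items))
```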

Full hyperparameters are in Appendix I of the paper.

Intended Use

Research only. This model is intended to support research on medical LLMs, auditing of clinical AI systems, and reproducibility of the Fully Open Meditron pipeline.

It is not validated for clinical deployment, individual patient advice, autonomous decision-making, or any other deployment-adjacent use. Conduct independent domain-specific safety evaluation before any such use.

Citation

To be added.

License

Released under the Apache 2.0 license: permissive use, including commercial use, subject to attribution.
