MedPsy-4B

MedPsy-4B is a state-of-the-art, text-only medical and healthcare language model purpose-built for edge deployment. Built on top of Qwen3-4B-Thinking-2507 and post-trained with a multi-stage pipeline (supervised fine-tuning + reinforcement learning) on curated medical data, it surpasses models nearly 7x its size on medical benchmarks.

| | |
|---|---|
| Developed by | Tether AI Research |
| Model type | Text-only causal language model (decoder-only transformer) |
| Base model | Qwen3-4B-Thinking-2507 |
| Language | English |
| License | Apache 2.0 |
| Technical report | MedPsy Technical Report |
| Collection | MedPsy on Hugging Face |
| All MedPsy variants | MedPsy-4B · MedPsy-1.7B · MedPsy-4B-GGUF · MedPsy-1.7B-GGUF |

Key Highlights

  • Surpasses 27B models at 4B scale: Scores 70.54 on closed-ended medical benchmarks, outperforming MedGemma-27B-text-it (69.95) despite being nearly 7x smaller
  • Real-world clinical strength: Achieves 74.00 on HealthBench and 58.00 on HealthBench Hard, beating MedGemma-27B (65.00 / 42.00) by +9.00 and +16.00 points
  • 3.2x token efficiency: Produces accurate medical answers in ~909 tokens vs ~2,953 for Qwen3-4B Thinking, reducing latency and compute cost
  • Privacy-first: Enables fully on-device inference via the QVAC SDK and QVAC Fabric, so patient data never leaves the device


Benchmark Results

Closed-Ended Medical Benchmarks

| Benchmark | MedPsy-4B | MedGemma-27B-text-it | Qwen3-4B-Thinking-2507 | MedGemma-1.5-4B-it |
|---|---|---|---|---|
| Average | 70.54 | 69.95 | 63.10 | 51.20 |
| MMLU (Health) | 89.70 | 90.48 | 85.92 | 67.69 |
| AfriMedQA | 71.50 | 73.07 | 64.12 | 54.38 |
| MMLU-Pro Health | 70.45 | 72.94 | 67.73 | 47.31 |
| MedMCQA | 72.15 | 72.77 | 61.78 | 50.08 |
| MedQA (USMLE) | 84.39 | 83.29 | 70.91 | 64.39 |
| MedXpertQA | 30.61 | 25.18 | 16.69 | 15.80 |
| PubMedQA | 75.00 | 71.93 | 74.53 | 58.73 |

HealthBench

| Category | MedPsy-4B | MedGemma-27B-text-it | Qwen3-4B-Thinking-2507 | MedGemma-1.5-4B-it |
|---|---|---|---|---|
| Overall | 74.00 | 65.00 | 63.00 | 54.00 |
| Expertise-Tailored Communication | 79.33 | 73.00 | 71.00 | 62.67 |
| Response Depth | 63.67 | 61.33 | 58.00 | 48.67 |
| Context Seeking | 71.67 | 58.67 | 57.67 | 46.00 |
| Emergency Referrals | 81.67 | 73.00 | 74.00 | 64.00 |
| Global Health | 73.67 | 61.00 | 59.00 | 47.67 |
| Health Data Tasks | 60.67 | 56.67 | 54.67 | 44.67 |
| Responding Under Uncertainty | 76.33 | 66.33 | 64.33 | 58.33 |

HealthBench Hard

| Category | MedPsy-4B | MedGemma-27B-text-it | Qwen3-4B-Thinking-2507 | MedGemma-1.5-4B-it |
|---|---|---|---|---|
| Overall | 58.00 | 42.00 | 42.67 | 29.67 |
| Expertise-Tailored Communication | 55.33 | 44.67 | 45.00 | 31.67 |
| Response Depth | 47.67 | 38.67 | 38.67 | 29.00 |
| Context Seeking | 63.33 | 42.00 | 43.00 | 28.00 |
| Emergency Referrals | 62.33 | 39.67 | 47.33 | 29.00 |
| Global Health | 60.00 | 42.67 | 43.33 | 29.00 |
| Health Data Tasks | 46.67 | 39.33 | 39.67 | 23.67 |
| Responding Under Uncertainty | 61.00 | 42.67 | 42.00 | 35.00 |

* MMLU (Health): averaged accuracy across 6 sub-domains: anatomy, clinical_knowledge, college_biology, college_medicine, medical_genetics, professional_medicine.
* HealthBench evaluated using CompassJudger-2-32B-Instruct as judge.
* All results are averaged over 3 runs with generation parameters: temperature=0.6, top_k=20, top_p=0.95, max_output_tokens=16384.
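The evaluation settings listed above can be captured as a plain dict whose keys match the keyword arguments of `transformers`' `model.generate` (the dict name is illustrative; the values are the ones reported in the note above):

```python
# Generation parameters used for the benchmark runs reported above.
GENERATION_CONFIG = {
    "do_sample": True,        # enable temperature / top-k / top-p sampling
    "temperature": 0.6,
    "top_k": 20,
    "top_p": 0.95,
    "max_new_tokens": 16384,  # "max_output_tokens" in the note above
}

# Usable as: model.generate(**inputs, **GENERATION_CONFIG)
```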

Token Efficiency

Beyond raw accuracy, MedPsy-4B achieves a 3.2x reduction in average response length compared to its backbone model (Qwen3-4B-Thinking-2507). This means faster inference, lower compute costs, and reduced latency - critical for real-time clinical decision support on edge devices.

| | Qwen3-4B-Thinking-2507 | MedPsy-4B |
|---|---|---|
| Avg. Response Length (Tokens) | 2,953 | 909 |
| Δ Reduction | | 3.2x fewer tokens |
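The 3.2x figure follows directly from the two averages above:

```python
qwen_tokens = 2953   # Qwen3-4B-Thinking-2507 avg. response length
medpsy_tokens = 909  # MedPsy-4B avg. response length

reduction = qwen_tokens / medpsy_tokens
print(f"{reduction:.1f}x fewer tokens")  # -> 3.2x fewer tokens
```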

The chart below shows per-benchmark response lengths. The largest reductions appear on reasoning-intensive tasks (MedXpertQA, MedQA-USMLE, MMLU-Pro Health), where the base model's extended thinking produces substantially longer outputs without accuracy gains over our post-trained model.

[Figure: Average response length (tokens) per benchmark, 4B model class. Lower is better. MedPsy-4B consistently produces shorter responses than Qwen3-4B-Thinking-2507 while achieving higher overall accuracy.]

Model Details

| Parameter | Value |
|---|---|
| Architecture | Qwen3ForCausalLM |
| Parameters | 4B |
| Hidden size | 2,560 |
| FFN hidden size | 9,728 |
| Layers | 36 |
| Attention heads | 32 |
| KV groups (GQA) | 8 |
| Vocab size | 151,936 |
| Max position embeddings | 262,144 |
| Precision | bfloat16 |
| Position embedding | RoPE |
| Normalization | RMSNorm |
| Activation | SwiGLU |
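As a rough sketch of what the grouped-query attention (GQA) configuration above implies: with 32 query heads sharing 8 KV heads, each KV head serves 4 query heads, and the KV cache shrinks by the same factor relative to full multi-head attention at the same head dimension (a simplifying assumption; it ignores any other cache optimizations):

```python
num_attention_heads = 32  # from the table above
num_kv_heads = 8          # "KV groups (GQA)" in the table above

# Each KV head is shared by this many query heads.
queries_per_kv_head = num_attention_heads // num_kv_heads
print(queries_per_kv_head)  # -> 4

# KV-cache size relative to full multi-head attention (same head_dim).
kv_cache_fraction = num_kv_heads / num_attention_heads
print(kv_cache_fraction)  # -> 0.25, i.e. a 4x smaller KV cache
```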

Usage

Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "qvac/MedPsy-4B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "What are the common symptoms and first-line treatments for community-acquired pneumonia?"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
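Qwen3-Thinking-style models typically emit their reasoning before a closing `</think>` marker. If MedPsy-4B preserves that format after post-training (an assumption; check the tokenizer's chat template), the reasoning and the final answer can be separated with plain string handling:

```python
def split_thinking(text: str) -> tuple[str, str]:
    """Split a Qwen3-Thinking-style completion into (reasoning, answer).

    Assumes the decoded text may contain a `</think>` marker ending the
    reasoning segment; if absent, everything is treated as the answer.
    """
    marker = "</think>"
    if marker in text:
        reasoning, _, answer = text.partition(marker)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return "", text.strip()

reasoning, answer = split_thinking(
    "<think>CAP is most often pneumococcal...</think>Common symptoms include fever and cough."
)
print(answer)  # -> Common symptoms include fever and cough.
```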

Training

The model was post-trained through a multi-stage pipeline on the Qwen3-4B-Thinking-2507 backbone:

  1. SFT Stage 1 (Corpus 1): Broad medical adaptation on a large-scale synthetic corpus spanning biology, medicine, and health (including a new health domain not yet publicly released). The corpus was built from Genesis II–style medical seeds and open-source medical QA prompts (used purely as questions), with all reasoning targets freshly generated by Baichuan-M3-235B.
  2. SFT Stage 2 (Corpus 2): Reasoning specialization on a smaller, higher-value clinical QA corpus with teacher-generated chain-of-thought reasoning from Baichuan-M3-235B.
  3. RL Stage 1: Reinforcement learning (DAPO) on the easy/moderate subset of AlphaMedQA (Liu et al., 2025), annotated with the SFT checkpoint.
  4. RL Stage 2: Focused RL on a hard-enriched AlphaMedQA subset re-annotated with the best Stage 1 checkpoint, targeting persistent failure modes.
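The RL stages above train on multiple-choice medical QA, where correctness can be checked by rule rather than by a learned reward model. A minimal sketch of such a verifiable reward (the function name and the answer-extraction pattern are illustrative, not taken from the report):

```python
import re

def mcq_reward(completion: str, gold_choice: str) -> float:
    """Rule-based reward for multiple-choice QA: 1.0 if the final
    standalone answer letter matches the gold choice, else 0.0."""
    # Take the last standalone A-E letter in the completion as the answer.
    matches = re.findall(r"\b([A-E])\b", completion.upper())
    if not matches:
        return 0.0
    return 1.0 if matches[-1] == gold_choice.upper() else 0.0

print(mcq_reward("The best answer is C.", "c"))  # -> 1.0
print(mcq_reward("I would choose B.", "C"))      # -> 0.0
```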

For full methodology details, see the MedPsy Technical Report.

Use and Limitations

Intended Use

MedPsy-4B is an open language model intended as a starting point for developers and researchers building downstream healthcare applications involving medical text. Developers are expected to validate, adapt, and make meaningful modifications to the model for their specific use cases.

Appropriate use cases include:

  • Research on medical language understanding and reasoning
  • Building developer tools and prototypes for health-related applications
  • On-device medical information retrieval for privacy-sensitive environments

All such uses should be accompanied by appropriate disclaimers.

Limitations

This model is NOT a substitute for professional medical judgment, and its outputs are NOT a substitute for proper clinical diagnosis; always consult a certified physician. Despite strong benchmark performance, MedPsy-4B is a compact 4B-parameter language model that will make errors. Medical AI systems can produce outputs that appear confident and authoritative while being factually incorrect, incomplete, or clinically inappropriate.

Known limitations include:

  • Hallucinations: The model may generate plausible-sounding but incorrect medical information.
  • English only: The model was trained and evaluated primarily in English. Performance in other languages is not validated.
  • Text only: This model processes text inputs only. It cannot interpret medical images, lab results in non-text formats, or other modalities.
  • No real-time knowledge: The model's knowledge has a training data cutoff and does not reflect the latest medical guidelines, drug approvals, or clinical evidence.
  • Bias in training data: As with any model trained on synthetic and public medical data, biases in the source material may propagate to model outputs. Developers should validate performance across diverse patient populations, demographics, and clinical contexts.
  • Not designed for emergencies: This model should never be used as the sole decision-making tool in emergency or life-threatening situations.

Safety Recommendations

When integrating this model into any application:

  1. Always include visible disclaimers informing users that outputs are AI-generated and not a substitute for professional medical advice
  2. Do not use for direct clinical diagnosis or treatment without oversight by qualified healthcare professionals
  3. Monitor for harmful outputs and implement appropriate safety filters in production systems
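Recommendation 1 can be enforced at the application layer. A minimal sketch (the wrapper function and disclaimer wording are illustrative, not part of the model):

```python
DISCLAIMER = (
    "This response is AI-generated and is not a substitute for "
    "professional medical advice. Consult a qualified clinician."
)

def with_disclaimer(model_output: str) -> str:
    """Append a visible medical disclaimer to every model response."""
    return f"{model_output.strip()}\n\n---\n{DISCLAIMER}"

print(with_disclaimer("Typical first-line therapy is amoxicillin."))
```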

Ethics and Safety

The model was evaluated on medical safety dimensions through the HealthBench evaluation framework, which assesses Emergency Referrals, Responding Under Uncertainty, and Context Seeking, all critical safety dimensions for medical AI. However, no dedicated red-teaming or adversarial safety testing has been conducted on this model to date. Developers deploying this model in production should conduct their own safety evaluations appropriate to their use case.

Citation

```bibtex
@article{medpsy2026,
  title={MedPsy: State-of-the-Art Medical and Healthcare Language Models for Edge Devices},
  author={Vitabile, Davide and Buffa, Alexandro and Nambiar, Akshay and Nazir, Amril},
  year={2026},
  url={https://huggingface.co/blog/qvac/medpsy},
  institution={Tether AI Research}
}
```

Copyright

We will take appropriate actions in response to notices of copyright infringement. If you believe your work has been used or copied in a manner that infringes upon your intellectual property rights, please email data-apps@tether.io identifying and describing both the copyrighted work and alleged infringing content.

Licensing

This model, which was trained as described in the MedPsy Technical Report, is licensed by Tether Data, S.A. de C.V. under the Apache 2.0 license for research and educational purposes. As described above, this model is a version of Qwen3-4B-Thinking-2507, which is also under the Apache 2.0 license.

As described above, a subset of the Genesis I and Genesis II datasets was used by the Baichuan-M3-235B model (itself available under the Apache 2.0 license) to generate synthetic data for training this model. Both the Genesis I and Genesis II datasets are made available under the CC-BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0) license.
