# G-Health
G-Health is a family of large language models for medical and preventive-health use cases. Built on Qwen3, the models are aligned on large-scale medical dialogue data and further adapted for health checkup report interpretation.
## Model family
- G-Health-14B-Base / G-Health-32B-Base: Qwen3 models aligned to the medical domain.
- G-Health-14B-instruct / G-Health-32B-instruct: built on the corresponding Base models, then fine-tuned specifically for health checkup report interpretation (more structured report-to-action outputs).
## Training (brief)
### Base models (medical-domain alignment)
Starting from Qwen3, we apply a two-stage alignment:
- SFT (Supervised Fine-Tuning): 2,817,556 dialogue samples
- DPO (Direct Preference Optimization): 1,643,350 preference samples
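The DPO stage optimizes the standard Direct Preference Optimization objective over chosen/rejected response pairs. As a minimal sketch (not the training code used here), the per-sample loss can be written as follows, assuming the policy and reference log-probabilities of each response are already computed:

```python
import math


def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-sample DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    The margin is the log-probability gap between the chosen and rejected
    responses; beta controls how strongly the policy may deviate from the
    reference model. beta=0.1 is a common default, not a documented choice
    for G-Health.
    """
    policy_margin = logp_chosen - logp_rejected
    reference_margin = ref_logp_chosen - ref_logp_rejected
    logits = beta * (policy_margin - reference_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference, the loss sits at `log 2`; it decreases as the policy widens the chosen-over-rejected margin beyond the reference's.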
This produces medical-domain models with improved robustness and communication quality.
### Instruct models (health checkup specialization)
On top of the Base models, we perform additional fine-tuning on health checkup report data to improve:
- interpretation of lab values and imaging conclusions
- cautious risk signaling under uncertainty
- personalization awareness, tailoring explanations and recommendations to individual contexts