Model Description
MedMistral-CPT-SFT-7B is a French medical language model based on Mistral-7B-v0.1, adapted for medical domain applications through a combined approach of Continual Pre-Training (CPT) followed by Supervised Fine-Tuning (SFT).
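The model can be loaded with the standard `transformers` API. A minimal sketch follows; note that the repository id below is a placeholder, not the confirmed location of this model.

```python
# Minimal usage sketch. Assumptions: the repo id is a placeholder, and a GPU
# with enough memory for a 7B model in bf16 (~14 GB) is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/MedMistral-CPT-SFT-7B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Quels sont les facteurs de risque de l'hypertension artérielle ?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```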
Model Details
- Model Type: Causal Language Model
- Base Model: Mistral-7B-v0.1
- Language: French
- Domain: Medical/Healthcare
- License: Apache 2.0
- Paper: Adaptation des connaissances médicales pour les grands modèles de langue : Stratégies et analyse comparative
Training Details
Continual Pre-Training (CPT)
- Dataset: NACHOS corpus (opeN crAwled frenCh Healthcare cOrpuS)
- Size: 7.4 GB of French medical texts
- Word Count: Over 1 billion (1,088,867,950) words
- Sources: 24 French medical websites
- Training Duration: 2.8 epochs
- Hardware: 32 NVIDIA H100 80GB GPUs
- Training Time: 12 hours
- Optimizer: AdamW
- Learning Rate: 2e-5
- Weight Decay: 0.01
- Batch Size: 16, with gradient accumulation steps of 2 (see the training sketch after this list)
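For orientation, here is a minimal sketch of how the CPT hyperparameters above map onto the Hugging Face `Trainer` API. The one-line placeholder dataset stands in for the tokenized NACHOS corpus, and the 32-GPU launch (e.g. via `torchrun` or DeepSpeed) is omitted; neither detail is specified by this card.

```python
# Sketch of the CPT setup under the hyperparameters listed above.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tiny placeholder standing in for the 7.4 GB NACHOS corpus.
corpus = Dataset.from_dict(
    {"text": ["L'hypertension artérielle est une maladie chronique fréquente."]}
)
tokenized = corpus.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="medmistral-cpt",
    num_train_epochs=2.8,              # ~2.8 epochs, as reported above
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    weight_decay=0.01,
    optim="adamw_torch",               # AdamW
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```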
Supervised Fine-Tuning (SFT)
- Dataset: 30K French medical question-answer pairs, consisting of:
  - 10K native French medical questions
  - 10K medical questions translated from English resources
  - 10K questions generated from French medical texts
- Method: DoRA (Weight-Decomposed Low-Rank Adaptation)
- Training Duration: 10 epochs
- Hardware: 1 NVIDIA A100 80GB GPU
- Training Time: 75 hours
- Rank: 16
- Alpha: 16
- Learning Rate: 2e-5
- Batch Size: 4 (see the DoRA configuration sketch after this list)
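A minimal sketch of the DoRA configuration with the `peft` library (which exposes DoRA through `use_dora=True` in `LoraConfig`, available since peft 0.9) is shown below. The checkpoint path and the set of target modules are assumptions; the card does not specify which modules were adapted.

```python
# Sketch of the DoRA (Weight-Decomposed Low-Rank Adaptation) setup with the
# rank/alpha values listed above, using the PEFT library.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical local path to the CPT checkpoint produced in the previous stage.
model = AutoModelForCausalLM.from_pretrained("medmistral-cpt")

dora_config = LoraConfig(
    r=16,                    # rank
    lora_alpha=16,           # alpha
    use_dora=True,           # switches LoRA to DoRA
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed set
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, dora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```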
Computational Impact
- Total Training Time: 87 hours (12h CPT + 75h SFT)
- Carbon Emissions: 11.78 kgCO2e total (9.86 for CPT + 1.92 for SFT)
Ethical Considerations
- Medical Accuracy: This model is for research and educational purposes only. All outputs should be verified by qualified medical professionals.
- Bias: Training data may contain biases present in medical literature and online medical resources.
Citation
If you use this model, please cite:
```bibtex
@inproceedings{belmadani-etal-2025-adaptation,
title = "Adaptation des connaissances m{\'e}dicales pour les grands mod{\`e}les de langue : Strat{\'e}gies et analyse comparative",
author = "Belmadani, Ikram and
Favre, Benoit and
Dufour, Richard and
B{\'e}chet, Fr{\'e}d{\'e}ric and
Ramisch, Carlos",
editor = "Bechet, Fr{\'e}d{\'e}ric and
Chifu, Adrian-Gabriel and
Pinel-sauvagnat, Karen and
Favre, Benoit and
Maes, Eliot and
Nurbakova, Diana",
booktitle = "Actes des 32{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles (TALN), volume 1 : articles scientifiques originaux",
month = "6",
year = "2025",
address = "Marseille, France",
publisher = "ATALA {\textbackslash}{\textbackslash}{\&} ARIA",
url = "https://aclanthology.org/2025.jeptalnrecital-taln.3/",
pages = "50--72",
language = "fra",
abstract = "Cet article pr{\'e}sente une {\'e}tude sur l{'}adaptation des grands mod{\`e}les de langue (LLMs) {\`a} des domaines sp{\'e}cialis{\'e}s disposant de donn{\'e}es limit{\'e}es. Bien que certaines recherches remettent en question le pr{\'e}-entra{\^i}nement adaptatif (DAPT) dans le contexte m{\'e}dical en anglais, nous montrons que l{'}adaptation au domaine peut {\^e}tre efficace sous certaines conditions. En prenant comme exemple l{'}adaptation au domaine m{\'e}dical en fran{\c{c}}ais, nous comparons de mani{\`e}re syst{\'e}matique le pr{\'e}-entra{\^i}nement continu (CPT), l{'}affinage supervis{\'e} (SFT) et une approche combin{\'e}e (CPT suivi de SFT). Nos r{\'e}sultats indiquent que l{'}adaptation d{'}un mod{\`e}le g{\'e}n{\'e}raliste {\`a} de nouvelles donn{\'e}es dans le domaine m{\'e}dical offre des am{\'e}liorations notables (taux de r{\'e}ussite de 87{\%}), tandis que l{'}adaptation suppl{\'e}mentaire de mod{\`e}les d{\'e}j{\`a} familiaris{\'e}s avec ce domaine procure des b{\'e}n{\'e}fices limit{\'e}s. Bien que CPT+SFT offre les meilleures performances globales, SFT-seul pr{\'e}sente des r{\'e}sultats solides et requiert moins de ressources mat{\'e}rielles."
}
```
Contact
For questions about this model, please contact: ikram.belmadani@lis-lab.fr