---
pretty_name: MedInjection-FR — Native Subset
language:
- fr
license: mit
task_categories:
- text-generation
- question-answering
size_categories:
- 100K<n<300K
---

# MedInjection-FR — Native Subset 🇫🇷

## Summary

The **Native** component of **MedInjection-FR** comprises **biomedical instructions and question–answer pairs** written natively in French.
It forms the core of the dataset and reflects **authentic medical reasoning and linguistic formulations**, sourced from curated corpora and educational materials.

This subset serves as the **high-quality reference supervision** for instruction tuning of large language models (LLMs) in French biomedical contexts.

## Motivation

Instruction-tuning data written natively in French for the biomedical domain remain scarce. The Native subset bridges this gap, providing instruction–response data derived from authentic French medical resources.

## Composition

The native component combines curated datasets and web-scraped French medical resources to reflect authentic domain knowledge. It integrates the following resources:

- **S-Editions**~\cite{S-Editions}: 526 question–answer pairs from a French educational platform for medical students.
- **MediQAl**~\cite{bazoge2025mediqal}: 32,603 items from national medical examinations covering 41 medical specialties.
- **FrenchMedMCQA**~\cite{labrak2023frenchmedmcqa}: 3,105 pharmacy-focused multiple-choice questions.
- **mlabonne/medical-cases-fr**~\cite{mlabonne_medical_cases_fr} and **mlabonne/medical-mcqa-fr**~\cite{mlabonne_medical_mqca_fr}: 12,194 examples originating from French medical exam databases.
- **FrBMedQA**~\cite{kaddari2022frbmedqa}: 19,836 questions derived from French biomedical Wikipedia articles spanning eight UMLS semantic groups (chemicals and drugs, anatomy, physiology, disorders, phenomena, procedures, genes and molecular sequences, and devices). Originally closed-form, these questions were reformulated into multiple-choice format using *GPT-4o-mini*~\cite{hurst2024gpt} for standardization.
- **Parallel Biomedical Translation Corpora**~\cite{biomedical_translation_corpora}: Bilingual biomedical translation data from the WMT challenge repositories were reformulated into instruction–response pairs. Each instruction requests the **French translation** of an English biomedical passage, reframing translation as an instruction-following task aligned with the native portion.
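The translation-to-instruction reformulation described above can be sketched as follows. This is an illustrative sketch only: the field names (`instruction`, `response`), the prompt wording, and the sample sentence pair are assumptions, not the dataset's actual schema or contents.

```python
import json


def to_instruction_pair(en_text: str, fr_text: str) -> dict:
    """Reframe one English-French parallel segment as an
    instruction-response record (field names are illustrative)."""
    return {
        "instruction": (
            "Traduisez le passage biomédical suivant de l'anglais "
            "vers le français :\n" + en_text
        ),
        "response": fr_text,
    }


# Hypothetical parallel segment (not taken from the corpus).
record = to_instruction_pair(
    "Aspirin inhibits platelet aggregation.",
    "L'aspirine inhibe l'agrégation plaquettaire.",
)
print(json.dumps(record, ensure_ascii=False))
```

Each WMT segment pair would yield one such JSON record, so the translation portion plugs into the same supervised fine-tuning pipeline as the native QA items.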

## Use

This subset is used to fine-tune biomedical LLMs for:

- Question answering and clinical reasoning
- Instruction-following in French
- Evaluating cross-domain instruction generalization
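
Splitting the records by task before fine-tuning can be sketched as below. The `task` field and the record values are hypothetical assumptions for illustration; the released schema is not specified here.

```python
from collections import defaultdict

# Hypothetical records; the real dataset's fields may differ.
records = [
    {"task": "question-answering", "instruction": "...", "response": "..."},
    {"task": "translation", "instruction": "...", "response": "..."},
    {"task": "question-answering", "instruction": "...", "response": "..."},
]

# Group records by task so each fine-tuning run can target one use case.
by_task = defaultdict(list)
for rec in records:
    by_task[rec["task"]].append(rec)

# e.g. keep only QA items for a clinical-reasoning fine-tune.
qa_subset = by_task["question-answering"]
print(len(qa_subset))  # → 2
```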