MedInjection-FR committed on commit 67fe52f (verified) · 1 parent: 7f8203a

Update README.md

Files changed (1): README.md (+43 −33)

README.md CHANGED
@@ -11,13 +11,24 @@ size_categories:
 - 10K<n<100K
 ---
 
 # MedInjection-FR — Native Subset 🇫🇷
 
 ## Summary
 
 The **Native** component of **MedInjection-FR** comprises **French biomedical instructions and question–answer pairs** natively written in French.
-It forms the core of the dataset and reflects **authentic medical reasoning and linguistic formulations**, sourced from curated corpora such as *FrBMedQA*, *FrenchMedMCQA*, and *MediQAl*.
-
 This subset serves as the **high-quality reference supervision** for instruction tuning of large language models (LLMs) in French biomedical contexts.
 
 ## Motivation
 
@@ -27,34 +38,33 @@ The Native subset bridges this gap, providing instruction–response data derive
 
 ## Composition
 
-- **Languages:** French (`fr`)
-- **Domain:** Medicine, Clinical reasoning, Biology
-- **Task types:**
-  - Multiple-choice (single and multiple answers)
-  - Open-ended questions (diagnostic reasoning, treatment, translation-like reformulations)
-- **Structure:**
-  Train/
-    mcq/
-    mcqu/
-    oeq/
-  Validation/
-    mcq/
-    mcqu/
-    oeq/
-  Test/
-    mcq/
-    mcqu/
-    oeq/
-
-Each JSON record includes:
-```json
-{
-  "instruction": "...",
-  "context": "...",        // optional
-  "options": ["..."],      // for MCQ/MCQU
-  "answer": "...",
-  "origin": "FrBMedQA",
-  "type": "mcqu"
-}
-```
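The record schema shown above (from the previous version of the card) can be sanity-checked with a minimal sketch. Field names (`instruction`, `context`, `options`, `answer`, `origin`, `type`) come from the card; the validator itself and the sample record are illustrative assumptions.

```python
# Minimal validator for the record schema documented in the card.
# Field names come from the README; the rest is an illustrative sketch.

REQUIRED_FIELDS = {"instruction", "answer", "origin", "type"}
KNOWN_TYPES = {"mcq", "mcqu", "oeq"}  # splits listed in the card's structure

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if record.get("type") not in KNOWN_TYPES:
        problems.append(f"unknown type: {record.get('type')!r}")
    # options are only expected for multiple-choice records
    if record.get("type") in {"mcq", "mcqu"} and not record.get("options"):
        problems.append("MCQ/MCQU record has no options")
    return problems

sample = {
    "instruction": "Quelle est la cause la plus fréquente ... ?",
    "options": ["A", "B", "C", "D"],
    "answer": "B",
    "origin": "FrBMedQA",
    "type": "mcqu",
}
print(validate_record(sample))  # → []
```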
 
@@ -11,13 +11,24 @@ size_categories:
 - 10K<n<100K
 ---
 
+---
+pretty_name: MedInjection-FR — Native Subset
+language:
+- fr
+license: mit
+task_categories:
+- text-generation
+- question-answering
+size_categories:
+- 100K<n<300K
+---
+
 # MedInjection-FR — Native Subset 🇫🇷
 
 ## Summary
 
 The **Native** component of **MedInjection-FR** comprises **French biomedical instructions and question–answer pairs** natively written in French.
+It forms the core of the dataset and reflects **authentic medical reasoning and linguistic formulations**, sourced from curated corpora and educational materials.
 
 This subset serves as the **high-quality reference supervision** for instruction tuning of large language models (LLMs) in French biomedical contexts.
 
 ## Motivation
 
@@ -27,34 +38,33 @@ The Native subset bridges this gap, providing instruction–response data derive
 
 ## Composition
 
+The native component combines curated datasets and web-scraped French medical resources to reflect authentic domain knowledge. It integrates the following resources:
+
+- **S-Editions**~\cite{S-Editions}:
+  526 question–answer pairs from a French educational platform for medical students.
+
+- **MediQAl**~\cite{bazoge2025mediqal}:
+  32 603 items from national medical examinations covering 41 medical specialties.
+
+- **FrenchMedMCQA**~\cite{labrak2023frenchmedmcqa}:
+  3 105 pharmacy-focused multiple-choice questions.
+
+- **mlabonne/medical-cases-fr**~\cite{mlabonne_medical_cases_fr} and **mlabonne/medical-mcqa-fr**~\cite{mlabonne_medical_mqca_fr}:
+  12 194 examples originating from French medical exam databases.
+
+- **FrBMedQA**~\cite{kaddari2022frbmedqa}:
+  19 836 questions derived from French biomedical Wikipedia articles spanning eight UMLS semantic groups (chemicals and drugs, anatomy, physiology, disorders, phenomena, procedures, genes and molecular sequences, and devices).
+  Originally closed-form, these questions were reformulated into multiple-choice format using *GPT-4o-mini*~\cite{hurst2024gpt} for standardization.
+
+- **Parallel Biomedical Translation Corpora**~\cite{biomedical_translation_corpora}:
+  Bilingual biomedical translation data from the WMT challenge repositories were reformulated into instruction–response pairs.
+  Each instruction requests the **French translation** of an English biomedical passage, reframing translation as an instruction-following task aligned with the native portion.
+
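The reformulation of bilingual translation data into instruction–response pairs described above can be sketched as follows. The function name, prompt wording, and field names are assumptions for illustration, not the authors' exact template.

```python
# Hypothetical sketch of turning a WMT-style bilingual pair into an
# instruction–response record, as the card describes. The prompt wording
# and field names are assumptions, not the dataset's exact template.

def translation_pair_to_instruction(en: str, fr: str) -> dict:
    return {
        "instruction": (
            "Traduisez le passage biomédical suivant en français :\n" + en
        ),
        "answer": fr,
        "origin": "biomedical_translation_corpora",
        "type": "oeq",  # open-ended, like the card's other free-text tasks
    }

record = translation_pair_to_instruction(
    "The patient presented with acute renal failure.",
    "Le patient présentait une insuffisance rénale aiguë.",
)
print(record["answer"])
```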
+## Use
+
+This subset is used to fine-tune biomedical LLMs for:
+- Question answering and clinical reasoning
+- Instruction-following in French
+- Evaluating cross-domain instruction generalization
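For the fine-tuning uses listed above, records have to be flattened into prompt text. A minimal, hypothetical formatter (field names follow the card's record schema; the French template wording is an assumption) might look like:

```python
# Hypothetical prompt formatter for supervised fine-tuning. The template is
# illustrative; field names follow the record schema shown in the card.

def to_prompt(record: dict) -> str:
    parts = [f"Instruction : {record['instruction']}"]
    if record.get("context"):  # context is optional in the schema
        parts.append(f"Contexte : {record['context']}")
    if record.get("options"):  # only MCQ/MCQU records carry options
        letters = "ABCDEFGH"
        parts.append("Options :")
        parts.extend(f"{letters[i]}. {opt}" for i, opt in enumerate(record["options"]))
    parts.append("Réponse :")
    return "\n".join(parts)

example = {
    "instruction": "Quel organe produit l'insuline ?",
    "options": ["Le foie", "Le pancréas", "La rate"],
    "answer": "Le pancréas",
    "type": "mcq",
}
print(to_prompt(example))
```

Open-ended (`oeq`) records simply omit the options block, so the same formatter covers all three task types.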