---
license: mit
task_categories:
- question-answering
language:
- fr
tags:
- medical
- biology
size_categories:
- 10K<n<100K
---

# MedInjection-FR — Translated Subset 🌍

## Summary

The **Translated** component of **MedInjection-FR** adapts large-scale **English biomedical instruction datasets** into French through high-quality automatic translation.  
It is the most extensive part of the collection, comprising **416,401 instruction–response pairs**, and bridges English biomedical resources and French medical instruction tuning.

This subset was designed to ensure **broad domain coverage** while maintaining **semantic and linguistic fidelity**, enabling robust cross-lingual transfer and comparative evaluation between native, synthetic, and translated supervision sources.

## Motivation

The scarcity of large-scale French biomedical instruction data limits the capacity of LLMs to generalize across complex medical domains.  
To address this, the Translated subset leverages **trusted English benchmarks** spanning medicine, biology, psychology, and clinical knowledge, systematically translating them into French while preserving the structure of each instruction–response pair.

This allows for rigorous experiments on **cross-lingual instruction adaptation**, complementing the native and synthetic subsets of MedInjection-FR.

## Composition

To expand data coverage while maintaining linguistic fidelity, a large collection of **English biomedical instruction datasets** was translated into French using **[Gemini 2.0 Flash](https://blog.google/products/gemini/google-gemini-2/)** and **[GPT-4o-mini](https://openai.com/research/gpt-4o)**.  
The translated component comprises **416,401 instruction–response pairs** derived from several established English resources:

- **[MedQA](https://arxiv.org/abs/2009.13081)** – medical board–style multiple-choice QA.  
- **[PubMedQA](https://aclanthology.org/D19-1259/)** – factoid biomedical QA from PubMed abstracts.  
- **[MedMCQA](https://proceedings.mlr.press/v174/pal22a.html)** – clinical multiple-choice QA covering 21 medical specialties.  
- **[MMLU](https://arxiv.org/abs/2009.03300)** – six medical categories (anatomy, clinical knowledge, college biology, college medicine, medical genetics, and professional medicine).  
- **[K-QA](https://aclanthology.org/2024.bionlp-1.22/)** – open-ended biomedical question answering for reasoning over scientific text.  
- **[MMLU-PRO](https://arxiv.org/abs/2406.01574)** – professional-level QA across psychology, biology, and health domains.  
- **[MedXpertQA](https://arxiv.org/abs/2501.18362)** – clinical reasoning dataset focused on multi-hop, expert-level diagnostic questions.
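Since the subset is distributed as JSON, each pair can be consumed with standard tooling. The sketch below assumes a hypothetical record layout — field names such as `instruction`, `response`, and `source_dataset` are illustrative, not confirmed by this card:

```python
import json

# Hypothetical record for one translated instruction–response pair;
# the actual field names in the released files may differ.
SAMPLE = """{
  "instruction": "Quel est le rôle principal de l'hémoglobine ?",
  "response": "L'hémoglobine transporte l'oxygène des poumons vers les tissus.",
  "source_dataset": "MedQA"
}"""

def parse_pair(raw: str) -> dict:
    """Parse one JSON record and check the fields an
    instruction-tuning pipeline would need."""
    record = json.loads(raw)
    missing = {"instruction", "response"} - record.keys()
    if missing:
        raise ValueError(f"record missing fields: {missing}")
    return record

pair = parse_pair(SAMPLE)
```

Keeping provenance fields like the (assumed) `source_dataset` makes it easy to filter pairs by their original English benchmark.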

To assess translation quality, outputs were evaluated using **BLEU** and **COMET** on the **[WMT 2024 Biomedical Translation Task](https://github.com/biomedical-translation-corpora/corpora)** corpus.  
Both *Gemini 2.0 Flash* and *GPT-4o-mini* achieved results comparable to the **best WMT 2024 system**, indicating that the Translated subset maintains high semantic fidelity and linguistic quality suitable for French biomedical instruction tuning.
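For intuition, BLEU rewards n-gram overlap between a translation and its reference, scaled by a brevity penalty. A minimal single-reference, unsmoothed, sentence-level sketch follows; the WMT evaluation itself would use a standard corpus-level implementation such as sacreBLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: single reference, uniform
    weights, no smoothing (any zero n-gram precision gives 0)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

COMET, by contrast, is a learned neural metric, so it has no comparably short closed-form sketch.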

## Use

Intended for:
- Fine-tuning French biomedical LLMs with translated instruction–response pairs  
- Studying cross-lingual transfer and translation quality in instruction tuning  
- Evaluating the interplay between translation fidelity and domain adaptation performance
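For fine-tuning, each pair is typically rendered into a single training string. A minimal sketch, assuming `instruction`/`response` field names and an Alpaca-style template — both are assumptions; adapt them to the actual schema and your model's chat format:

```python
def to_training_text(pair: dict) -> str:
    """Render one instruction–response pair as a single training string.
    The template below is illustrative, not an official format."""
    return (
        "### Instruction:\n"
        f"{pair['instruction'].strip()}\n\n"
        "### Réponse:\n"
        f"{pair['response'].strip()}"
    )

example = to_training_text({
    "instruction": "Citez deux fonctions du foie.",
    "response": "Détoxification et synthèse des protéines plasmatiques.",
})
```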