| --- |
| license: |
| - cc-by-4.0 |
| - etalab-2.0 |
| language: |
| - fr |
| tags: |
| - medical |
| configs: |
| - config_name: default |
| data_files: |
| - split: train |
| path: finetuning/train-* |
| - config_name: finetuning |
| data_files: |
| - split: train |
| path: finetuning/*.parquet |
| - config_name: instruction-tuning |
| data_files: |
| - split: train |
| path: instruction-tuning/*.parquet |
| dataset_info: |
| - config_name: finetuning |
| features: |
| - name: input |
| dtype: string |
| - name: source |
| dtype: string |
| - name: document_type |
| dtype: string |
| splits: |
| - name: train |
| num_examples: 891196 |
| - config_name: instruction-tuning |
| features: |
| - name: input |
| dtype: string |
| - name: instruction |
| dtype: string |
| - name: output |
| dtype: string |
| - name: source |
| dtype: string |
| - name: document_type |
| dtype: string |
| splits: |
| - name: train |
| num_examples: 22390 |
| --- |
| # PARCOMED - PARTAGES Corpus of Open MEdical Documents |
|
|
| This document describes the first version of the **commercial** corpus. |
|
|
| ## Overview |
|
|
| The availability of French biomedical data remains a major challenge for improving the multilingual capabilities of large language models (LLMs) in the medical domain. |
| We introduce and release the PARCOMED corpus, a collection of French biomedical texts compiled from a wide range of sources for commercial use. |
|
|
While similar datasets have been released in the past couple of years (NACHOS from DrBERT, JARGON), ours is the result of closer scrutiny of the licensing terms of each source. The PARTAGES corpus is therefore fully compatible with research use and is also distributed in a version compatible with commercial use.
This card presents that commercial release.
|
|
|
|
| ## Document types and data sources |
|
|
The datasets selected for our corpus come from a variety of sources, which can be categorized as follows:
|
|
| ### Clinical |
| **FRASIMED**: Annotated corpus of synthetic clinical cases written in French. Available at https://zenodo.org/records/8355629. License CC-BY-4.0. |
| ### Dialogue |
**PXCORPUS**: French corpus of medical dialogues on prescriptions, transcribed and annotated. Available at https://doi.org/10.5281/zenodo.6482586. License CC-BY-4.0.
| ### Education |
**CERIMES**: Index of digital pedagogical resources offered by higher-education institutions and research organizations in France. NACHOS versioning. Available at https://data.enseignementsup-recherche.gouv.fr/explore/dataset/fr_esr_ressources-pedagogiques/export/?flg=en-gb&refine.lom_lifecycle_contribute_entity_fn=CERIMES. License Etalab.
| ### Encyclopedic |
**WIKIPEDIA**: Corpus extracted from the French Wikipedia, collected via the Python `wikipediaapi` library from medical, pharmaceutical, and biological categories. License CC-BY-SA 3.0 / GNU Free Documentation License.
| ### Medical |
| **ECDC_TM**: Corpus of medical texts from the European Centre for Disease Prevention and Control (ECDC) for machine translation tasks. NACHOS versioning. Available at https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction. Free License. |
| ### Medicinal |
**EMEA_V3**: Corpus of multilingual medical documents from the European Medicines Agency (EMEA), 3rd version. NACHOS versioning. Available at https://huggingface.co/datasets/qanastek/EMEA-V3. License CC-BY-4.0.

**BDPM**: Public database of medicines. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-de-donnees-publique-des-medicaments-base-officielle/. License Etalab.
| ### Question Answering |
**DEFT2021**: Corpus from the DEFT challenge covering three tasks: extraction of clinical profiles, evaluation of student responses, and existing ratings. Available at https://huggingface.co/datasets/DrBenchmark/DEFT2021. License CC-BY-4.0.

**FRENCHMEDMCQA** (INSTRUCT): Francophone corpus of medical-domain questions with 5 answer options (single or multiple choice) and their manual corrections. Available at https://huggingface.co/datasets/qanastek/frenchmedmcqa. License Apache 2.0.

**MEDIQAL** (INSTRUCT): French medical question-answering dataset designed to evaluate the capabilities of language models in factual medical recall and clinical reasoning. Available at https://huggingface.co/datasets/ANR-MALADES/MediQAl. License CC-BY-4.0.
| ### Regulation |
**QUALISCOPE**: Data on the quality of healthcare establishments in France, extracted from Scope Santé. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-sur-la-qualite-et-la-securite-des-soins-anciennement-scope-sante/. License Etalab.

**CNEDIMTS**: Dataset from a specialized commission of the HAS that evaluates individual medical devices and diagnostic, therapeutic, or assistive products (excluding medications), together with their associated services. NACHOS versioning. Available at https://www.data.gouv.fr/datasets/evaluation-des-dispositifs-medicaux/. License Etalab.
| ### Scientific |
**WMT16**: Biomedical variant of the WMT16 corpus built from PubMed scientific publications, containing multilingual data used for machine translation. Available at https://huggingface.co/datasets/qanastek/WMT-16-PubMed. License CC-BY-4.0.

**HAL**: Corpus extracted from the HAL platform, gathering French scientific publications in the biomedical domain. NACHOS versioning. Available via harvesting through the OAI protocol: https://api.documentation-administrative.gouv.fr/oai. License Etalab.

**HAS**: Data from the French National Authority for Health (Haute Autorité de Santé). NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/textes-des-publications-de-la-has-7/. License Etalab.

**QUAERO**: Corpus of multilingual medical documents from MEDLINE titles and documents from the European Medicines Agency (EMEA-V3), used for training and evaluating medical language processing models. NACHOS versioning. Available at https://huggingface.co/datasets/DrBenchmark/QUAERO. License GNU Free Documentation License.

**ISTEX**: Corpus of scientific publications from the ISTEX platform, gathering French scientific literature. NACHOS versioning. Available at https://data.istex.fr/. License Etalab.

**MANTRA_GSC**: Dataset extracted from biomedical corpora (Medline abstract titles, pharmaceutical notices, biomedical patents), with independent concept annotation according to a subset of the UMLS. NACHOS versioning. Available at https://huggingface.co/datasets/bigbio/mantra_gsc. License CC-BY-4.0.
|
|
|
|
| ## Preprocessing steps |
|
|
| ### Text cleaning |
|
|
| All the documents were preprocessed using a pipeline inspired by FlauBERT (Le et al., 2020), including Unicode conversion and normalization, removal of characters outside standard French encoding, removal of multiple spaces, and removal of URLs. |
|
|
Additional filtering steps were added to this initial cleaning script because some documents in the corpus lacked relevant content. Retained texts had to meet criteria such as a minimum word count (5; a higher threshold would have been too restrictive for dialogues).
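The steps above can be sketched as follows; the regular expressions, the character whitelist, and the exact filter order are illustrative assumptions, not the released script:

```python
import re
import unicodedata

MIN_WORDS = 5  # documents shorter than this are dropped

URL_RE = re.compile(r"https?://\S+|www\.\S+")
# Illustrative whitelist: keep word characters (incl. accented letters),
# whitespace, and common French punctuation; strip everything else.
NON_FRENCH_RE = re.compile(r"[^\w\s.,;:!?()'\"°%€«»\u00C0-\u017F/-]")

def clean_document(text):
    """FlauBERT-inspired cleaning: normalize Unicode, remove URLs and
    out-of-range characters, collapse whitespace; return None for
    documents below the minimum word count."""
    text = unicodedata.normalize("NFC", text)
    text = URL_RE.sub(" ", text)
    text = NON_FRENCH_RE.sub(" ", text)
    text = re.sub(r"\s+", " ", text).strip()
    if len(text.split()) < MIN_WORDS:
        return None
    return text
```

Documents that come back as `None` are simply excluded from the corpus.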
|
|
| ### De-duplication |
|
|
To avoid overfitting on redundant samples in our dataset, we added a deduplication step during preprocessing. We used a standard method based on MinHash similarity, with a similarity threshold of 0.85 and 128 permutations.
|
|
This deduplication was applied while transferring the sourced datasets into the ready-to-use, unsourced corpus: since some corpora overlap, documents are compared across corpora rather than within each source, making per-source granularity less relevant.
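The idea can be illustrated with a self-contained MinHash sketch. This reimplements the method with standard-library hashing; the shingle size and the greedy comparison loop are assumptions for illustration (in practice an LSH index would replace the quadratic scan):

```python
import hashlib

NUM_PERM = 128   # number of hash "permutations" per signature
THRESHOLD = 0.85  # estimated-Jaccard threshold above which a doc is a duplicate

def minhash_signature(text, num_perm=NUM_PERM):
    """MinHash signature over 3-word shingles, using one salted
    64-bit BLAKE2b hash per permutation."""
    tokens = text.lower().split()
    shingles = {" ".join(tokens[i:i + 3]) for i in range(max(1, len(tokens) - 2))}
    return [
        min(
            int.from_bytes(
                hashlib.blake2b(s.encode("utf-8"), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(), "big")
            for s in shingles
        )
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def deduplicate(docs, threshold=THRESHOLD):
    """Greedy inter-corpus deduplication: keep a document only if no
    already-kept document exceeds the similarity threshold."""
    kept, sigs = [], []
    for doc in docs:
        sig = minhash_signature(doc)
        if all(estimated_jaccard(sig, s) < threshold for s in sigs):
            kept.append(doc)
            sigs.append(sig)
    return kept
```

Near-identical documents produce near-identical signatures and are dropped, regardless of which source corpus they came from.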
|
|
|
|
| ## Features Scheme |
|
|
| Column Name | Data Type | Description |
|:--------------|:------------|:------------------------------------------------------------------------------------------------------------------------------------------------|
| instruction | string | **Instruction-tuning only.** System prompt for instruction-tuning samples. |
| input | string | Input text, regardless of the adaptation method (finetuning or instruction-tuning). For instruction-tuning, this is the user prompt or question. |
| output | string | **Instruction-tuning only.** Gold-standard output for supervised instruction-tuning. |
| source | string | Name of the source dataset for the sample. |
| document_type | string | Document typology (e.g., Scientific, Medicinal, Wiki, Education, Clinical, Question Answering, Regulation, Medical, Dialogue). |
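For instruction-tuning records, one common way to consume these fields is to map them onto a chat-style message list; a minimal sketch (the record shown is a fabricated illustration, not a real sample):

```python
def to_chat(example):
    """Map one instruction-tuning record (fields: instruction, input,
    output) onto a chat-style message list."""
    return [
        {"role": "system", "content": example["instruction"]},
        {"role": "user", "content": example["input"]},
        {"role": "assistant", "content": example["output"]},
    ]

# Hypothetical record following the features scheme above.
record = {
    "instruction": "Répondez à la question médicale suivante.",
    "input": "Quel est le principal symptôme de l'anémie ?",
    "output": "La fatigue.",
    "source": "MEDIQAL",
    "document_type": "Question Answering",
}
messages = to_chat(record)
```

The `source` and `document_type` fields are metadata and are typically left out of the prompt itself.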
## Statistics

### Document-type granularity

**FINETUNING** data

| | | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars | |
| |:-------------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:| |
| | Total | 891196 | 8.83648e+08 | 991.53 | 6768.64 | 5.50441e+09 | 6176.42 | 41398.6 | |
| | Scientific | 640257 | 8.49351e+08 | 1326.58 | 7931.16 | 5.27612e+09 | 8240.63 | 48468.1 | |
| | Medicinal | 233960 | 2.44849e+07 | 104.654 | 647.2 | 1.63167e+08 | 697.415 | 4332.35 | |
| | Wiki | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 | |
| | Education | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 | |
| | Clinical | 2048 | 1.3229e+06 | 645.946 | 333.903 | 8.73342e+06 | 4264.37 | 2207.73 | |
| | Question Answering | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 | |
| | Regulation | 1111 | 70081 | 63.0792 | 54.7356 | 478447 | 430.645 | 365.089 | |
| | Medical | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 | |
| | Dialogue | 1414 | 18372 | 12.9929 | 6.0802 | 103531 | 73.2185 | 33.7791 | |
|
|
| **INSTRUCTION-TUNING** data |
|
|
| | | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars | |
| |:-------------------|----------:|------------:|-------------:|------------:|------------:|-------------:|------------:| |
| | Question Answering | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 | |
| Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |

### Source-wise granularity

**FINETUNING** data

| | | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars | |
| |:-----------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:| |
| | Total | 891196 | 8.83648e+08 | 991.53 | 6768.64 | 5.50441e+09 | 6176.42 | 41398.6 | |
| | HAL | 26987 | 7.03474e+08 | 26067.1 | 26603.8 | 4.32567e+09 | 160287 | 160053 | |
| | HAS | 11334 | 9.61734e+07 | 8485.39 | 16098.9 | 6.20009e+08 | 54703.4 | 102858 | |
| | ISTEX | 12179 | 4.31384e+07 | 3542.03 | 2156.57 | 2.82624e+08 | 23205.9 | 14238.5 | |
| | BDPM | 11023 | 2.00358e+07 | 1817.63 | 2409.58 | 1.35081e+08 | 12254.5 | 16062.4 | |
| | WIKIPEDIA | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 | |
| | WMT16 | 587562 | 6.49552e+06 | 11.055 | 5.40784 | 4.73973e+07 | 80.6677 | 37.5055 | |
| | EMEA_V3 | 222937 | 4.44909e+06 | 19.9567 | 15.5252 | 2.80864e+07 | 125.984 | 99.953 | |
| | CERIMES | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 | |
| | FRASIMED | 2048 | 1.3229e+06 | 645.946 | 333.903 | 8.73342e+06 | 4264.37 | 2207.73 | |
| | DEFT2021 | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 | |
| | QUAERO | 2083 | 66877 | 32.1061 | 161.208 | 394933 | 189.598 | 905.512 | |
| | CNEDIMTS | 813 | 58345 | 71.7651 | 60.599 | 398478 | 490.133 | 403.23 | |
| | ECDC_TM | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 | |
| | PXCORPUS | 1414 | 18372 | 12.9929 | 6.0802 | 103531 | 73.2185 | 33.7791 | |
| | QUALISCOPE | 298 | 11736 | 39.3826 | 19.5879 | 79969 | 268.352 | 131.707 | |
| MANTRA_GSC | 112 | 3085 | 27.5446 | 39.6518 | 22356 | 199.607 | 306.097 |

**INSTRUCTION-TUNING** data

| | | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars | |
| |:--------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:| |
| | Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 | |
| | MEDIQAL | 19907 | 1.6593e+06 | 83.3526 | 61.6255 | 1.09334e+07 | 549.225 | 386.325 | |
| | FRENCHMEDMCQA | 2483 | 124547 | 50.1599 | 19.6412 | 865475 | 348.56 | 126.799 | |
|
|
| ## File Organization |
|
|
    PARTAGES/
    ├── finetuning/
    │   ├── dataset1_part1.parquet
    │   ├── dataset1_part2.parquet
    │   └── ...
    ├── instruction-tuning/
    │   ├── dataset2_part1.parquet
    │   ├── dataset2_part2.parquet
    │   └── ...
    └── README.md
|
|
|
|
| ## Usage |
|
|
| ```python |
from datasets import load_dataset

data = load_dataset(
    "LIMICS/PARTAGES",
    split="train",
    data_dir="finetuning",  # or "instruction-tuning"
    download_mode="force_redownload",
    verification_mode="no_checks",
)
| ``` |
|
|