armandviolle committed · Commit 556e138 · verified · 1 Parent(s): 8e6e620

Upload README.md with huggingface_hub

Files changed (1): README.md +206 -8
---
license:
- cc-by-4.0
- etalab-2.0
language:
- fr
tags:
- medical
configs:
- config_name: default
  data_files:
  - split: train
    path: finetuning/train-*
- config_name: finetuning
  data_files:
  - split: train
    path: finetuning/*.parquet
- config_name: instruction-tuning
  data_files:
  - split: train
    path: instruction-tuning/*.parquet
dataset_info:
- config_name: finetuning
  features:
  - name: input
    dtype: string
  - name: source
    dtype: string
  - name: document_type
    dtype: string
  splits:
  - name: train
    num_examples: 891196
- config_name: instruction-tuning
  features:
  - name: input
    dtype: string
  # … (remaining features elided)
  splits:
  - name: train
    num_examples: 22390
---
# PARCOMED - PARTAGES Corpus of Open MEdical Documents

This document describes the first version of the **commercial** corpus.

## Overview

The availability of French biomedical data remains a major challenge for improving the multilingual capabilities of large language models (LLMs) in the medical domain.
We introduce and release the PARCOMED corpus, a collection of French biomedical texts compiled from a wide range of sources for commercial use.

While similar datasets have been released in the past couple of years (NACHOS from DrBERT, JARGON), ours is the result of closer scrutiny of the licensing terms of each source. The PARTAGES corpus is therefore fully compatible with research usage, and it is also distributed in a version compatible with commercial usage.
Here, we present the released commercial version of the corpus.

## Document types and data sources

The datasets selected for our corpus come from a variety of sources, which can be categorized as follows:

### Clinical
**FRASIMED**: Annotated corpus of synthetic clinical cases written in French. Available at https://zenodo.org/records/8355629. License CC-BY-4.0.
### Dialogue
**PXCORPUS**: French corpus of medical dialogues on prescriptions, transcribed and annotated. Available at https://doi.org/10.5281/zenodo.6482586. License CC-BY-4.0.
### Education
**CERIMES**: Index of digital pedagogical resources proposed by higher education institutions and research organizations in France. NACHOS versioning. Available at https://data.enseignementsup-recherche.gouv.fr/explore/dataset/fr_esr_ressources-pedagogiques/export/?flg=en-gb&refine.lom_lifecycle_contribute_entity_fn=CERIMES. License Etalab.
### Encyclopedic
**WIKIPEDIA**: Corpus extracted from the French Wikipedia, collected via the Python `wikipediaapi` package on medical, pharmaceutical, and biological categories. Licenses CC-BY-SA 3.0 and GNU Free Documentation License.
### Medical
**ECDC_TM**: Corpus of medical texts from the European Centre for Disease Prevention and Control (ECDC) for machine translation tasks. NACHOS versioning. Available at https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction. Free license.
### Medicinal
**EMEA_V3**: Corpus of multilingual medical documents from the European Medicines Agency (EMEA), 3rd version. NACHOS versioning. Available at https://huggingface.co/datasets/qanastek/EMEA-V3. License CC-BY-4.0.

**BDPM**: Public database of medicines. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-de-donnees-publique-des-medicaments-base-officielle/. License Etalab.
### Question Answering
**DEFT2021**: Corpus from the DEFT challenge covering three tasks: extraction of clinical profiles, evaluation of student responses, and existing ratings. Available at https://huggingface.co/datasets/DrBenchmark/DEFT2021. License CC-BY-4.0.

**FRENCHMEDMCQA** (INSTRUCT): Francophone corpus of questions in the medical domain with 5 response options (single or multiple choice) and their manual corrections. Available at https://huggingface.co/datasets/qanastek/frenchmedmcqa. License Apache 2.0.

**MEDIQAL** (INSTRUCT): French medical question answering dataset designed to evaluate the capabilities of language models in factual medical recall and clinical reasoning. Available at https://huggingface.co/datasets/ANR-MALADES/MediQAl. License CC-BY-4.0.
### Regulation
**QUALISCOPE**: Data on the quality of healthcare establishments in France, extracted from Scope Santé. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-sur-la-qualite-et-la-securite-des-soins-anciennement-scope-sante/. License Etalab.

**CNEDIMTS**: Dataset from a specialized commission of the HAS that evaluates individual medical devices as well as diagnostic, therapeutic, or assistive products (excluding medications), along with associated services. NACHOS versioning. Available at https://www.data.gouv.fr/datasets/evaluation-des-dispositifs-medicaux/. License Etalab.
### Scientific
**WMT16**: Biomedical variant of the WMT16 corpus built from PubMed scientific publications, containing multilingual data used for machine translation. Available at https://huggingface.co/datasets/qanastek/WMT-16-PubMed. License CC-BY-4.0.

**HAL**: Corpus extracted from the HAL platform, grouping French scientific publications in the biomedical domain. NACHOS versioning. Available via harvesting through the API at https://api.documentation-administrative.gouv.fr/oai. License Etalab.

**HAS**: Data from the French National Authority for Health (Haute Autorité de Santé). NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/textes-des-publications-de-la-has-7/. License Etalab.

**QUAERO**: Corpus of multilingual medical documents from MEDLINE titles and documents from the European Medicines Agency (EMEA-V3), used for training and evaluating medical natural language processing models. NACHOS versioning. Available at https://huggingface.co/datasets/DrBenchmark/QUAERO. License GNU Free Documentation License.

**ISTEX**: Corpus of scientific publications from the ISTEX platform, gathering French scientific literature. NACHOS versioning. Available at https://data.istex.fr/. License Etalab.

**MANTRA_GSC**: Dataset extracted from biomedical corpora (Medline abstract titles, pharmaceutical notices, biomedical patents), with independent concept annotation according to a subset of the UMLS. NACHOS versioning. Available at https://huggingface.co/datasets/bigbio/mantra_gsc. License CC-BY-4.0.

## Preprocessing steps

### Text cleaning

All the documents were preprocessed using a pipeline inspired by FlauBERT (Le et al., 2020), including Unicode conversion and normalization, removal of characters outside standard French encoding, collapsing of multiple spaces, and removal of URLs.

To this initial cleaning script, additional filtering steps were added because some documents included in the corpus lacked relevant content. Retained texts had to meet criteria such as a minimum word count of 5 (a higher threshold would have been too restrictive for dialogues).
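The cleaning steps described above can be sketched as follows. `clean_text` is a hypothetical helper, and the Latin-range character filter is only an assumption about how "characters outside standard French encoding" were defined:

```python
import re
import unicodedata

def clean_text(text, min_words=5):
    """Hypothetical helper mirroring the FlauBERT-inspired cleaning pipeline."""
    # Unicode conversion and normalization
    text = unicodedata.normalize("NFKC", text)
    # Removal of URLs
    text = re.sub(r"https?://\S+", "", text)
    # Removal of characters outside standard French encoding
    # (assumption: keep printable Latin characters up to U+024F)
    text = "".join(ch for ch in text if ch in "\n\t " or (ch.isprintable() and ord(ch) < 0x250))
    # Removal of multiple spaces
    text = re.sub(r"[ \t]+", " ", text).strip()
    # Minimum word count filter (5; higher would be too restrictive for dialogues)
    return text if len(text.split()) >= min_words else None
```

Documents failing the word-count criterion are dropped (the helper returns `None`).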

### De-duplication

To avoid overfitting on redundant samples in our dataset, we added a deduplication step during preprocessing. We used a standard method based on MinHash similarity, with a similarity threshold of 0.85 and the number of permutations set to 128.

This deduplication was applied during the transfer of the sourced datasets to the ready-to-use, unsourced corpus. Since some source corpora intersect, documents are compared across corpora rather than within a single source, which makes per-source granularity less relevant at this stage.

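The exact deduplication code is not reproduced here; the sketch below shows stdlib-only MinHash signatures over word 3-shingles. The shingling and hashing choices are assumptions, while the 0.85 threshold and 128 permutations match the description above:

```python
import hashlib

def minhash_signature(text, num_perm=128):
    """MinHash signature over word 3-shingles (stdlib-only sketch)."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + 3]) for i in range(max(1, len(words) - 2))}
    sig = []
    for seed in range(num_perm):
        # One "permutation" per seed, simulated by salting a stable hash
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles
        ))
    return sig

def minhash_similarity(a, b):
    """Estimated Jaccard similarity: fraction of matching signature slots."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def is_duplicate(a, b, threshold=0.85):
    """Flag a pair as duplicate when estimated similarity reaches the threshold."""
    return minhash_similarity(a, b) >= threshold
```

In practice, libraries such as datasketch pair this with locality-sensitive hashing to avoid comparing all document pairs.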
## Features Scheme

| Column Name | Data Type | Description |
|:--------------|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------|
| instruction | string | **instruction-tuning only**: system prompt for instruction-tuning samples. |
| input | string | input text, regardless of the adaptation method (finetuning or instruction-tuning). For instruction-tuning, this is the user prompt, i.e. the question. |
| output | string | **instruction-tuning only**: gold-standard output for supervised instruction-tuning. |
| source | string | name of the dataset the sample comes from. |
| document_type | string | typology of the document (e.g., Scientific, Encyclopedic, Clinical, Medicinal, Question Answering, Dialogue, Regulation). |
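For instruction-tuning rows, these fields map naturally onto a chat-style template. The sketch below is an assumption about downstream use, not part of the dataset itself:

```python
def to_chat_messages(sample):
    """Map one instruction-tuning sample to chat messages (template is an assumption)."""
    messages = []
    if sample.get("instruction"):
        messages.append({"role": "system", "content": sample["instruction"]})
    messages.append({"role": "user", "content": sample["input"]})
    if sample.get("output"):
        messages.append({"role": "assistant", "content": sample["output"]})
    return messages
```

Finetuning rows, which only carry `input`, `source`, and `document_type`, are used as plain continued-pretraining text instead.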

## Statistics

### Document-type granularity

**FINETUNING** data

| | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|:-------------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total | 891196 | 8.83648e+08 | 991.53 | 6768.64 | 5.50441e+09 | 6176.42 | 41398.6 |
| Scientific | 640257 | 8.49351e+08 | 1326.58 | 7931.16 | 5.27612e+09 | 8240.63 | 48468.1 |
| Medicinal | 233960 | 2.44849e+07 | 104.654 | 647.2 | 1.63167e+08 | 697.415 | 4332.35 |
| Wiki | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 |
| Education | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 |
| Clinical | 2048 | 1.3229e+06 | 645.946 | 333.903 | 8.73342e+06 | 4264.37 | 2207.73 |
| Question Answering | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 |
| Regulation | 1111 | 70081 | 63.0792 | 54.7356 | 478447 | 430.645 | 365.089 |
| Medical | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 |
| Dialogue | 1414 | 18372 | 12.9929 | 6.0802 | 103531 | 73.2185 | 33.7791 |

**INSTRUCTION-TUNING** data

| | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|:-------------------|----------:|------------:|-------------:|------------:|------------:|-------------:|------------:|
| Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |
| Question Answering | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |

### Source-wise granularity

**FINETUNING** data

| | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|:-----------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total | 891196 | 8.83648e+08 | 991.53 | 6768.64 | 5.50441e+09 | 6176.42 | 41398.6 |
| HAL | 26987 | 7.03474e+08 | 26067.1 | 26603.8 | 4.32567e+09 | 160287 | 160053 |
| HAS | 11334 | 9.61734e+07 | 8485.39 | 16098.9 | 6.20009e+08 | 54703.4 | 102858 |
| ISTEX | 12179 | 4.31384e+07 | 3542.03 | 2156.57 | 2.82624e+08 | 23205.9 | 14238.5 |
| BDPM | 11023 | 2.00358e+07 | 1817.63 | 2409.58 | 1.35081e+08 | 12254.5 | 16062.4 |
| WIKIPEDIA | 9957 | 6.53102e+06 | 655.923 | 1252.04 | 4.32721e+07 | 4345.89 | 8209.94 |
| WMT16 | 587562 | 6.49552e+06 | 11.055 | 5.40784 | 4.73973e+07 | 80.6677 | 37.5055 |
| EMEA_V3 | 222937 | 4.44909e+06 | 19.9567 | 15.5252 | 2.80864e+07 | 125.984 | 99.953 |
| CERIMES | 22 | 1.71519e+06 | 77963.1 | 47413.5 | 1.16235e+07 | 528341 | 321525 |
| FRASIMED | 2048 | 1.3229e+06 | 645.946 | 333.903 | 8.73342e+06 | 4264.37 | 2207.73 |
| DEFT2021 | 275 | 111792 | 406.516 | 264.436 | 626549 | 2278.36 | 1402.57 |
| QUAERO | 2083 | 66877 | 32.1061 | 161.208 | 394933 | 189.598 | 905.512 |
| CNEDIMTS | 813 | 58345 | 71.7651 | 60.599 | 398478 | 490.133 | 403.23 |
| ECDC_TM | 2152 | 42460 | 19.7305 | 13.3516 | 280626 | 130.402 | 92.0109 |
| PXCORPUS | 1414 | 18372 | 12.9929 | 6.0802 | 103531 | 73.2185 | 33.7791 |
| QUALISCOPE | 298 | 11736 | 39.3826 | 19.5879 | 79969 | 268.352 | 131.707 |
| MANTRA_GSC | 112 | 3085 | 27.5446 | 39.6518 | 22356 | 199.607 | 306.097 |

**INSTRUCTION-TUNING** data

| | nb_docs | nb_words | mean_words | std_words | nb_chars | mean_chars | std_chars |
|:--------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total | 22390 | 1.78385e+06 | 79.6716 | 59.3966 | 1.17989e+07 | 526.971 | 372.088 |
| MEDIQAL | 19907 | 1.6593e+06 | 83.3526 | 61.6255 | 1.09334e+07 | 549.225 | 386.325 |
| FRENCHMEDMCQA | 2483 | 124547 | 50.1599 | 19.6412 | 865475 | 348.56 | 126.799 |
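Statistics of this shape can be recomputed from the raw text with a few lines. Whether the authors used sample or population standard deviation is not stated; this sketch uses the sample version:

```python
import statistics

def corpus_stats(texts):
    """Recompute nb/mean/std word and character counts for a list of documents."""
    words = [len(t.split()) for t in texts]
    chars = [len(t) for t in texts]
    return {
        "nb_docs": len(texts),
        "nb_words": sum(words),
        "mean_words": statistics.mean(words),
        "std_words": statistics.stdev(words) if len(words) > 1 else 0.0,
        "nb_chars": sum(chars),
        "mean_chars": statistics.mean(chars),
        "std_chars": statistics.stdev(chars) if len(chars) > 1 else 0.0,
    }
```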

## File Organization

    PARTAGES/
    ├── finetuning/
    │   ├── dataset1_part1.parquet
    │   ├── dataset1_part2.parquet
    │   └── ...
    ├── instruction-tuning/
    │   ├── dataset2_part1.parquet
    │   ├── dataset2_part2.parquet
    │   └── ...
    └── README.md

## Usage

```python
from datasets import load_dataset

data = load_dataset(
    "LIMICS/PARTAGES",
    split="train",
    data_dir="finetuning",  # or "instruction-tuning"
    download_mode="force_redownload",
    verification_mode="no_checks",
)
```
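Because the finetuning split keeps the `source` and `document_type` columns, subcorpora can be selected after loading. A sketch with toy rows standing in for real samples:

```python
def filter_by_document_type(rows, wanted):
    """Keep only samples whose document_type matches (schema from the table above)."""
    return [r for r in rows if r["document_type"] == wanted]

# Toy rows for illustration; real rows come from load_dataset(...)
rows = [
    {"input": "…", "source": "HAL", "document_type": "Scientific"},
    {"input": "…", "source": "PXCORPUS", "document_type": "Dialogue"},
]
scientific = filter_by_document_type(rows, "Scientific")
```

With a loaded `datasets.Dataset`, the equivalent selection is `data.filter(lambda r: r["document_type"] == "Scientific")`.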