armandviolle committed on Commit 6f07cfa · verified · 1 Parent(s): 0d6b382

Upload README.md with huggingface_hub

Files changed (1): README.md (+222 −8)
---
license: cc-by-nc-4.0
language:
- fr
tags:
- medical
configs:
- config_name: default
  data_files:
  - split: train
    path: finetuning/train-*
- config_name: finetuning
  data_files:
  - split: train
    path: finetuning/*.parquet
- config_name: instruction-tuning
  data_files:
  - split: train
    path: instruction-tuning/*.parquet
dataset_info:
- config_name: finetuning
  features:
  - name: input
    dtype: string
  - name: source
    dtype: string
  - name: document_type
    dtype: string
  splits:
  - name: train
    num_examples: 905342
- config_name: instruction-tuning
  features:
  - name: input
    dtype: string
  # … (intermediate feature lines collapsed in the diff view)
    dtype: string
  splits:
  - name: train
    num_examples: 22390
---
# PARCOMED - PARTAGES Corpus of Open MEdical Documents

This document describes the first version of the **research-only** corpus.

## Overview

The availability of French biomedical data remains a major challenge for improving the multilingual capabilities of large language models (LLMs) in the medical domain. We introduce and release the PARCOMED_research_only corpus, a collection of French biomedical texts compiled from a wide range of sources for research-only use.

While similar datasets have been released in the past couple of years (NACHOS from DrBERT, JARGON), ours is the result of closer scrutiny of the licensing terms of each source. As a result, the PARTAGES corpus is fully compatible with research usage and is also distributed in a version compatible with commercial usage. This page presents the research-only release.

## Document types and data sources

The datasets selected for our corpus come from a variety of sources, which can be categorized as follows:

### Clinical

**E3C**: E3C corpus of clinical cases in French, used for training and evaluating medical models. Free-for-research license.

**CAS**: Corpus built from clinical cases reported in the scientific literature published in French, a subset of which is annotated. NACHOS versioning. Visible at https://huggingface.co/datasets/bigbio/cas/tree/main and available upon request to the author. Research-only license.

**FRASIMED**: Annotated corpus of synthetic clinical cases written in French. Available at https://zenodo.org/records/8355629. License CC-BY-4.0.

**ESSAI**: The ESSAI dataset, containing annotations of medical texts in French. Not available online but obtainable upon request. Research-only license.

### Dialogue

**PXCORPUS**: French corpus of medical dialogues on prescriptions, transcribed and annotated. Available at https://doi.org/10.5281/zenodo.6482586. License CC-BY-4.0.

**MQC**: Annotated corpus of medical dialogues in French, simulating consultations between doctor and patient. Available at https://github.com/kleag/labforsims2-corpus. License CC-BY-NC-SA-4.0.

### Education

**CERIMES**: Index of digital pedagogical resources proposed by higher education institutions and research organizations in France. NACHOS versioning. Available at https://data.enseignementsup-recherche.gouv.fr/explore/dataset/fr_esr_ressources-pedagogiques/export/?flg=en-gb&refine.lom_lifecycle_contribute_entity_fn=CERIMES. License Etalab.

### Encyclopedic

**WIKIPEDIA**: Corpus extracted from the French Wikipedia, collected via the Python `wikipediaapi` package from medical, pharmaceutical and biological categories. Licenses CC-BY-SA 3.0 and GNU Free Documentation License.

### Medical

**ECDC_TM**: Corpus of medical texts from the European Centre for Disease Prevention and Control (ECDC) for machine translation tasks. NACHOS versioning. Available at https://joint-research-centre.ec.europa.eu/language-technology-resources/ecdc-translation-memory_en#Introduction. Free license.

### Medicinal

**EMEA_V3**: Corpus of multilingual medical documents from the European Medicines Agency (EMEA), 3rd version. NACHOS versioning. Available at https://huggingface.co/datasets/qanastek/EMEA-V3. License CC-BY-4.0.

**BDPM**: Public database of medicines. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-de-donnees-publique-des-medicaments-base-officielle/. License Etalab.

### Question Answering

**DEFT2021**: Corpus from the DEFT challenge covering three tasks: extraction of clinical profiles, evaluation of student responses, and existing ratings. Available at https://huggingface.co/datasets/DrBenchmark/DEFT2021. License CC-BY-4.0.

**FRENCHMEDMCQA** (INSTRUCT): Francophone corpus of questions in the medical domain with 5 response options (single or multiple choice) and their manual corrections. Available at https://huggingface.co/datasets/qanastek/frenchmedmcqa. License Apache 2.0.

**MEDIQAL** (INSTRUCT): French medical question answering dataset designed to evaluate the capabilities of language models in factual medical recall and clinical reasoning. Available at https://huggingface.co/datasets/ANR-MALADES/MediQAl. License CC-BY-4.0.

### Regulation

**QUALISCOPE**: Data on the quality of healthcare establishments in France, extracted from Scope Santé. NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/base-sur-la-qualite-et-la-securite-des-soins-anciennement-scope-sante/. License Etalab.

**CNEDIMTS**: Dataset from a specialized commission of the HAS that evaluates individual medical devices and diagnostic, therapeutic or assistive products (excluding medications), as well as associated services. NACHOS versioning. Available at https://www.data.gouv.fr/datasets/evaluation-des-dispositifs-medicaux/. License Etalab.

### Scientific

**WMT16**: Biomedical variant of the WMT16 corpus built from PubMed scientific publications, containing multilingual data used for machine translation. Available at https://huggingface.co/datasets/qanastek/WMT-16-PubMed. License CC-BY-4.0.

**HAL**: Corpus extracted from the HAL platform, grouping French scientific publications in the biomedical domain. NACHOS versioning. Available via OAI harvesting following the API protocol at https://api.documentation-administrative.gouv.fr/oai. License Etalab.

**HAS**: Data from the French High Authority for Health (HAS). NACHOS versioning. Available at https://www.data.gouv.fr/fr/datasets/textes-des-publications-de-la-has-7/. License Etalab.

**QUAERO**: Corpus of multilingual medical documents from MEDLINE titles and European Medicines Agency documents (EMEA-V3), used for training and evaluating medical natural language processing models. NACHOS versioning. Available at https://huggingface.co/datasets/DrBenchmark/QUAERO. License GNU Free Documentation License.

**WMT18_MEDLINE**: Corpus of biomedical texts from MEDLINE, used in the WMT18 biomedical machine translation challenge. NACHOS versioning. Available at https://www.statmt.org/wmt18/biomedical-translation-task.html. Licenses CC BY-NC-SA 3.0 and CC BY-NC-ND 4.0.

**ISTEX**: Corpus of scientific publications from the ISTEX platform, gathering French scientific literature. NACHOS versioning. Available at https://data.istex.fr/. License Etalab.

**CLEAR**: Corpus containing texts from 3 sources: encyclopedia, pharmaceutical notices and medical article abstracts. NACHOS versioning. Available at https://shs.hal.science/halshs-01968355. Research-only license.

**MANTRA_GSC**: Dataset extracted from biomedical corpora (MEDLINE abstract titles, pharmaceutical notices, biomedical patents), with independent concept annotation according to a subset of the UMLS. NACHOS versioning. Available at https://huggingface.co/datasets/bigbio/mantra_gsc. License CC-BY-4.0.

## Preprocessing steps

### Text cleaning

All the documents were preprocessed using a pipeline inspired by FlauBERT (Le et al., 2020), including Unicode conversion and normalization, removal of characters outside standard French encoding, collapsing of multiple spaces, and removal of URLs.

On top of this initial cleaning script, additional filtering steps were added because some documents included in the corpus lacked relevant content. Retained texts had to satisfy criteria such as a minimum word count of 5 (a higher threshold would have been too restrictive for dialogues).

### De-duplication

To avoid overfitting on redundant samples in our dataset, we added a deduplication step during preprocessing. We used a classic method based on MinHash similarity, with a similarity threshold of 0.85 and the number of permutations set to 128.

This deduplication was applied while transferring the sourced datasets into the ready-to-use, unsourced corpus. Since some source corpora intersect, documents were compared across corpora rather than within each source, making per-source granularity less relevant at this stage.

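A self-contained sketch of MinHash deduplication with the stated parameters (threshold 0.85, 128 permutations). This is our illustration, not the authors' script; a production run over ~900k documents would pair the signatures with locality-sensitive hashing instead of the quadratic comparison shown here.

```python
import hashlib

NUM_PERM = 128  # number of hash permutations, as in the corpus preprocessing

def minhash(text: str, num_perm: int = NUM_PERM) -> list[int]:
    """MinHash signature over the set of lowercased whitespace tokens."""
    tokens = set(text.lower().split())
    return [
        min(int.from_bytes(hashlib.md5(f"{i}:{t}".encode()).digest()[:8], "big")
            for t in tokens)
        for i in range(num_perm)
    ]

def similarity(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def deduplicate(texts: list[str], threshold: float = 0.85) -> list[str]:
    """Keep a text only if no previously kept text is >= threshold similar."""
    kept: list[tuple[str, list[int]]] = []
    for text in texts:
        sig = minhash(text)
        if all(similarity(sig, s) < threshold for _, s in kept):
            kept.append((text, sig))
    return [t for t, _ in kept]
```

Identical or near-identical documents yield (nearly) identical signatures and are discarded; unrelated documents share almost no signature slots.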
## Features Scheme

| Column Name   | Data Type | Description |
|:--------------|:----------|:------------|
| instruction   | string    | **instruction-tuning only:** system prompt for instruction-tuning samples. |
| input         | string    | input text, regardless of the adaptation method (finetuning or instruction-tuning). For instruction-tuning, this is the "user prompt" or "question". |
| output        | string    | **instruction-tuning only:** gold-standard output for supervised instruction-tuning. |
| source        | string    | name of the dataset the sample comes from. |
| document_type | string    | typology of the document (e.g., Scientific, Encyclopedic, Clinical, Medicinal, Question Answering, Dialogue, Regulation). |

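For illustration, a record of each configuration can be represented as a plain dict over these columns. The field values below are hypothetical examples written by us, not actual corpus samples.

```python
# Hypothetical records illustrating the schema above (not actual corpus data).
finetuning_record = {
    "input": "Texte biomédical en français.",
    "source": "HAL",
    "document_type": "Scientific",
}
instruction_record = {
    "instruction": "Répondez à la question médicale suivante.",
    "input": "Quels sont les effets indésirables les plus fréquents ?",
    "output": "Réponse de référence.",
    "source": "MEDIQAL",
    "document_type": "Question Answering",
}

def is_instruction_record(record: dict) -> bool:
    """Instruction-tuning records are the ones carrying instruction/output."""
    return "instruction" in record and "output" in record
```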
## Statistics

### Document-type granularity

**FINETUNING** data

| Document type      |   nb_docs |         nb_words |   mean_words |   std_words |         nb_chars |   mean_chars |   std_chars |
|:-------------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total              |    905342 |      9.00141e+08 |      994.255 |     6719.46 |      5.61243e+09 |      6199.24 |     41099.6 |
| Scientific         |    640313 |      8.49585e+08 |      1326.83 |     7932.88 |      5.27754e+09 |      8242.13 |     48478.3 |
| Medicinal          |    233960 |      2.44849e+07 |      104.654 |       647.2 |      1.63167e+08 |      697.415 |     4332.35 |
| Clinical           |     16100 |      1.75665e+07 |      1091.08 |     1290.35 |      1.15255e+08 |      7158.72 |      8430.4 |
| Encyclopedic       |      9957 |      6.53102e+06 |      655.923 |     1252.04 |      4.32721e+07 |      4345.89 |     8209.94 |
| Education          |        22 |      1.71519e+06 |      77963.1 |     47413.5 |      1.16235e+07 |       528341 |      321525 |
| Question Answering |       275 |           111792 |      406.516 |     264.436 |           626549 |      2278.36 |     1402.57 |
| Regulation         |      1111 |            70081 |      63.0792 |     54.7356 |           478447 |      430.645 |     365.089 |
| Medical            |      2152 |            42460 |      19.7305 |     13.3516 |           280626 |      130.402 |     92.0109 |
| Dialogue           |      1452 |            34044 |      23.4463 |     73.5192 |           188202 |      129.616 |     394.801 |

**INSTRUCTION-TUNING** data

| Document type      |   nb_docs |    nb_words |   mean_words |   std_words |    nb_chars |   mean_chars |   std_chars |
|:-------------------|----------:|------------:|-------------:|------------:|------------:|-------------:|------------:|
| Question Answering |     22390 | 1.78385e+06 |      79.6716 |     59.3966 | 1.17989e+07 |      526.971 |     372.088 |
| Total              |     22390 | 1.78385e+06 |      79.6716 |     59.3966 | 1.17989e+07 |      526.971 |     372.088 |

### Source-wise granularity

**FINETUNING** data

| Source        |   nb_docs |         nb_words |   mean_words |   std_words |         nb_chars |   mean_chars |   std_chars |
|:--------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total         |    905342 |      9.00141e+08 |      994.255 |     6719.46 |      5.61243e+09 |      6199.24 |     41099.6 |
| HAL           |     26987 |      7.03474e+08 |      26067.1 |     26603.8 |      4.32567e+09 |       160287 |      160053 |
| HAS           |     11334 |      9.61734e+07 |      8485.39 |     16098.9 |      6.20009e+08 |      54703.4 |      102858 |
| ISTEX         |     12179 |      4.31384e+07 |      3542.03 |     2156.57 |      2.82624e+08 |      23205.9 |     14238.5 |
| BDPM          |     11023 |      2.00358e+07 |      1817.63 |     2409.58 |      1.35081e+08 |      12254.5 |     16062.4 |
| E3C           |      7499 |      1.58646e+07 |      2115.57 |     1222.36 |       1.0414e+08 |      13887.2 |     7923.95 |
| WIKIPEDIA     |      9957 |      6.53102e+06 |      655.923 |     1252.04 |      4.32721e+07 |      4345.89 |     8209.94 |
| WMT16         |    587563 |      6.49552e+06 |       11.055 |     5.40785 |      4.73973e+07 |      80.6676 |     37.5056 |
| EMEA_V3       |    222937 |      4.44909e+06 |      19.9567 |     15.5252 |      2.80864e+07 |      125.984 |      99.953 |
| CERIMES       |        22 |      1.71519e+06 |      77963.1 |     47413.5 |      1.16235e+07 |       528341 |      321525 |
| FRASIMED      |      2048 |       1.3229e+06 |      645.945 |       333.9 |      8.73338e+06 |      4264.34 |     2207.72 |
| CAS           |       712 |           232389 |      326.389 |     242.842 |      1.52772e+06 |      2145.68 |     1501.74 |
| CLEAR         |         6 |           226123 |      37687.2 |     46388.3 |      1.36912e+06 |       228188 |      280743 |
| ESSAI         |      5841 |           146530 |      25.0865 |     14.2491 |           854518 |      146.297 |     83.1409 |
| DEFT2021      |       275 |           111792 |      406.516 |     264.436 |           626549 |      2278.36 |     1402.57 |
| QUAERO        |      2083 |            66877 |      32.1061 |     161.208 |           394933 |      189.598 |     905.512 |
| CNEDIMTS      |       813 |            58345 |      71.7651 |      60.599 |           398478 |      490.133 |      403.23 |
| ECDC_TM       |      2152 |            42460 |      19.7305 |     13.3516 |           280626 |      130.402 |     92.0109 |
| PXCORPUS      |      1414 |            18372 |      12.9929 |      6.0802 |           103531 |      73.2185 |     33.7791 |
| MQC           |        38 |            15672 |      412.421 |     223.131 |            84671 |      2228.18 |     1179.41 |
| QUALISCOPE    |       298 |            11736 |      39.3826 |     19.5879 |            79969 |      268.352 |     131.707 |
| WMT18_MEDLINE |        49 |             7719 |      157.531 |     65.3727 |            51627 |      1053.61 |     416.966 |
| MANTRA_GSC    |       112 |             3085 |      27.5446 |     39.6518 |            22356 |      199.607 |     306.097 |

**INSTRUCTION-TUNING** data

| Source        |   nb_docs |         nb_words |   mean_words |   std_words |         nb_chars |   mean_chars |   std_chars |
|:--------------|----------:|-----------------:|-------------:|------------:|-----------------:|-------------:|------------:|
| Total         |     22390 |      1.78385e+06 |      79.6716 |     59.3966 |      1.17989e+07 |      526.971 |     372.088 |
| MEDIQAL       |     19907 |       1.6593e+06 |      83.3526 |     61.6255 |      1.09334e+07 |      549.225 |     386.325 |
| FRENCHMEDMCQA |      2483 |           124547 |      50.1599 |     19.6412 |           865475 |       348.56 |     126.799 |

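The per-group figures above (`nb_words`, `mean_words`, `std_words`, etc.) can be regenerated with a simple aggregation over the raw texts. A sketch using only the standard library; whether the reported std is the population or sample standard deviation is our assumption (population shown here):

```python
from statistics import mean, pstdev

def doc_stats(texts: list[str]) -> dict[str, float]:
    """Per-group statistics in the same shape as the tables above."""
    words = [len(t.split()) for t in texts]  # whitespace word counts
    chars = [len(t) for t in texts]          # character counts
    return {
        "nb_docs": len(texts),
        "nb_words": sum(words),
        "mean_words": mean(words),
        "std_words": pstdev(words),
        "nb_chars": sum(chars),
        "mean_chars": mean(chars),
        "std_chars": pstdev(chars),
    }
```

Grouping the corpus by `document_type` or `source` and applying this function per group reproduces one table row each.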
## File Organization

```
PARTAGES/
├── finetuning/
│   ├── dataset1_part1.parquet
│   ├── dataset1_part2.parquet
│   └── ...
├── instruction-tuning/
│   ├── dataset2_part1.parquet
│   ├── dataset2_part2.parquet
│   └── ...
└── README.md
```

## Usage

```python
from datasets import load_dataset

data = load_dataset(
    "LIMICS/PARTAGES",
    split="train",
    data_dir="finetuning",  # or "instruction-tuning"
    download_mode="force_redownload",
    verification_mode="no_checks",
)
```