zinzinmit committed
Commit eb6f169 · 1 Parent(s): 1d74d4f

Remove unnecessary files and add Benchmark QA dataset
Files changed (36)
  1. .gitattributes +1 -0
  2. ChemDisGene/data/ctd_derived/dev_relationships.tsv +0 -0
  3. ChemDisGene/data/ctd_derived/test_relationships.tsv +0 -0
  4. ChemDisGene/data/ctd_derived/train_relationships.tsv +0 -0
  5. ChemDisGene/data/{curated → for_test}/abstracts.txt +0 -0
  6. ChemDisGene/data/{curated → for_test}/approved_relns_ctd_v1.tsv +0 -0
  7. ChemDisGene/data/{curated → for_test}/approved_relns_new_v1.tsv +0 -0
  8. ChemDisGene/data/{ctd_derived → main}/ctd_full_data.csv +0 -0
  9. ChemDisGene/data/{ctd_derived → main}/ctd_lookup_table.csv +0 -0
  10. README.md +78 -3
  11. bc5cdr/data/training/CDR_DevelopmentSet.PubTator.txt +0 -3
  12. bc5cdr/data/training/CDR_TestSet.PubTator.txt +0 -3
  13. bc5cdr/data/training/CDR_TrainingSet.PubTator.txt +0 -3
  14. bioasq/task_b/bioasq_task_b.py +836 -0
  15. bioasq/task_b/training14b.json +3 -0
  16. medqa/medqa_form1.csv +3 -0
  17. medqa/medqa_form2.csv +3 -0
  18. ChemDisGene/data/ctd_derived/dev_abstracts.txt → medqa/textbooks/Anatomy_Gray.txt +2 -2
  19. ChemDisGene/data/ctd_derived/test_abstracts.txt → medqa/textbooks/Biochemistry_Lippincott.txt +2 -2
  20. medqa/textbooks/Cell_Biology_Alberts.txt +3 -0
  21. ChemDisGene/data/ctd_derived/ctd_stats.txt → medqa/textbooks/First_Aid_Step1.txt +2 -2
  22. medqa/textbooks/First_Aid_Step2.txt +3 -0
  23. medqa/textbooks/Gynecology_Novak.txt +3 -0
  24. medqa/textbooks/Histology_Ross.txt +3 -0
  25. medqa/textbooks/Immunology_Janeway.txt +3 -0
  26. medqa/textbooks/InternalMed_Harrison.txt +3 -0
  27. medqa/textbooks/Neurology_Adams.txt +3 -0
  28. medqa/textbooks/Obstentrics_Williams.txt +3 -0
  29. medqa/textbooks/Pathology_Robbins.txt +3 -0
  30. ChemDisGene/data/curated/drugprot_pmids.txt → medqa/textbooks/Pathoma_Husain.txt +2 -2
  31. medqa/textbooks/Pediatrics_Nelson.txt +3 -0
  32. medqa/textbooks/Pharmacology_Katzung.txt +3 -0
  33. medqa/textbooks/Physiology_Levy.txt +3 -0
  34. medqa/textbooks/Psichiatry_DSM-5.txt +3 -0
  35. medqa/textbooks/Surgery_Schwartz.txt +3 -0
  36. ChemDisGene/data/ctd_derived/train_abstracts.txt → pubmedqa/pubmedqa.csv +2 -2
.gitattributes CHANGED
@@ -61,3 +61,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.csv filter=lfs diff=lfs merge=lfs -text
 *.pdf filter=lfs diff=lfs merge=lfs -text
 *.txt filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
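The added line makes Git store `*.json` files (such as the new `training14b.json`) as LFS pointers rather than raw blobs. As a minimal sketch of how such attribute lines select files, the helper below uses Python's `fnmatch` as an approximation of gitattributes glob semantics; the function names are illustrative and not part of this repository:

```python
from fnmatch import fnmatch

def lfs_patterns(gitattributes_text):
    """Collect the glob patterns whose attributes route matching files through Git LFS."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

def is_lfs_tracked(path, patterns):
    # fnmatch approximates, but is not identical to, gitattributes glob rules
    return any(fnmatch(path, p) for p in patterns)

attributes = """\
*.csv filter=lfs diff=lfs merge=lfs -text
*.pdf filter=lfs diff=lfs merge=lfs -text
*.txt filter=lfs diff=lfs merge=lfs -text
*.json filter=lfs diff=lfs merge=lfs -text
"""

patterns = lfs_patterns(attributes)
print(is_lfs_tracked("bioasq/task_b/training14b.json", patterns))  # True
```

In a real checkout the same question is answered authoritatively by `git check-attr filter -- <path>`.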
ChemDisGene/data/ctd_derived/dev_relationships.tsv DELETED
The diff for this file is too large to render. See raw diff
 
ChemDisGene/data/ctd_derived/test_relationships.tsv DELETED
The diff for this file is too large to render. See raw diff
 
ChemDisGene/data/ctd_derived/train_relationships.tsv DELETED
The diff for this file is too large to render. See raw diff
 
ChemDisGene/data/{curated → for_test}/abstracts.txt RENAMED
File without changes
ChemDisGene/data/{curated → for_test}/approved_relns_ctd_v1.tsv RENAMED
File without changes
ChemDisGene/data/{curated → for_test}/approved_relns_new_v1.tsv RENAMED
File without changes
ChemDisGene/data/{ctd_derived → main}/ctd_full_data.csv RENAMED
File without changes
ChemDisGene/data/{ctd_derived → main}/ctd_lookup_table.csv RENAMED
File without changes
README.md CHANGED
@@ -16,10 +16,13 @@ pretty_name: MedNLPCombined
 
 **MedNLPCombined** is a collected repository of medical Natural Language Processing (NLP) datasets, primarily focused on **Chemical-Disease Relations (CDR)**, **Toxicogenomics**, and **Gene Interactions**. This repository is designed to facilitate research in Named Entity Recognition (NER) and Relation Extraction (RE) within the biomedical domain.
 
-The repository currently includes three major components:
+The repository currently includes six major components:
 1. **BioCreative V CDR (BC5CDR) Task Corpus**
 2. **Comparative Toxicogenomics Database (CTD) Derived Data**
 3. **ChemDisGene Dataset**
+4. **MedQA Dataset**
+5. **PubMedQA Dataset**
+6. **BioASQ Dataset**
 
 ## Repository Structure
 
@@ -46,6 +49,18 @@ Contains the **ChemDisGene** dataset for distant supervision of biomedical relat
 - **`related_documents/`**: Documentation and guidelines.
   - `AnnotationGuidelines.pdf`
 
+### 4. `medqa/`
+Contains the **MedQA** dataset.
+- A large-scale open-domain multiple-choice dataset for medical problems, collected from professional exams.
+
+### 5. `pubmedqa/`
+Contains the **PubMedQA** dataset.
+- A biomedical research question answering dataset requiring reasoning over PubMed abstracts.
+
+### 6. `bioasq/`
+Contains the **BioASQ** dataset.
+- A benchmark dataset for large-scale biomedical semantic indexing and question answering.
+
 ## Dataset Details
 
 ### BioCreative V CDR (BC5CDR)
@@ -57,7 +72,32 @@ The BC5CDR corpus consists of **1,500 PubMed articles** with annotated chemical
 The CTD data provides manually curated information about chemical-gene/protein interactions, chemical-disease and gene-disease relationships. This component of the repository is likely a snapshot or a derived subset focusing on specific interaction types (e.g., chemical-disease-gene networks).
 
 ### ChemDisGene
-ChemDisGene is a large-scale, distant-supervision dataset for extracting biomedical relationships between chemicals, diseases, and genes. It provides a valuable resource for training models on a broader range of biomedical interactions.
+ChemDisGene is a large-scale, distant-supervision dataset for extracting biomedical relationships between chemicals, diseases, and genes. It provides a valuable resource for training models on a broader range of biomedical interactions. The dataset contains approximately 80,000 biomedical research abstracts annotated with mentions of chemical, disease, and gene/gene-product entities, along with their pairwise relationships.
+
+- **Paper**: [A Distant Supervision Corpus for Extracting Biomedical Relationships Between Chemicals, Diseases and Genes](https://aclanthology.org/2022.lrec-1.666/)
+- **GitHub**: [chanzuckerberg/ChemDisGene](https://github.com/chanzuckerberg/ChemDisGene)
+- **Hugging Face**: [bigbio/chem_dis_gene](https://huggingface.co/datasets/bigbio/chem_dis_gene)
+
+### MedQA
+MedQA is a large-scale open-domain question answering dataset derived from professional medical board exams in the US, Mainland China, and Taiwan. It evaluates models on professional medical knowledge and clinical decision-making through multiple-choice questions.
+
+- **Paper**: [What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams](https://arxiv.org/abs/2009.13081)
+- **GitHub**: [jind11/MedQA](https://github.com/jind11/MedQA)
+- **Hugging Face**: [bigbio/med_qa](https://huggingface.co/datasets/bigbio/med_qa)
+
+### PubMedQA
+PubMedQA is a biomedical question answering dataset designed to answer research questions with yes/no/maybe using the corresponding abstracts. It requires reasoning over quantitative content and scientific texts.
+
+- **Paper**: [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
+- **GitHub**: [pubmedqa/pubmedqa](https://github.com/pubmedqa/pubmedqa)
+- **Hugging Face**: [bigbio/pubmed_qa](https://huggingface.co/datasets/bigbio/pubmed_qa)
+
+### BioASQ
+BioASQ is a large-scale biomedical semantic indexing and question answering benchmark dataset. It provides various QA tasks challenging systems with realistic information needs from biomedical experts.
+
+- **Paper**: [An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-015-0564-6)
+- **GitHub**: [BioASQ Organization](https://github.com/BioASQ)
+- **Hugging Face**: [bigbio/bioasq_task_b](https://huggingface.co/datasets/bigbio/bioasq_task_b)
 
 ## Usage
 
@@ -113,6 +153,41 @@ Please refer to the [CTD citation policy](http://ctdbase.org/about/publications/
 }
 ```
 
+**MedQA:**
+```bibtex
+@article{jin2020disease,
+  title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams},
+  author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
+  journal={arXiv preprint arXiv:2009.13081},
+  year={2020}
+}
+```
+
+**PubMedQA:**
+```bibtex
+@inproceedings{jin2019pubmedqa,
+  title={PubMedQA: A Dataset for Biomedical Research Question Answering},
+  author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
+  booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
+  pages={2567--2577},
+  year={2019}
+}
+```
+
+**BioASQ:**
+```bibtex
+@article{tsatsaronis2015overview,
+  title={An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition},
+  author={Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and Polychronopoulos, Dimitris and others},
+  journal={BMC bioinformatics},
+  volume={16},
+  number={1},
+  pages={1--28},
+  year={2015},
+  publisher={BioMed Central}
+}
+```
+
 ## License
 
-This repository is licensed under **Apache 2.0**. Please also adhere to the specific license agreements of the original datasets (BC5CDR, CTD, and ChemDisGene) if applicable.
+This repository is licensed under **Apache 2.0**. Please also adhere to the specific license agreements of the original datasets (BC5CDR, CTD, ChemDisGene, MedQA, PubMedQA, and BioASQ) if applicable.
bc5cdr/data/training/CDR_DevelopmentSet.PubTator.txt DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9e18d8e700168887a7919e62f67ac2ce2357e3f409a48ee992906abdd80fe3e5
-size 1127990

bc5cdr/data/training/CDR_TestSet.PubTator.txt DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a3c48d5f35bdfce20c2371778abedc9fd24d1da092207072e5181ad53c8d0e2f
-size 1168687

bc5cdr/data/training/CDR_TrainingSet.PubTator.txt DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7afcec134e3d4871e74b0398d92850d3c9ac94c0bdc780003f2d60184c012bd8
-size 1129998
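Because `*.txt` is LFS-tracked (see `.gitattributes`), the three deleted files above were stored in Git as small pointer stubs, and the `-` lines show each stub in full: a `version` URL, an `oid` (SHA-256 of the real content), and a `size` in bytes. A sketch of reading such a stub back into fields (`parse_lfs_pointer` is an illustrative helper, not part of this repository):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer stub into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return fields

# The pointer stub of CDR_DevelopmentSet.PubTator.txt, copied from the hunk above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:9e18d8e700168887a7919e62f67ac2ce2357e3f409a48ee992906abdd80fe3e5
size 1127990
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 1127990
```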
bioasq/task_b/bioasq_task_b.py ADDED
@@ -0,0 +1,836 @@
+# coding=utf-8
+# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+BioASQ Task B On Biomedical Semantic QA (Involves IR, QA, Summarization and
+More). This task uses benchmark datasets containing development and test
+questions, in English, along with gold standard (reference) answers constructed
+by a team of biomedical experts. The participants have to respond with relevant
+concepts, articles, snippets and RDF triples, from designated resources, as well
+as exact and 'ideal' answers.
+
+For more information about the challenge, the organisers and the relevant
+publications please visit: http://bioasq.org/
+"""
+import glob
+import json
+import os
+import re
+
+import datasets
+
+from .bigbiohub import qa_features
+from .bigbiohub import BigBioConfig
+from .bigbiohub import Tasks
+
+_LANGUAGES = ["English"]
+_PUBMED = True
+_LOCAL = True
+_CITATION = """\
+@article{tsatsaronis2015overview,
+    title = {
+        An overview of the BIOASQ large-scale biomedical semantic indexing and
+        question answering competition
+    },
+    author = {
+        Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
+        and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
+        Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
+        Polychronopoulos, Dimitris and others
+    },
+    year = 2015,
+    journal = {BMC bioinformatics},
+    publisher = {BioMed Central Ltd},
+    volume = 16,
+    number = 1,
+    pages = 138
+}
+"""
+
+_DATASETNAME = "bioasq_task_b"
+_DISPLAYNAME = "BioASQ Task B"
+
+_BIOASQ_11B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ
+11, which will take place during 2023. There is one file containing the data:
+ - training11b.json
+
+The file contains the data of the first ten editions of the challenge: 4719
+questions [1] with their relevant documents, snippets, concepts and RDF
+triples, exact and ideal answers.
+
+Differences with BioASQ-training10b.json
+ - 485 new questions added from BioASQ10
+ - The question with id 621ecf1a3a8413c653000061 had identical body with
+   5ac0a36f19833b0d7b000002. All relevant elements from both questions
+   are available in the merged question with id 5ac0a36f19833b0d7b000002.
+
+[1] The distribution of 4719 questions : 1417 factoid, 1271 yesno, 1130 summary, 901 list
+"""
+
+_BIOASQ_10B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ
+10, which will take place during 2022. There is one file containing the data:
+ - training10b.json
+
+The file contains the data of the first nine editions of the challenge: 4234
+questions [1] with their relevant documents, snippets, concepts and RDF
+triples, exact and ideal answers.
+
+Differences with BioASQ-training9b.json
+ - 492 new questions added from BioASQ9
+ - The question with id 56c1f01eef6e394741000046 had identical body with
+   602498cb1cb411341a00009e. All relevant elements from both questions
+   are available in the merged question with id 602498cb1cb411341a00009e.
+ - The question with id 5c7039207c78d69471000065 had identical body with
+   601c317a1cb411341a000014. All relevant elements from both questions
+   are available in the merged question with id 601c317a1cb411341a000014.
+ - The question with id 5e4b540b6d0a27794100001c had identical body with
+   602828b11cb411341a0000fc. All relevant elements from both questions
+   are available in the merged question with id 602828b11cb411341a0000fc.
+ - The question with id 5fdb42fba43ad31278000027 had identical body with
+   5d35eb01b3a638076300000f. All relevant elements from both questions
+   are available in the merged question with id 5d35eb01b3a638076300000f.
+ - The question with id 601d76311cb411341a000045 had identical body with
+   6060732b94d57fd87900003d. All relevant elements from both questions
+   are available in the merged question with id 6060732b94d57fd87900003d.
+
+[1] 4234 questions : 1252 factoid, 1148 yesno, 1018 summary, 816 list
+"""
+
+_BIOASQ_9B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ 9,
+which will take place during 2021. There is one file containing the data:
+ - training9b.json
+
+The file contains the data of the first seven editions of the challenge: 3742
+questions [1] with their relevant documents, snippets, concepts and RDF triples,
+exact and ideal answers.
+
+Differences with BioASQ-training8b.json
+ - 499 new questions added from BioASQ8
+ - The question with id 5e30e689fbd6abf43b00003a had identical body with
+   5880e417713cbdfd3d000001. All relevant elements from both questions
+   are available in the merged question with id 5880e417713cbdfd3d000001.
+
+[1] 3742 questions : 1091 factoid, 1033 yesno, 899 summary, 719 list
+"""
+
+_BIOASQ_8B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ 8,
+which will take place during 2020. There is one file containing the data:
+ - training8b.json
+
+The file contains the data of the first seven editions of the challenge: 3243
+questions [1] with their relevant documents, snippets, concepts and RDF triples,
+exact and ideal answers.
+
+Differences with BioASQ-training7b.json
+ - 500 new questions added from BioASQ7
+ - 4 questions were removed
+ - The question with id 5717fb557de986d80d000009 had identical body with
+   571e06447de986d80d000016. All relevant elements from both questions
+   are available in the merged question with id 571e06447de986d80d000016.
+ - The question with id 5c589ddb86df2b917400000b had identical body with
+   5c6b7a9e7c78d69471000029. All relevant elements from both questions
+   are available in the merged question with id 5c6b7a9e7c78d69471000029.
+ - The question with id 52ffb5d12059c6d71c00007c had identical body with
+   52e7870a98d023950500001a. All relevant elements from both questions
+   are available in the merged question with id 52e7870a98d023950500001a.
+ - The question with id 53359338d6d3ac6a3400004f had identical body with
+   589a246878275d0c4a000030. All relevant elements from both questions
+   are available in the merged question with id 589a246878275d0c4a000030.
+
+**** UPDATE 25/02/2020 *****
+The previous version of the dataset contained an inconsistency on question with
+id "5c9904eaecadf2e73f00002e", where the "ideal_answer" field was missing.
+This has been fixed.
+"""
+
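The per-type question counts quoted in the edition descriptions can be cross-checked against their stated totals (e.g. 1417 + 1271 + 1130 + 901 = 4719 for training11b). A quick tally, using only numbers taken from the descriptions:

```python
# (total, per-type breakdown) as stated in the BioASQ 11b/10b/9b descriptions
editions = {
    "training11b": (4719, {"factoid": 1417, "yesno": 1271, "summary": 1130, "list": 901}),
    "training10b": (4234, {"factoid": 1252, "yesno": 1148, "summary": 1018, "list": 816}),
    "training9b":  (3742, {"factoid": 1091, "yesno": 1033, "summary": 899,  "list": 719}),
}

for name, (total, counts) in editions.items():
    assert sum(counts.values()) == total, name  # each breakdown sums to its total
print("all breakdowns sum to their stated totals")
```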
+_BIOASQ_7B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ 7,
+which will take place during 2019. There is one file containing the data:
+ - BioASQ-trainingDataset7b.json
+
+The file contains the data of the first six editions of the challenge: 2747
+questions [1] with their relevant documents, snippets, concepts and RDF triples,
+exact and ideal answers.
+
+Differences with BioASQ-trainingDataset6b.json
+ - 500 new questions added from BioASQ6
+ - 4 questions were removed
+ - The question with id 569ed752ceceede94d000004 had identical body with
+   a new question from BioASQ6. All relevant elements from both questions
+   are available in the merged question with id 5abd31e0fcf456587200002c
+ - 3 questions were removed as incomplete: 54d643023706e89528000007,
+   532819afd6d3ac6a3400000f, 517545168ed59a060a00002b
+ - 4 questions were revised for various confusions that have been identified
+ - In 2 questions the ideal answer has been revised :
+   51406e6223fec90375000009, 5172f8118ed59a060a000019
+ - In 4 questions the snippets and documents list has been revised :
+   51406e6223fec90375000009, 5172f8118ed59a060a000019,
+   51593dc8d24251bc05000099, 5158a5b8d24251bc05000097
+ - In 198 questions the documents list has been updated with missing
+   documents from the relevant snippets list. [2]
+
+[1] 2747 questions : 779 factoid, 745 yesno, 667 summary, 556 list
+[2] 55031181e9bde69634000014, 51406e6223fec90375000009, 54d643023706e89528000007,
+52bf1b0a03868f1b06000009, 52bf19c503868f1b06000001, 51593dc8d24251bc05000099,
+530a5117970c65fa6b000007, 553a8d78f321868558000003, 531a3fe3b166e2b806000038,
+532819afd6d3ac6a3400000f, 5158a5b8d24251bc05000097, 553653a5bc4f83e828000007,
+535d2cf09a4572de6f000004, 53386282d6d3ac6a3400005a, 517a8ce98ed59a060a000045,
+55391ce8bc4f83e828000018, 5547d700f35db75526000007, 5713bf261174fb1755000011,
+6f15c5a2ac5ed1459000012, 52b2e498f828ad283c000010, 570a7594cf1c325851000026,
+530cefaaad0bf1360c000012, 530f685c329f5fcf1e000002, 550c4011a103b78016000009,
+552faababc4f83e828000005, 54cf48acf693c3b16b00000b, 550313aae9bde6963400001f,
+551177626a8cde6b72000005, 54eded8c94afd6150400000c, 550c3754a103b78016000007,
+56f555b609dd18d46b000007, 54c26e29f693c3b16b000003, 54da0c524b1fd0d33c00000b,
+52bf1d3c03868f1b0600000d, 5343bdd6aeec6fbd07000001, 52cb9b9b03868f1b0600002d,
+55423875ec76f5e50c000002, 571366ba1174fb1755000005, 56c4d14ab04e159d0e000003,
+550c44d1a103b7801600000a, 5547a01cf35db75526000005, 55422640ccca0ce74b000004,
+54ecb66d445c3b5a5f000002, 553656c4bc4f83e828000009, 5172f8118ed59a060a000019,
+513711055274a5fb0700000e, 54d892ee014675820d000005, 52e6c92598d0239505000019,
+5353aedb288f4dae47000006, 52bf1f1303868f1b06000014, 5519113b622b19434500000f,
+52b2f1724003448f5500000b, 5525317687ecba3764000007, 554a0cadf35db7552600000f,
+55152bd246478f2f2c000002, 516c3960298dcd4e51000073, 571e417bbb137a4b0c00000a,
+551910d3622b194345000008, 54dc8ed6c0bb8dce23000002, 511a4ec01159fa8212000004,
+54d8ea2c4b1fd0d33c000002, 5148e1d6d24251bc0500003a, 515dbb3b298dcd4e51000018,
+56f7c15a09dd18d46b000012, 51475d5cd24251bc0500001b, 54db7c4ac0bb8dce23000001,
+57152ebbcb4ef8864c000002, 57134d511174fb1755000002, 55149f156a8cde6b72000013,
+56bcd422d36b5da378000005, 54ede5c394afd61504000006, 517545168ed59a060a00002b,
+5710ed19a5ed216440000003, 53442472aeec6fbd07000008, 55088e412e93f0133a000001,
+54d762653706e89528000014, 550aef0ec2af5d5b7000000a, 552435602c8b63434a000009,
+552446612c8b63434a00000c, 54d901ec4b1fd0d33c000006, 54cf45e7f693c3b16b00000a,
+52fc8b772059c6d71c00006e, 5314d05adae131f84700000d, 5512c91b6a8cde6b7200000b,
+56c5a7605795f9a73e000002, 55030a6ce9bde6963400000f, 553fac39c6a5098552000001,
+531a3a58b166e2b806000037, 5509bd6a1180f13250000002, 54f9c40ddd3fc62544000001,
+553c8fd1f32186855800000a, 56bce51cd36b5da37800000a, 550316a6e9bde69634000029,
+55031286e9bde6963400001b, 536e46f27d100faa09000012, 5502abd1e9bde69634000008,
+551af9106b348bb82c000002, 54edeb4394afd6150400000b, 5717cdd2070aa3d072000001,
+56c5ade15795f9a73e000003, 531464a6e3eabad021000014, 58a0d87a78275d0c4a000053,
+58a3160d60087bc10a00000a, 58a5d54860087bc10a000025, 58a0da5278275d0c4a000054,
+58a3264e60087bc10a00000d, 589c8ef878275d0c4a000042, 58a3428d60087bc10a00001b,
+58a3196360087bc10a00000b, 58a341eb60087bc10a000018, 58a3275960087bc10a00000f,
+58a342e760087bc10a00001c, 58bd645702b8c60953000010, 58bc8e5002b8c60953000006,
+58bc8e7a02b8c60953000007, 58a1da4e78275d0c4a000059, 58bcb83d02b8c6095300000f,
+58bc9a5002b8c60953000008, 589dee3778275d0c4a000050, 58a32efe60087bc10a000013,
+58a327bf60087bc10a000011, 58bca08702b8c6095300000a, 58bc9dbb02b8c60953000009,
+58c99fcc02b8c60953000029, 58bca2f302b8c6095300000c, 58cbf1f402b8c60953000036,
+58cdb41302b8c60953000042, 58cdb80302b8c60953000043, 58cdbaf302b8c60953000044,
+58cb305c02b8c60953000032, 58caf86f02b8c60953000030, 58c1b2f702b8c6095300001e,
+58bde18b02b8c60953000014, 58eb7898eda5a57672000006, 58caf88c02b8c60953000031,
+58e11bf76fddd3e83e00000c, 58cdbbd102b8c60953000045, 58df779d6fddd3e83e000001,
+58dbb4f08acda3452900001a, 58dbb8968acda3452900001b, 58add7699ef3c34033000009,
+58dbbbf08acda3452900001d, 58dbba438acda3452900001c, 58dd2cb08acda34529000029,
+58eb9542eda5a57672000007, 58f3ca5c70f9fc6f0f00000d, 58e9e7aa3e8b6dc87c00000d,
+58e3d9ab3e8b6dc87c000002, 58eb4ce7eda5a57672000004, 58f3c8f470f9fc6f0f00000c,
+58f3c62970f9fc6f0f00000b, 58adca6d9ef3c34033000007, 58f4b3ee70f9fc6f0f000013,
+593ff22b70f9fc6f0f000023, 5a679875b750ff4455000004, 5a774585faa1ab7d2e000005,
+5a6f7245b750ff4455000050, 5a787544faa1ab7d2e00000b, 5a74d9980384be9551000008,
+5a6a02a3b750ff4455000021, 5a6e47b1b750ff4455000049, 5a87124561bb38fb24000001,
+5a6e42f1b750ff4455000046, 5a8b1264fcd1d6a10c00001d, 5a981e66fcd1d6a10c00002f,
+5a8718c861bb38fb24000008, 5a7615af83b0d9ea6600001f, 5a87140a61bb38fb24000003,
+5a77072c9e632bc06600000a, 5a897601fcd1d6a10c000008, 5a871a6861bb38fb24000009,
+5a74e9ad0384be955100000a, 5a79d25dfaa1ab7d2e00000f, 5a6900ebb750ff445500001d,
+5a87145861bb38fb24000004, 5a871b8d61bb38fb2400000a, 5a897a06fcd1d6a10c00000b,
+5a8dc6b4fcd1d6a10c000026, 5a8712af61bb38fb24000002, 5a8714e261bb38fb24000005,
+5aa304f1d6d6b54f79000004, 5a981bcffcd1d6a10c00002d, 5aa3fa73d6d6b54f79000008,
+5aa55b45d6d6b54f7900000d, 5a981dd0fcd1d6a10c00002e, 5a9700adfcd1d6a10c00002c,
+5a9d8ffe1d1251d03b000022, 5a96c74cfcd1d6a10c000029, 5aa50086d6d6b54f7900000c,
+5a95765bfcd1d6a10c000028, 5a96f40cfcd1d6a10c00002b, 5ab144fefcf4565872000012,
+5aa67b4fd6d6b54f7900000f, 5abd5a62fcf4565872000031, 5abbe429fcf456587200001c,
+5aaef38dfcf456587200000f, 5abce6acfcf4565872000022, 5aae6499fcf456587200000c
+"""
+
+_BIOASQ_6B_DESCRIPTION = """\
+The data are intended to be used as training and development data for BioASQ 6,
+which will take place during 2018. There is one file containing the data:
+ - BioASQ-trainingDataset6b.json
+
+Differences with BioASQ-trainingDataset5b.json
+ - 500 new questions added from BioASQ5
+ - 48 pairs of questions with identical bodies have been merged into one
+   question having only one question-id, but all the documents, snippets,
+   concepts, RDF triples and answers of both questions of the pair.
+   - This normalization led to the removal of 48 deprecated question
+     ids [2] from the dataset and to the update of the 48 remaining
+     questions [3].
+   - In cases where a pair of questions with identical bodies had some
+     inconsistency (e.g. different question type), the inconsistency has
+     been resolved by merging the pair manually, consulting the BioASQ expert team.
+ - 12 questions were revised for various confusions that have been
+   identified
+   - In 8 questions the question type has been changed to better suit
+     the question body. The change of type led to corresponding changes
+     in exact answers existence and format : 54fc4e2e6ea36a810c000003,
+     530b01a6970c65fa6b000008, 530cf54dab4de4de0c000009,
+     531b2fc3b166e2b80600003c, 532819afd6d3ac6a3400000f,
+     532aad53d6d3ac6a34000010, 5710ade4cf1c32585100002c,
+     52f65f372059c6d71c000027
+   - In 6 questions the ideal answer has been revised :
+     532aad53d6d3ac6a34000010, 5710ade4cf1c32585100002c,
+     53147b52e3eabad021000015, 5147c8a6d24251bc05000027,
+     5509bd6a1180f13250000002, 58bbb71f22d3005309000016
+   - In 5 questions the exact answer has been revised :
+     5314bd7ddae131f847000006, 53130a77e3eabad02100000f,
+     53148a07dae131f847000002, 53147b52e3eabad021000015,
+     5147c8a6d24251bc05000027
+   - In 2 questions the question body has been revised :
+     52f65f372059c6d71c000027, 5503145ee9bde69634000022
+ - In lists of ideal answers, documents, snippets, concepts and RDF triples
+   any duplicate identical elements have been removed.
+ - Ideal answers in format of one string have been converted to a list with
+   one element for consistency with cases where more than one golden ideal
+   answers are available. (i.e. "ideal_ans1" converted to ["ideal_ans1"])
+ - For yesno questions: All exact answers have been normalized to "yes" or
+   "no" (replacing "Yes", "YES" and "No")
+ - For factoid questions: The format of the exact answer was normalized to a
+   list of strings for each question, representing a set of synonyms
+   answering the question (i.e. [`ans1`, `syn11`, ... ]).
+ - For list questions: The format of the exact answer was normalized to a
+   list of lists. Each internal list represents one element of the answer
+   as a set of synonyms
+   (i.e. [[`ans1`, `syn11`, `syn12`], [`ans2`], [`ans3`, `syn31`] ...]).
+ - Empty elements, e.g. empty lists of documents have been removed.
+
+[1] 2251 questions : 619 factoid, 616 yesno, 531 summary, 485 list
+[2] The 48 deprecated question ids are : 52f8b2902059c6d71c000053,
+52f11bf22059c6d71c000005, 52f77edb2059c6d71c000028, 52ed795098d0239505000032,
+56d1a9baab2fed4a47000002, 52f7d3472059c6d71c00002f, 52fbe2bf2059c6d71c00006c,
+52ec961098d023950500002a, 52e8e98298d0239505000020, 56cae5125795f9a73e000024,
+530cefaaad0bf1360c000007, 530cefaaad0bf1360c000005, 52d63b2803868f1b0600003a,
+530cefaaad0bf1360c00000a, 516425ff298dcd4e51000051, 55191149622b194345000010,
+52fa70142059c6d71c000056, 52f77f4d2059c6d71c00002a, 52efc016c8da89891000001a,
+52efc001c8da898910000019, 52f896ae2059c6d71c000045, 52eceada98d023950500002d,
+52efc05cc8da89891000001c, 515e078e298dcd4e51000031, 52fe54252059c6d71c000079,
+514217a6d24251bc05000005, 52d1389303868f1b06000032, 530cf4d5e2bfff940c000003,
+52fc946d2059c6d71c000071, 52e8e99e98d0239505000021, 52ef7786c8da898910000015,
+52d8494698d0239505000007, 530cf51d5610acba0c000001, 52f637972059c6d71c000025,
+52e9f99798d0239505000025, 515de572298dcd4e51000021, 52fe4ad52059c6d71c000077,
+52f65bf02059c6d71c000026, 52e8e9d298d0239505000022, 52fa74052059c6d71c00005a,
+52ffbddf2059c6d71c00007d, 56bc932aac7ad1001900001c, 56c02883ef6e394741000017,
+52d2b75403868f1b06000035, 52f118aa2059c6d71c000003, 52e929eb98d0239505000023,
+532c12f2d6d3ac6a3400001d, 52d8466298d0239505000006
+[3] The 48 questions resulting from merging with their pair have the
+following ids: 5149aafcd24251bc05000045, 515db020298dcd4e51000011,
+515db54c298dcd4e51000016, 51680a49298dcd4e51000062, 52b06a68f828ad283c000005,
+52bf1aa503868f1b06000006, 52bf1af803868f1b06000008, 52bf1d6003868f1b0600000e,
+52cb9b9b03868f1b0600002d, 52d2818403868f1b06000033, 52df887498d023950500000c,
+52e0c9a298d0239505000010, 52e203bc98d0239505000011, 52e62bae98d0239505000015,
+52e6c92598d0239505000019, 52e7bbf698d023950500001d, 52ea605098d0239505000028,
+52ece29f98d023950500002c, 52ecf2dd98d023950500002e, 52ef7754c8da898910000014,
+52f112bb2059c6d71c000002, 52f65f372059c6d71c000027, 52f77f752059c6d71c00002b,
+52f77f892059c6d71c00002c, 52f89ee42059c6d71c00004d, 52f89f4f2059c6d71c00004e,
335
+ 52f89fba2059c6d71c00004f, 52f89fc62059c6d71c000050, 52f89fd32059c6d71c000051,
336
+ 52fa6ac72059c6d71c000055, 52fa73c62059c6d71c000058, 52fa73e82059c6d71c000059,
337
+ 52fa74252059c6d71c00005b, 52fc8b772059c6d71c00006e, 52fc94572059c6d71c000070,
338
+ 52fc94ae2059c6d71c000073, 52fc94db2059c6d71c000074, 52fe52702059c6d71c000078,
339
+ 52fe58f82059c6d71c00007a, 530cefaaad0bf1360c000008, 530cefaaad0bf1360c000010,
340
+ 533ba218fd9a95ea0d000007, 534bb147aeec6fbd07000014, 55167dec46478f2f2c00000a,
341
+ 56c04412ef6e39474100001b, 56c1f01eef6e394741000046, 56c81fd15795f9a73e00000c,
342
+ 587d016ed673c3eb14000002
343
+ """
344
+ 
+ _BIOASQ_5B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 5,
+ which will take place during 2017. There is one file containing the data:
+ - BioASQ-trainingDataset5b.json
+ 
+ The file contains the data of the first four editions of the challenge: 1799
+ questions with their relevant documents, snippets, concepts and rdf triples,
+ exact and ideal answers.
+ """
+ 
+ _BIOASQ_4B_DESCRIPTION = """\
+ The data are intended to be used as training and development data for BioASQ 4,
+ which will take place during 2016. There is one file containing the data:
+ - BioASQ-trainingDataset4b.json
+ 
+ The file contains the data of the first three editions of the challenge: 1307
+ questions with their relevant documents, snippets, concepts and rdf triples,
+ exact and ideal answers from the first two editions and 497 questions with
+ similar annotations from the third edition of the challenge.
+ """
+ 
+ _BIOASQ_3B_DESCRIPTION = """No README provided."""
+ 
+ _BIOASQ_2B_DESCRIPTION = """No README provided."""
+ 
+ _BIOASQ_BLURB_DESCRIPTION = """The BioASQ corpus contains multiple question
+ answering tasks annotated by biomedical experts, including yes/no, factoid, list,
+ and summary questions. Pertaining to our objective of comparing neural language
+ models, we focus on the yes/no questions (Task 7b), and leave the inclusion
+ of other tasks to future work. Each question is paired with a reference text
+ containing multiple sentences from a PubMed abstract and a yes/no answer. We use
+ the official train/dev/test split of 670/75/140 questions.
+ 
+ See 'Domain-Specific Language Model Pretraining for Biomedical
+ Natural Language Processing' """
+ 
381
+ _DESCRIPTION = {
+     "bioasq_11b": _BIOASQ_11B_DESCRIPTION,
+     "bioasq_10b": _BIOASQ_10B_DESCRIPTION,
+     "bioasq_9b": _BIOASQ_9B_DESCRIPTION,
+     "bioasq_8b": _BIOASQ_8B_DESCRIPTION,
+     "bioasq_7b": _BIOASQ_7B_DESCRIPTION,
+     "bioasq_6b": _BIOASQ_6B_DESCRIPTION,
+     "bioasq_5b": _BIOASQ_5B_DESCRIPTION,
+     "bioasq_4b": _BIOASQ_4B_DESCRIPTION,
+     "bioasq_3b": _BIOASQ_3B_DESCRIPTION,
+     "bioasq_2b": _BIOASQ_2B_DESCRIPTION,
+     "bioasq_blurb": _BIOASQ_BLURB_DESCRIPTION,
+ }
+ 
+ _HOMEPAGE = "http://participants-area.bioasq.org/datasets/"
+ 
+ # Data access requires registering with BioASQ.
+ # See http://participants-area.bioasq.org/accounts/register/
+ _LICENSE = "NLM_LICENSE"
+ 
+ _URLs = {
+     "bioasq_11b": ["BioASQ-training11b.zip", "Task11BGoldenEnriched.zip"],
+     "bioasq_10b": ["BioASQ-training10b.zip", "Task10BGoldenEnriched.zip"],
+     "bioasq_9b": ["BioASQ-training9b.zip", "Task9BGoldenEnriched.zip"],
+     "bioasq_8b": ["BioASQ-training8b.zip", "Task8BGoldenEnriched.zip"],
+     "bioasq_7b": ["BioASQ-training7b.zip", "Task7BGoldenEnriched.zip"],
+     "bioasq_6b": ["BioASQ-training6b.zip", "Task6BGoldenEnriched.zip"],
+     "bioasq_5b": ["BioASQ-training5b.zip", "Task5BGoldenEnriched.zip"],
+     "bioasq_4b": ["BioASQ-training4b.zip", "Task4BGoldenEnriched.zip"],
+     "bioasq_3b": ["BioASQ-trainingDataset3b.zip", "Task3BGoldenEnriched.zip"],
+     "bioasq_2b": ["BioASQ-trainingDataset2b.zip", "Task2BGoldenEnriched.zip"],
+     "bioasq_blurb": ["BioASQ-training7b.zip", "Task7BGoldenEnriched.zip"],
+ }
414
+ 
+ # BLURB train and dev contain all yesno questions from the official training split;
+ # test is all yesno questions from the official test split
+ _BLURB_SPLITS = {
+     "dev": {
+         "5313b049e3eabad021000013",
+         "553a8d78f321868558000003",
+         "5158a5b8d24251bc05000097",
+         "571e3d42bb137a4b0c000007",
+         "5175b97a8ed59a060a00002f",
+         "56c9e9d15795f9a73e00001d",
+         "56d19ffaab2fed4a47000001",
+         "518ccac0310faafe0800000b",
+         "56f12ca92ac5ed145900000e",
+         "51680a49298dcd4e51000062",
+         "5339ed7bd6d3ac6a34000060",
+         "516e5f33298dcd4e5100007e",
+         "5327139ad6d3ac6a3400000d",
+         "54e12ae3ae9738404b000004",
+         "5321b8579b2d7acc7e000008",
+         "514a4679d24251bc0500005b",
+         "54c12fd1f693c3b16b000001",
+         "52df887498d023950500000c",
+         "52f20d802059c6d71c00000a",
+         "532f0c4ed6d3ac6a3400002e",
+         "52b2f3b74003448f5500000c",
+         "52b2f1724003448f5500000b",
+         "515d9a42298dcd4e5100000d",
+         "5159b990d24251bc050000a3",
+         "54e12c30ae9738404b000005",
+         "553a6a9fbc4f83e82800001c",
+         "5509ec41c2af5d5b70000006",
+         "56cae40b5795f9a73e000022",
+         "51680b0e298dcd4e51000065",
+         "515df89e298dcd4e5100002f",
+         "54f49e56d0d681a040000004",
+         "571e3e2abb137a4b0c000008",
+         "515debe7298dcd4e51000026",
+         "56f6ab7009dd18d46b00000d",
+         "53302bced6d3ac6a34000039",
+         "5322de919b2d7acc7e000012",
+         "5709f212cf1c325851000020",
+         "5502abd1e9bde69634000008",
+         "516c220e298dcd4e51000071",
+         "5894597e7d9090f353000004",
+         "5895ec5e7d9090f353000015",
+         "58bbb8ae22d3005309000018",
+         "58bc58c302b8c60953000001",
+         "58c276bc02b8c60953000020",
+         "58c0825502b8c6095300001b",
+         "58ab1f6c9ef3c34033000002",
+         "58adbe999ef3c34033000005",
+         "58df3e408acda3452900002d",
+         "58dfec676fddd3e83e000006",
+         "58d8d0cc8acda34529000008",
+         "58b67fae22d3005309000009",
+         "58dbbbf08acda3452900001d",
+         "58dbba438acda3452900001c",
+         "58dbbdac8acda3452900001e",
+         "58dcbb8c8acda34529000021",
+         "5a468785966455904c00000d",
+         "5a70de5199e2c3af26000005",
+         "5a67a550b750ff4455000009",
+         "5a679875b750ff4455000004",
+         "5a7a44b4faa1ab7d2e000010",
+         "5a67ade5b750ff445500000c",
+         "5a8881118cb19eca6b000006",
+         "5a67b48cb750ff4455000010",
+         "5a679be1b750ff4455000005",
+         "5a7340962dc08e987e000017",
+         "5a737e233b9d13c70800000d",
+         "5a8dc57ffcd1d6a10c000025",
+         "5a6d186db750ff4455000031",
+         "5a70d43b99e2c3af26000003",
+         "5a70ec6899e2c3af2600000c",
+         "5a9ac4161d1251d03b000010",
+         "5a733d2a2dc08e987e000015",
+         "5a74acd80384be9551000006",
+         "5aa6800ad6d6b54f79000011",
+         "5a9d9ab94e03427e73000003",
+     }
+ }
496
+ 
+ _SUPPORTED_TASKS = [Tasks.QUESTION_ANSWERING]
+ _SOURCE_VERSION = "1.0.0"
+ _BIGBIO_VERSION = "1.0.0"
+ 
+ 
+ class BioasqTaskBDataset(datasets.GeneratorBasedBuilder):
+     """
+     BioASQ Task B On Biomedical Semantic QA.
+     Creates configs for BioASQ2 through BioASQ11.
+     """
+ 
+     DEFAULT_CONFIG_NAME = "bioasq_9b_source"
+     SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)
+     BIGBIO_VERSION = datasets.Version(_BIGBIO_VERSION)
+ 
+     # BioASQ2 through BioASQ11
+     BUILDER_CONFIGS = []
+     for version in range(2, 12):
+         BUILDER_CONFIGS.append(
+             BigBioConfig(
+                 name=f"bioasq_{version}b_source",
+                 version=SOURCE_VERSION,
+                 description=f"bioasq{version} Task B source schema",
+                 schema="source",
+                 subset_id=f"bioasq_{version}b",
+             )
+         )
+ 
+         BUILDER_CONFIGS.append(
+             BigBioConfig(
+                 name=f"bioasq_{version}b_bigbio_qa",
+                 version=BIGBIO_VERSION,
+                 description=f"bioasq{version} Task B in simplified BigBio schema",
+                 schema="bigbio_qa",
+                 subset_id=f"bioasq_{version}b",
+             )
+         )
+ 
+     # BLURB Benchmark config https://microsoft.github.io/BLURB/
+     BUILDER_CONFIGS.append(
+         BigBioConfig(
+             name="bioasq_blurb_bigbio_qa",
+             version=BIGBIO_VERSION,
+             description="BLURB benchmark in simplified BigBio schema",
+             schema="bigbio_qa",
+             subset_id="bioasq_blurb",
+         )
+     )
+ 
546
+     def _info(self):
+ 
+         # BioASQ Task B source schema
+         if self.config.schema == "source":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "type": datasets.Value("string"),
+                     "body": datasets.Value("string"),
+                     "documents": datasets.Sequence(datasets.Value("string")),
+                     "concepts": datasets.Sequence(datasets.Value("string")),
+                     "ideal_answer": datasets.Sequence(datasets.Value("string")),
+                     "exact_answer": datasets.Sequence(datasets.Value("string")),
+                     "triples": [
+                         {
+                             "p": datasets.Value("string"),
+                             "s": datasets.Value("string"),
+                             "o": datasets.Value("string"),
+                         }
+                     ],
+                     "snippets": [
+                         {
+                             "offsetInBeginSection": datasets.Value("int32"),
+                             "offsetInEndSection": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                             "beginSection": datasets.Value("string"),
+                             "endSection": datasets.Value("string"),
+                             "document": datasets.Value("string"),
+                         }
+                     ],
+                 }
+             )
+         # simplified schema for QA tasks
+         elif self.config.schema == "bigbio_qa":
+             features = qa_features
+ 
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION[self.config.subset_id],
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=str(_LICENSE),
+             citation=_CITATION,
+         )
590
+ 
+     def _dump_gold_json(self, data_dir):
+         """
+         BioASQ test data is split into multiple records {9B1_golden.json,...,9B5_golden.json}
+         We combine these files into a single test set file 9Bx_golden.json
+         """
+         # BLURB is based on version 7
+         version = (
+             re.search(r"bioasq_([0-9]+)b", self.config.subset_id).group(1)
+             if "blurb" not in self.config.name
+             else "7"
+         )
+         gold_fpath = os.path.join(
+             data_dir, f"Task{version}BGoldenEnriched/bx_golden.json"
+         )
+ 
+         if not os.path.exists(gold_fpath):
+             # combine all gold json files
+             filelist = glob.glob(os.path.join(data_dir, "*/*.json"))
+             data = {"questions": []}
+             for fname in sorted(filelist):
+                 with open(fname, "rt", encoding="utf-8") as file:
+                     data["questions"].extend(json.load(file)["questions"])
+             # dump gold to json
+             with open(gold_fpath, "wt", encoding="utf-8") as file:
+                 json.dump(data, file, indent=2)
+ 
+         return f"Task{version}BGoldenEnriched/bx_golden.json"
618
+ 
+     def _blurb_split_generator(self, train_dir, test_dir):
+         """
+         Create splits for BLURB Benchmark
+         """
+         gold_fpath = self._dump_gold_json(test_dir)
+ 
+         # create train/dev splits from yesno questions
+         train_fpath = os.path.join(train_dir, "blurb_bioasq_train.json")
+         dev_fpath = os.path.join(train_dir, "blurb_bioasq_dev.json")
+ 
+         blurb_splits = {
+             "train": {"questions": []},
+             "dev": {"questions": []},
+             "test": {"questions": []},
+         }
+ 
+         if not os.path.exists(train_fpath):
+             data_fpath = os.path.join(train_dir, "BioASQ-training7b/trainining7b.json")
+             with open(data_fpath, "rt", encoding="utf-8") as file:
+                 data = json.load(file)
+ 
+             for record in data["questions"]:
+                 if record["type"] != "yesno":
+                     continue
+                 if record["id"] in _BLURB_SPLITS["dev"]:
+                     blurb_splits["dev"]["questions"].append(record)
+                 else:
+                     blurb_splits["train"]["questions"].append(record)
+ 
+             with open(train_fpath, "wt", encoding="utf-8") as file:
+                 json.dump(blurb_splits["train"], file, indent=2)
+ 
+             with open(dev_fpath, "wt", encoding="utf-8") as file:
+                 json.dump(blurb_splits["dev"], file, indent=2)
+ 
+         # create test split from yesno questions
+         with open(os.path.join(test_dir, gold_fpath), "rt", encoding="utf-8") as file:
+             data = json.load(file)
+ 
+         for record in data["questions"]:
+             if record["type"] != "yesno":
+                 continue
+             blurb_splits["test"]["questions"].append(record)
+ 
+         test_fpath = os.path.join(test_dir, "blurb_bioasq_test.json")
+         with open(test_fpath, "wt", encoding="utf-8") as file:
+             json.dump(blurb_splits["test"], file, indent=2)
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": train_fpath,
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": dev_fpath,
+                     "split": "dev",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": test_fpath,
+                     "split": "test",
+                 },
+             ),
+         ]
690
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+ 
+         if self.config.data_dir is None:
+             raise ValueError(
+                 "This is a local dataset. Please pass the data_dir kwarg to load_dataset."
+             )
+ 
+         train_dir, test_dir = dl_manager.download_and_extract(
+             [
+                 os.path.join(self.config.data_dir, _url)
+                 for _url in _URLs[self.config.subset_id]
+             ]
+         )
+         # create gold dump and get path
+         gold_fpath = self._dump_gold_json(test_dir)
+ 
+         # older versions of bioasq have different folder formats
+         train_fpaths = {
+             "bioasq_2b": "BioASQ_2013_TaskB/BioASQ-trainingDataset2b.json",
+             "bioasq_3b": "BioASQ-trainingDataset3b.json",
+             "bioasq_4b": "BioASQ-training4b/BioASQ-trainingDataset4b.json",
+             "bioasq_5b": "BioASQ-training5b/BioASQ-trainingDataset5b.json",
+             "bioasq_6b": "BioASQ-training6b/BioASQ-trainingDataset6b.json",
+             "bioasq_7b": "BioASQ-training7b/trainining7b.json",
+             "bioasq_8b": "training8b.json",  # HACK - this zipfile strips the dirname
+             "bioasq_9b": "BioASQ-training9b/training9b.json",
+             "bioasq_10b": "training10b.json",
+             "bioasq_11b": "BioASQ-training11b/training11b.json",
+         }
+ 
+         # BLURB has custom train/dev/test splits based on Task 7B
+         if "blurb" in self.config.name:
+             return self._blurb_split_generator(train_dir, test_dir)
+ 
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": os.path.join(
+                         train_dir, train_fpaths[self.config.subset_id]
+                     ),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": os.path.join(test_dir, gold_fpath),
+                     "split": "test",
+                 },
+             ),
+         ]
744
+ 
+     def _get_exact_answer(self, record):
+         """The value exact_answer can be in different formats based on question type."""
+         if record["type"] == "yesno":
+             exact_answer = [record["exact_answer"]]
+         elif record["type"] == "summary":
+             exact_answer = []
+             # summary question types only have an ideal answer, so use that for bigbio
+             if self.config.schema == "bigbio_qa":
+                 exact_answer = (
+                     record["ideal_answer"]
+                     if isinstance(record["ideal_answer"], list)
+                     else [record["ideal_answer"]]
+                 )
+ 
+         elif record["type"] == "list":
+             exact_answer = record["exact_answer"]
+         elif record["type"] == "factoid":
+             # older versions of bioasq sometimes represent this as a string
+             exact_answer = (
+                 record["exact_answer"]
+                 if isinstance(record["exact_answer"], list)
+                 else [record["exact_answer"]]
+             )
+         return exact_answer
769
+ 
+     @staticmethod
+     def _normalize_yesno(yesno):
+         assert len(yesno) == 1, "There should be only one answer."
+         yesno = yesno[0]
+         # normalize answers like "Yes."
+         yesno = yesno.lower()
+         if yesno.startswith('yes'):
+             return ['yes']
+         elif yesno.startswith('no'):
+             return ['no']
+         else:
+             raise ValueError(f'Unrecognized yesno value: {yesno}')
782
+ 
+     def _generate_examples(self, filepath, split):
+         """Yields examples as (key, example) tuples."""
+ 
+         if self.config.schema == "source":
+             with open(filepath, encoding="utf-8") as file:
+                 data = json.load(file)
+                 for i, record in enumerate(data["questions"]):
+                     yield i, {
+                         "id": record["id"],
+                         "type": record["type"],
+                         "body": record["body"],
+                         "documents": record["documents"],
+                         "concepts": record["concepts"] if "concepts" in record else [],
+                         "triples": record["triples"] if "triples" in record else [],
+                         "ideal_answer": record["ideal_answer"]
+                         if isinstance(record["ideal_answer"], list)
+                         else [record["ideal_answer"]],
+                         "exact_answer": self._get_exact_answer(record),
+                         "snippets": record["snippets"] if "snippets" in record else [],
+                     }
+ 
+         elif self.config.schema == "bigbio_qa":
+             # NOTE: Years 2014-2016 (BioASQ2-BioASQ4) have duplicate records
+             cache = set()
+             with open(filepath, encoding="utf-8") as file:
+                 uid = 0
+                 data = json.load(file)
+                 for record in data["questions"]:
+                     # skip questions that do not have snippets
+                     if "snippets" not in record:
+                         continue
+ 
+                     choices = []
+                     answer = self._get_exact_answer(record)
+                     if record["type"] == 'yesno':
+                         choices = ['yes', 'no']
+                         answer = self._normalize_yesno(answer)
+ 
+                     for i, snippet in enumerate(record["snippets"]):
+                         key = f'{record["id"]}_{i}'
+                         # ignore duplicate records
+                         if key not in cache:
+                             cache.add(key)
+                             yield uid, {
+                                 "id": key,
+                                 "document_id": snippet["document"],
+                                 "question_id": record["id"],
+                                 "question": record["body"],
+                                 "type": record["type"],
+                                 "choices": choices,
+                                 "context": snippet["text"],
+                                 "answer": answer,
+                             }
+                             uid += 1
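The per-question-type answer normalization implemented above can be illustrated with a small standalone sketch. The function name and record shapes below are illustrative only (they mirror the loader's `_get_exact_answer` and `_normalize_yesno` helpers but take raw field values rather than full BioASQ records):

```python
def normalize_exact_answer(record):
    """Normalize a BioASQ answer to a flat list of strings, by question type."""
    qtype = record["type"]
    if qtype == "yesno":
        # answers like "Yes." or "NO" collapse to "yes"/"no"
        ans = record["exact_answer"].strip().lower()
        return ["yes"] if ans.startswith("yes") else ["no"]
    if qtype == "summary":
        # summary questions carry only an ideal answer; wrap bare strings
        ideal = record["ideal_answer"]
        return ideal if isinstance(ideal, list) else [ideal]
    if qtype == "factoid":
        # older BioASQ editions sometimes store this as a bare string
        ans = record["exact_answer"]
        return ans if isinstance(ans, list) else [ans]
    if qtype == "list":
        # already a list of synonym lists; pass through unchanged
        return record["exact_answer"]
    raise ValueError(f"unknown question type: {qtype}")
```

For example, a yes/no record with `exact_answer` `"Yes."` yields `["yes"]`, and a list record passes its list-of-synonym-lists through untouched.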
bioasq/task_b/training14b.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5669afbc0bbc8f50d54850edb4aa592ecff3e38b29b09ac6684bb52990b651b7
+ size 57109878
medqa/medqa_form1.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:671487a3b2381b4ba2e9004dcd283a9e72ef3a634efc50507bc119d298926d4b
+ size 45096588
medqa/medqa_form2.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a773cc63d8958faaf77a12a9682e9f492c7cb9635f7e85fbe2451abebb62c37
+ size 42356548
ChemDisGene/data/ctd_derived/dev_abstracts.txt → medqa/textbooks/Anatomy_Gray.txt RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d1c0a83e76e8460c450e6a3cbadc2b7f195283189c1139539ad32db323c4f60e
- size 4248350
+ oid sha256:e042a602ecdf286d32acf172c3d165323aaa823667278ad767fdc77aeaa5f23c
+ size 2286967
ChemDisGene/data/ctd_derived/test_abstracts.txt → medqa/textbooks/Biochemistry_Lippincott.txt RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ffcedc4d7995757db8de8428cf1d3f5240c25637c2a53b866332b989993861ee
- size 5640429
+ oid sha256:b26ba8ce3b1c6f220813aee6f936edf3c512259a6744307ae192b19e3705d799
+ size 1353650
medqa/textbooks/Cell_Biology_Alberts.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3beae75976c9d3ad98833fcc8e1bee72cc3c494f03fcf6cf9f6962bde03f2a19
+ size 4895912
ChemDisGene/data/ctd_derived/ctd_stats.txt → medqa/textbooks/First_Aid_Step1.txt RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e63a29247cc97807fb36f430b6af994a5424ac26e579436d2911c9e54c5f1d46
- size 6067
+ oid sha256:2ac10bab1d5e644025722043eb7d12a16a940b9ee42a9803b327370eed98e4ff
+ size 672786
medqa/textbooks/First_Aid_Step2.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76885005d4c6acbe1fa1b897e8795d93f1be6e1899a39e9f3afe95ce71b5bfbe
+ size 1038243
medqa/textbooks/Gynecology_Novak.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:195e08a97d67ba89fb3ab31ea8ef799a8c73e106dc4a35d08c413f74122aeed2
+ size 5647726
medqa/textbooks/Histology_Ross.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c3a02e2e5af95481fe6b2e8ba75ed1f24e054205db6965e5bb34de3e2e7b261
+ size 3054197
medqa/textbooks/Immunology_Janeway.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5a72e4bfff3ecd29a091ab4af5fd0cc641d66237d663e54c3c5b1a9cbea956b
+ size 3329538
medqa/textbooks/InternalMed_Harrison.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f56e1b224a416e61ff03790f72e74aae742e94661d863d3ee25b015438c34697
+ size 22375808
medqa/textbooks/Neurology_Adams.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf5a3e6087590ab5bfae4231c08740857d9f93cca3fe9ce367e9717a7c5fc460
+ size 8386547
medqa/textbooks/Obstentrics_Williams.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d9279984b772fb91afa55232a9c060f3c21581867c91ebeeda72f18f9a95a24
+ size 6585140
medqa/textbooks/Pathology_Robbins.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3379a5cc6a79d12f9a91a26b026b3ae362445c6dfdb780416ecf6ff64be39a39
+ size 3810423
ChemDisGene/data/curated/drugprot_pmids.txt → medqa/textbooks/Pathoma_Husain.txt RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:42f2983b41dcd71bf2a3564aa1adb8962ddbd501d88832bc3437df78feb22ca6
- size 2263
+ oid sha256:7be3901e0cb887861ee5f68967c0cc333fdec23dbd4fd078e5e6e153fcbc9479
+ size 399974
medqa/textbooks/Pediatrics_Nelson.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a00ea67ce8504502233106f4fa4b94cddcf95fed6d001d984330959accc8e47d
+ size 3006793
medqa/textbooks/Pharmacology_Katzung.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cae7fb85cdd0c12e53ef7b3918c03b8f9c6607eaac5be9f1d7aaba2f425dd4d4
+ size 5141125
medqa/textbooks/Physiology_Levy.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c0fb6ef6922e7cfee7ea3c9f1917c49c59776b4ef67c89da6c10c668d0d87a7
+ size 3067415
medqa/textbooks/Psichiatry_DSM-5.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e5c6f34fcae556168280292a9cab09927ebbad0a767c515b12d8007c153f6c9
+ size 2905014
medqa/textbooks/Surgery_Schwartz.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76218cdbe925d26d4c08d86ce815c2ad83b4a8c70eaef620d75d8c82c031f3bb
+ size 11478017
ChemDisGene/data/ctd_derived/train_abstracts.txt → pubmedqa/pubmedqa.csv RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c181ac826f85285d852f9b147b6c1581406e8d85217e654ff39942a03c61dc5c
- size 181657176
+ oid sha256:a97fba3bfe918cad5b5012c3bc78a76414880dc8fd551a1855c7357d235bc6d6
+ size 580446152