jmhb committed on
Commit 13a62ca · verified · 1 parent: 14b3ee7

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +38 -188
README.md CHANGED
@@ -46,7 +46,7 @@ This dataset is derived from the **BioASQ Challenge** data. BioASQ is a series o
46
 
47
  ### Citations
48
 
49
- If you use this dataset, please cite the following papers:
50
 
51
  **Primary BioASQ Paper**:
52
  ```bibtex
@@ -62,25 +62,16 @@ If you use this dataset, please cite the following papers:
62
  }
63
  ```
64
 
65
- **If using this organized version**:
66
- ```bibtex
67
- @misc{bioasq_all_types,
68
- title={BioASQ All Question Types Dataset},
69
- author={[Your Name]},
70
- year={2026},
71
- note={Reorganized version of BioASQ challenge data with question types as splits},
72
- url={https://huggingface.co/datasets/jmhb/BioASQ}
73
- }
74
- ```
75
-
76
  **PaperSearchQA** (if used in context of this project):
77
  ```bibtex
78
- @inproceedings{papersearchqa2026,
79
- title={PaperSearchQA: A Large-Scale Dataset for Question Answering over Biomedical Literature},
80
- author={[Authors to be added]},
81
- booktitle={Proceedings of the 2026 Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
82
- year={2026},
83
- url={https://jmhb0.github.io/PaperSearchQA}
84
  }
85
  ```
86
 
@@ -99,196 +90,55 @@ For full license terms, see: https://bioasq.org/participate
99
 
100
  ### This Dataset
101
 
102
- This reorganized version is provided **AS-IS** for research purposes and is subject to the original BioASQ terms. The reorganization (splitting by question type) is released under **CC BY 4.0**, but the underlying data remains under BioASQ terms.
103
 
104
  ## Dataset Structure
105
 
106
- ### Splits
107
 
108
- The dataset has **four splits**, one for each question type:
109
 
110
- | Split | Samples | Description |
111
- |-------|---------|-------------|
112
- | `factoid` | 1,609 | Questions with short factual answers (e.g., "What is the function of protein X?") |
113
- | `yesno` | 1,464 | Yes/No questions (e.g., "Is aspirin effective for headaches?") |
114
- | `summary` | 1,283 | Questions requiring summarization (e.g., "Describe the role of...") |
115
- | `list` | 1,048 | Questions with list-based answers (e.g., "List all known variants of...") |
116
 
117
- ### Fields
118
 
119
- Each sample contains the following fields:
120
 
121
- ```python
122
- {
123
- 'id': str, # Unique question ID
124
- 'type': str, # Question type ('factoid', 'yesno', 'summary', or 'list')
125
- 'question': str, # The question text
126
- 'answer': list or str, # Answer(s) - format varies by type
127
- 'ideal_answer': str or list, # Detailed/ideal answer
128
- 'documents': list, # Relevant PubMed document IDs (URLs)
129
- 'snippets': list, # Relevant text snippets from documents
130
- 'concepts': list, # Medical concepts mentioned
131
- 'triples': list, # Knowledge graph triples (if available)
132
- 'asq_challenge': str, # BioASQ challenge identifier
133
- 'folder_name': str, # Source folder
134
- }
135
- ```
136
-
137
- ### Answer Formats by Type
138
-
139
- - **factoid**: `answer` is a list of short text answers
140
- - **yesno**: `answer` is "yes" or "no"
141
- - **summary**: `answer` is typically empty; use `ideal_answer` for summary text
142
- - **list**: `answer` is a list of items
143
 
144
  ## Usage
145
 
146
- ### Load All Splits
147
-
148
  ```python
149
  from datasets import load_dataset
150
 
151
- # Load entire dataset
152
- dataset = load_dataset("jmhb/BioASQ_all_types")
153
-
154
- print(dataset.keys())
155
- # dict_keys(['factoid', 'yesno', 'summary', 'list'])
156
-
157
- # Access specific split
158
- factoid_questions = dataset['factoid']
159
- print(f"Factoid questions: {len(factoid_questions)}")
160
- ```
161
 
162
- ### Load Specific Split
163
-
164
- ```python
165
  # Load only factoid questions
166
- factoid_data = load_dataset("jmhb/BioASQ_all_types", split="factoid")
167
-
168
- # Example question
169
- sample = factoid_data[0]
170
- print(f"Question: {sample['question']}")
171
- print(f"Answer: {sample['answer']}")
172
- print(f"Documents: {sample['documents'][:2]}...")
173
- ```
174
-
175
- ### Filter by Type
176
-
177
- ```python
178
- # Get all yes/no questions
179
- yesno_data = load_dataset("jmhb/BioASQ_all_types", split="yesno")
180
-
181
- # Count yes vs no answers
182
- yes_count = sum(1 for ex in yesno_data if ex['answer'] == 'yes')
183
- no_count = sum(1 for ex in yesno_data if ex['answer'] == 'no')
184
- print(f"Yes: {yes_count}, No: {no_count}")
185
- ```
186
-
187
- ### Iterate Over All Types
188
-
189
- ```python
190
- dataset = load_dataset("jmhb/BioASQ_all_types")
191
-
192
- for question_type, split_data in dataset.items():
193
- print(f"\n{question_type.upper()} Questions:")
194
- print(f" Total: {len(split_data)}")
195
-
196
- # Show first example
197
- example = split_data[0]
198
- print(f" Sample: {example['question'][:100]}...")
199
- ```
200
-
201
- ## Data Statistics
202
-
203
- ### Overall Statistics
204
-
205
- - **Total Questions**: 5,404
206
- - **Question Types**: 4 (factoid, yesno, summary, list)
207
- - **Source**: BioASQ challenges 1-9
208
- - **Domain**: Biomedical and life sciences
209
- - **Language**: English
210
-
211
- ### Question Type Distribution
212
 
213
  ```
214
- factoid: 29.8% (1,609 questions)
215
- yesno: 27.1% (1,464 questions)
216
- summary: 23.7% (1,283 questions)
217
- list: 19.4% (1,048 questions)
218
- ```
219
-
220
- ### Sample Questions by Type
221
-
222
- **Factoid**:
223
- - "What is the genetic basis of Huntington's disease?"
224
- - "Which protein is encoded by the BRCA1 gene?"
225
-
226
- **Yes/No**:
227
- - "Is the protein Papilin secreted?"
228
- - "Does metformin interfere with vitamin B12 absorption?"
229
-
230
- **Summary**:
231
- - "Describe the role of the immune system in cancer development."
232
- - "What is known about the association between coffee consumption and health?"
233
-
234
- **List**:
235
- - "List symptoms of Alzheimer's disease."
236
- - "Which genes are associated with autosomal dominant Alzheimer's disease?"
237
 
238
- ## Limitations and Considerations
239
 
240
- 1. **Question Distribution**: Not uniformly distributed across types
241
- 2. **Answer Variability**: Answer formats vary significantly by type
242
- 3. **Domain Specificity**: Highly specialized biomedical knowledge required
243
- 4. **Evaluation Complexity**: Different metrics needed for different question types
244
- 5. **Document Access**: Referenced PubMed documents may require separate retrieval
245
- 6. **Knowledge Cutoff**: Questions are based on medical knowledge available up to the challenge date
246
-
247
- ## Evaluation
248
-
249
- Different question types require different evaluation metrics:
250
-
251
- - **Factoid**: Exact Match (EM), F1 score
252
- - **Yes/No**: Accuracy, F1 score
253
- - **Summary**: ROUGE, BERTScore, human evaluation
254
- - **List**: F1 score, Partial Match
255
-
256
- See the [BioASQ evaluation tools](http://participants-area.bioasq.org/general_information/Task6b/) for official evaluation scripts.
257
-
258
- ## Related Resources
259
-
260
- ### BioASQ
261
- - **Website**: https://bioasq.org/
262
- - **Participate**: https://bioasq.org/participate
263
- - **Papers**: https://bioasq.org/publications
264
-
265
- ### PubMed Corpus
266
- - **Full PubMed Corpus**: https://huggingface.co/datasets/jmhb/pubmed_bioasq_2022
267
- - **NLM PubMed**: https://pubmed.ncbi.nlm.nih.gov/
268
-
269
- ### PaperSearchQA Project
270
- - **Website**: https://jmhb0.github.io/PaperSearchQA
271
- - **GitHub**: https://github.com/jmhb0/PaperSearchQA
272
- - **Main Dataset**: https://huggingface.co/datasets/jmhb/PaperSearchQA
273
- - **Collection**: https://huggingface.co/collections/jmhb/papersearchqa
274
-
275
- ## Contact and Support
276
-
277
- For questions about:
278
- - **Original BioASQ data**: Contact BioASQ organizers at https://bioasq.org/contact
279
- - **This dataset organization**: Open an issue on the PaperSearchQA GitHub
280
- - **PaperSearchQA project**: Visit https://jmhb0.github.io/PaperSearchQA
281
 
282
  ## Acknowledgments
283
 
284
- - **BioASQ Organizers**: For creating and maintaining this valuable resource
285
- - **PubMed/NCBI**: For the underlying biomedical literature
286
- - **Challenge Participants**: For advancing biomedical QA research
287
- - **Funding Agencies**: Supporting the BioASQ challenges
288
-
289
- ## Version History
290
-
291
- - **v1.0** (2026-01): Initial release with all question types as separate splits
292
- - 5,404 questions across 4 types
293
- - Organized from BioASQ challenges 1-9
294
- - Split by question type for convenient access
 
46
 
47
  ### Citations
48
 
49
+ If you use this dataset, please cite:
50
 
51
  **Primary BioASQ Paper**:
52
  ```bibtex
 
62
  }
63
  ```
64
 
65
  **PaperSearchQA** (if used in context of this project):
66
  ```bibtex
67
+ @misc{burgess2026papersearchqalearningsearchreason,
68
+ title={PaperSearchQA: Learning to Search and Reason over Scientific Papers with RLVR},
69
+ author={James Burgess and Jan N. Hansen and Duo Peng and Yuhui Zhang and Alejandro Lozano and Min Woo Sun and Emma Lundberg and Serena Yeung-Levy},
70
+ year={2026},
71
+ eprint={2601.18207},
72
+ archivePrefix={arXiv},
73
+ primaryClass={cs.LG},
74
+ url={https://arxiv.org/abs/2601.18207},
75
  }
76
  ```
77
 
 
90
 
91
  ### This Dataset
92
 
93
+ This reorganized version (with question types as splits) is provided for research convenience. All terms and conditions of the original BioASQ license apply.
94
 
95
  ## Dataset Structure
96
 
97
+ ### Data Fields
98
 
99
+ Each sample contains:
100
 
101
+ - `id`: Unique question identifier
102
+ - `body`: The question text
103
+ - `type`: Question type (factoid, yesno, summary, or list)
104
+ - `ideal_answer`: Reference answer(s)
105
+ - `exact_answer`: Structured answer (for factoid/list questions)
106
+ - `documents`: URLs of relevant PubMed documents
107
+ - `snippets`: Relevant text snippets from the documents
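The field list above is enough to write a small answer-selection helper. A minimal sketch, assuming the field semantics described here (structured `exact_answer` for factoid/list questions, `ideal_answer` otherwise); the `extract_answer` name and the toy sample are illustrative, not part of the dataset or the `datasets` API:

```python
def extract_answer(sample: dict):
    """Prefer the structured `exact_answer` (factoid/list), else `ideal_answer`."""
    # Per the field list: `exact_answer` is populated for factoid/list
    # questions; yesno/summary questions rely on `ideal_answer`.
    if sample["type"] in ("factoid", "list") and sample.get("exact_answer"):
        return sample["exact_answer"]
    return sample["ideal_answer"]

# Toy sample mirroring the documented fields (values are made up)
toy = {
    "id": "q1",
    "body": "Is the protein Papilin secreted?",
    "type": "yesno",
    "ideal_answer": ["Yes, Papilin is a secreted protein."],
    "exact_answer": None,
    "documents": [],
    "snippets": [],
}
print(extract_answer(toy))  # ['Yes, Papilin is a secreted protein.']
```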
108
 
109
+ ### Data Splits
110
 
111
+ The dataset is organized into 4 splits by question type:
112
 
113
+ | Split | Examples |
114
+ |-------|----------|
115
+ | factoid | 1,609 |
116
+ | yesno | 1,464 |
117
+ | summary | 1,283 |
118
+ | list | 1,048 |
119
+ | **Total** | **5,404** |
120
 
121
  ## Usage
122
 
123
  ```python
124
  from datasets import load_dataset
125
 
126
+ # Load all question types
127
+ dataset = load_dataset("jmhb/BioASQ")
128
 
129
  # Load only factoid questions
130
+ factoid = load_dataset("jmhb/BioASQ", split="factoid")
131
 
132
+ # Load multiple question types (returns one dataset per requested split)
133
+ dataset = load_dataset("jmhb/BioASQ", split=["factoid", "yesno"])
134
  ```
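The per-type counts in the splits table can be recomputed by tallying the `type` field of loaded rows. A minimal sketch on toy records so it runs offline; with the real dataset, iterate the object returned by `load_dataset` instead:

```python
from collections import Counter

# Toy records standing in for loaded dataset rows (values are made up)
records = [
    {"type": "factoid"},
    {"type": "yesno"},
    {"type": "yesno"},
    {"type": "list"},
]
counts = Counter(r["type"] for r in records)
print(counts)  # Counter({'yesno': 2, 'factoid': 1, 'list': 1})
```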
135
 
136
+ ## Additional Resources
137
 
138
+ - **BioASQ Official Website**: https://bioasq.org/
139
+ - **BioASQ Participants Area**: https://participants-area.bioasq.org/
140
+ - **PaperSearchQA Project**: https://jmhb0.github.io/PaperSearchQA/
141
 
142
  ## Acknowledgments
143
 
144
+ This dataset is made available through the BioASQ Challenge organizers. We thank them for creating and maintaining this valuable resource for the biomedical NLP community.