# PubMed Abstracts Subset (10%)

This dataset contains a 10% probabilistic sample of the ~24 million PubMed abstracts available as public metadata from the [National Library of Medicine (NLM)](https://pubmed.ncbi.nlm.nih.gov/). The dataset was originally compiled and released as part of the [MedRAG benchmark](https://arxiv.org/abs/2402.13178), and has been reformatted and republished as part of the study:

**Stuhlmann et al. (2025)**
*Efficient and Reproducible Biomedical Question Answering using Retrieval‑Augmented Generation*
→ [arXiv:2505.07917](https://arxiv.org/abs/2505.07917)

---

## 📄 Description

Each entry in the dataset includes:
- `id`: Local unique identifier
- `title`: Title of the publication
- `abstract`: Abstract text
- `PMID`: PubMed identifier

The dataset is split into 24 `.jsonl` files, each containing approximately 100,000 entries, for a total of ~2.39 million samples.

---

## 🔍 How to Access

### ▶️ Option 1: Load using Hugging Face `datasets` (streaming)

```python
from datasets import load_dataset

# Select the split so iteration yields records, not split names.
dataset = load_dataset("slinusc/PubMedAbstractsSubset", split="train", streaming=True)

for doc in dataset:
    print(doc["title"], doc["abstract"])
    break
```

> Streaming is recommended for large-scale processing and avoids loading the entire dataset into memory.
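When inspecting the stream, it is handy to take a bounded number of records rather than looping manually. A minimal sketch of that pattern with `itertools.islice` (shown on a stand-in generator, since the real stream requires network access; the field names follow the schema above):

```python
from itertools import islice

def head(records, n=3):
    """Collect the first n records from a (possibly unbounded) iterable."""
    return list(islice(records, n))

# Stand-in for the streamed split; real records also carry id and abstract.
stream = ({"PMID": i, "title": f"title {i}"} for i in range(1_000_000))

first_two = head(stream, 2)
print([r["PMID"] for r in first_two])  # [0, 1]
```

In practice you would pass the `IterableDataset` returned by `load_dataset(..., streaming=True)` in place of the stand-in generator.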

---

### 💾 Option 2: Clone using Git and Git LFS

```bash
git lfs install
git clone https://huggingface.co/datasets/slinusc/PubMedAbstractsSubset
cd PubMedAbstractsSubset
```

> After cloning, run `git lfs pull` if needed to retrieve the full data files.

---

## 📦 Format

Each file is in `.jsonl` (JSON Lines) format, where each line is a valid JSON object:

```json
{
  "id": "pubmed23n1166_0",
  "title": "...",
  "abstract": "...",
  "PMID": 36464820
}
```
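A cloned shard can also be read without the `datasets` library. A minimal sketch using only the Python standard library (the in-memory sample below stands in for an opened shard file, e.g. `open("pubmed23n1166.jsonl", encoding="utf-8")`):

```python
import io
import json

def read_jsonl(lines):
    """Yield one record dict per non-empty line of a JSON Lines file object."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Stand-in for an opened .jsonl shard, in the shape documented above.
sample = io.StringIO(
    '{"id": "pubmed23n1166_0", "title": "t", "abstract": "a", "PMID": 36464820}\n'
)

for record in read_jsonl(sample):
    print(record["PMID"])  # 36464820
```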

---

## 📚 Source and Licensing

This dataset is derived from public-domain PubMed metadata (titles and abstracts), redistributed in accordance with [NLM data usage policies](https://www.nlm.nih.gov/databases/download/data_distrib_main.html).

- Reformatted and used in:
  **Stuhlmann et al. (2025)**, *Efficient and Reproducible Biomedical QA using RAG*, [arXiv:2505.07917](https://arxiv.org/abs/2505.07917)

---

## ✨ Citation

If you use this dataset, please cite:

```bibtex
@article{stuhlmann2025efficient,
  title={Efficient and Reproducible Biomedical Question Answering using Retrieval-Augmented Generation},
  author={Stuhlmann, Linus and Saxer, Michael Alexander and Fürst, Jonathan},
  journal={arXiv preprint arXiv:2505.07917},
  year={2025}
}
```

---

## 🏷️ Version

- `v1.0` – Initial release (10% sample, 2.39M entries, 24 JSONL files)

---

## 📬 Contact

Maintained by [@slinusc](https://huggingface.co/slinusc).
For questions or issues, please open a discussion or pull request on the Hugging Face dataset page.