imene-kolli committed
Commit d7f9564 · verified · 1 Parent(s): 4577c12

Update README.md

Files changed (1): README.md (+306, -297)

---
license: mit
task_categories:
- question-answering
- table-question-answering
language:
- en
tags:
- research
- climate
- finance
---

# pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs

[pdfQA](https://arxiv.org/abs/2601.02285) is a structured benchmark collection for document-level question answering and PDF understanding research.

The dataset is organized to support:

* Raw document processing research
* Structured extraction pipelines
* Retrieval-augmented QA
* End-to-end document reasoning systems

It preserves original documents alongside structured derivatives to enable reproducible evaluation across preprocessing strategies.

---

## Dataset Structure

The repository follows a strict hierarchical layout:

```
<category>/<type>/<dataset>/...
```

### Categories

* `real-pdfQA/` — Real-world benchmark datasets
* `syn-pdfQA/` — Synthetic benchmark datasets

### Types

Each dataset contains three file-type folders:

* `01.1_Input_Files_Non_PDF/` — Original source formats (e.g., xlsx, epub, htm, tex, txt)
* `01.2_Input_Files_PDF/` — Original PDF files
* `01.3_Input_Files_CSV/` — Structured annotations / tabular representations

### Datasets

Each type folder contains subfolders for individual datasets. Supported datasets include:

#### Real-world Datasets

- `ClimateFinanceBench/`
- `ClimRetrieve/`
- `FeTaQA/`
- `FinanceBench/`
- `FinQA/`
- `NaturalQuestions/`
- `PaperTab/`
- `PaperText/`
- `Tat-QA/`

#### Synthetic Datasets

- `books/`
- `financial_reports/`
- `sustainability_disclosures/`
- `research_articles/`

### Example

```
syn-pdfQA/
  01.2_Input_Files_PDF/
    books/
      file1.pdf
  01.3_Input_Files_CSV/
    books/
      file1.csv
  01.1_Input_Files_Non_PDF/
    books/
      file1.xlsx
```

This design allows:

* Access to original PDFs
* Access to structured evaluation data
* Access to original source formats for preprocessing research
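
Because the three type folders mirror one another, the CSV and raw-source counterparts of any PDF can be derived from its path alone. A minimal sketch of this pairing (pure path manipulation; `counterparts` and its `source_suffix` parameter are illustrative helpers, not part of the dataset tooling — the raw-source extension varies by dataset):

```python
from pathlib import Path

def counterparts(pdf_path: str, source_suffix: str = ".xlsx") -> dict:
    """Map a file under 01.2_Input_Files_PDF/ to its siblings in the CSV
    and Non_PDF type folders, per the <category>/<type>/<dataset>/ layout."""
    p = Path(pdf_path)
    category = p.parts[0]          # e.g. "syn-pdfQA"
    rest = Path(*p.parts[2:])      # "<dataset>/<file>.pdf"
    return {
        "pdf": pdf_path,
        "csv": str(Path(category, "01.3_Input_Files_CSV", rest.with_suffix(".csv"))),
        "source": str(Path(category, "01.1_Input_Files_Non_PDF", rest.with_suffix(source_suffix))),
    }

paths = counterparts("syn-pdfQA/01.2_Input_Files_PDF/books/file1.pdf")
print(paths["csv"])     # syn-pdfQA/01.3_Input_Files_CSV/books/file1.csv
print(paths["source"])  # syn-pdfQA/01.1_Input_Files_Non_PDF/books/file1.xlsx
```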

---

## Intended Use

This dataset is intended for:

* PDF parsing and layout understanding
* Financial and sustainability document QA
* Retrieval-augmented generation (RAG)
* Multi-modal document pipelines
* Table extraction and structured reasoning
* Robustness evaluation across preprocessing pipelines

It is particularly useful for comparing:

* Direct PDF-based reasoning
* OCR pipelines
* Structured table extraction
* Raw-source ingestion approaches

---

## Access Patterns

The dataset supports multiple access patterns depending on research needs.

All official download scripts are available in the GitHub repository:

👉 https://github.com/tobischimanski/pdfQA

Scripts are provided in both:

- **Bash (git + Git LFS)** — recommended for large-scale downloads
- **Python (huggingface_hub API)** — recommended for programmatic workflows

---

### 1️⃣ Download Everything

Download the entire repository (all categories, types, and datasets).

#### Bash (git + LFS)

```bash
./tools/download_using_bash/download_all.sh
```

[Bash script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_bash/download_all.sh)

#### Python (HF API)

```bash
python tools/download_using_python/download_all.py
```

[Python script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_python/download_all.py)

---

### 2️⃣ Download by Category

Download a single category only:

- `real-pdfQA/`
- `syn-pdfQA/`

#### Example

```bash
./tools/download_using_bash/download_category.sh syn-pdfQA
```

[Bash script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_bash/download_category.sh)

[Python script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_python/download_category.py)

---

### 3️⃣ Download by Dataset (All Types)

Download a single dataset across all three file-type folders:

- `01.1_Input_Files_Non_PDF/`
- `01.2_Input_Files_PDF/`
- `01.3_Input_Files_CSV/`

#### Example

```bash
./tools/download_using_bash/download_dataset.sh syn-pdfQA books
```

[Bash script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_bash/download_dataset.sh)

[Python script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_python/download_dataset.py)

---

### 4️⃣ Download Arbitrary Folders

Download one or more arbitrary folder paths.

#### Example

```bash
./tools/download_using_bash/download_folders.sh \
  "syn-pdfQA/01.2_Input_Files_PDF/books" \
  "syn-pdfQA/01.3_Input_Files_CSV/books"
```

[Bash script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_bash/download_folders.sh)

[Python script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_python/download_folders.py)
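
For programmatic workflows, the same folder subsets can also be fetched directly with `huggingface_hub`'s `snapshot_download` and its `allow_patterns` filter. A sketch under that assumption — the helper below is illustrative, not one of the official scripts; the `repo_id` matches the one used in the direct-API example:

```python
def folder_patterns(folders):
    """Glob patterns matching every file under each repo folder."""
    return [f.rstrip("/") + "/**" for f in folders]

def download_folders(repo_id, folders):
    """Fetch only the given folders from the Hub; returns the local
    snapshot directory. Import is deferred until a download is requested."""
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        allow_patterns=folder_patterns(folders),
    )

# Usage (same subset as the Bash example above):
# download_folders("pdfqa/pdfQA-Benchmark", [
#     "syn-pdfQA/01.2_Input_Files_PDF/books",
#     "syn-pdfQA/01.3_Input_Files_CSV/books",
# ])
```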

---

### 5️⃣ Download Specific Files

Download one or more individual files.

#### Example (Bash)

```bash
./tools/download_using_bash/download_files.sh \
  "syn-pdfQA/01.2_Input_Files_PDF/books/file1.pdf"
```

[Bash script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_bash/download_files.sh)

[Python script](https://github.com/tobischimanski/pdfQA/blob/main/tools/download_using_python/download_files.py)

---

### 6️⃣ Direct API Access (Single File)

Files can also be downloaded directly using the Hugging Face API. Example:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="pdfqa/pdfQA-Benchmark",
    repo_type="dataset",
    filename="syn-pdfQA/01.2_Input_Files_PDF/FinQA/AAL_2010.pdf"
)
```

---

## Recommended Usage

- For **large-scale research experiments** → use **Bash + git LFS** (fully resumable).
- For **automated pipelines** → use **Python scripts**.
- For **fine-grained subset control** → use folder- or file-based scripts.

---

## Data Modalities

Depending on the dataset, documents include:

* Financial reports
* Sustainability disclosures
* Structured financial QA corpora
* Table-heavy documents
* Mixed structured/unstructured content

Formats may include: `PDF`, `CSV`, `XLS/XLSX`, `EPUB`, `HTML/HTM`, `TEX`, `TXT`

---

## Research Motivation

Many document QA benchmarks release only structured data or only PDFs. pdfQA preserves **all representations**:

* Original document
* Structured derivative
* Raw source format (if available)

This enables:

* Studying preprocessing impact
* Comparing parsing strategies
* Evaluating robustness to format variation
* End-to-end pipeline benchmarking

---

## Citation

If you use **pdfQA**, please cite:

```bibtex
@misc{schimanski2026pdfqa,
  title={pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs},
  author={Tobias Schimanski and Imene Kolli and Yu Fan and Ario Saeid Vaghefi and Jingwei Ni and Elliott Ash and Markus Leippold},
  year={2026},
  eprint={2601.02285},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.02285},
}
```

---

## Contact

Visit [https://github.com/tobischimanski/pdfQA](https://github.com/tobischimanski/pdfQA) for access and updates.