---
license: mit
task_categories:
  - question-answering
  - table-question-answering
language:
  - en
tags:
  - research
  - climate
  - finance
---

# pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs

pdfQA is a structured benchmark collection for document-level question answering and PDF understanding research.

The dataset is organized to support:

- Raw document processing research
- Structured extraction pipelines
- Retrieval-augmented QA
- End-to-end document reasoning systems

It preserves original documents alongside structured derivatives to enable reproducible evaluation across preprocessing strategies.


## Dataset Structure

The repository follows a strict hierarchical layout:

```
<category>/<type>/<dataset>/...
```

### Categories

- `real-pdfQA/` — Real-world benchmark datasets
- `syn-pdfQA/` — Synthetic benchmark datasets

### Types

Each dataset contains three file-type folders:

- `01.1_Input_Files_Non_PDF/` — Original source formats (e.g., xlsx, epub, htm, tex, txt)
- `01.2_Input_Files_PDF/` — Original PDF files
- `01.3_Input_Files_CSV/` — Structured annotations / tabular representations

### Datasets

Each type folder contains subfolders for individual datasets. Supported datasets include:

**Real-world datasets**

- `ClimateFinanceBench/`
- `ClimRetrieve/`
- `FeTaQA/`
- `FinanceBench/`
- `FinQA/`
- `NaturalQuestions/`
- `PaperTab/`
- `PaperText/`
- `Tat-QA/`

**Synthetic datasets**

- `books/`
- `financial_reports/`
- `sustainability_disclosures/`
- `research_articles/`

### Example

```
syn-pdfQA/
  01.1_Input_Files_Non_PDF/
    books/
      file1.xlsx
  01.2_Input_Files_PDF/
    books/
      file1.pdf
  01.3_Input_Files_CSV/
    books/
      file1.csv
```
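Because the layout is strict, locating a file's sibling representations is pure path arithmetic. A minimal sketch — the folder names follow the layout above, while the `sibling_paths` helper and the `stem` argument are illustrative, not part of the official tooling:

```python
# Folder names taken from the repository layout.
TYPE_DIRS = {
    "source": "01.1_Input_Files_Non_PDF",
    "pdf": "01.2_Input_Files_PDF",
    "csv": "01.3_Input_Files_CSV",
}

def sibling_paths(category: str, dataset: str, stem: str) -> dict:
    """Map a file stem to its expected path in each file-type folder.

    The original source extension varies (xlsx, epub, ...), so only the
    containing folder is returned for the non-PDF representation.
    """
    return {
        "pdf": f"{category}/{TYPE_DIRS['pdf']}/{dataset}/{stem}.pdf",
        "csv": f"{category}/{TYPE_DIRS['csv']}/{dataset}/{stem}.csv",
        "source": f"{category}/{TYPE_DIRS['source']}/{dataset}/",
    }
```

For example, `sibling_paths("syn-pdfQA", "books", "file1")` reproduces the three paths in the tree above.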

This design allows:

- Access to original PDFs
- Access to structured evaluation data
- Access to original source formats for preprocessing research

## Intended Use

This dataset is intended for:

- PDF parsing and layout understanding
- Financial and sustainability document QA
- Retrieval-augmented generation (RAG)
- Multi-modal document pipelines
- Table extraction and structured reasoning
- Robustness evaluation across preprocessing pipelines

It is particularly useful for comparing:

- Direct PDF-based reasoning
- OCR pipelines
- Structured table extraction
- Raw-source ingestion approaches

## Access Patterns

The dataset supports multiple access patterns depending on research needs.

All official download scripts are available in the GitHub repository:

👉 https://github.com/tobischimanski/pdfQA

Scripts are provided in both:

- **Bash** (git + Git LFS) — recommended for large-scale downloads
- **Python** (`huggingface_hub` API) — recommended for programmatic workflows

### 1️⃣ Download Everything

Download the entire repository (all categories, types, and datasets).

Bash (git + LFS):

```shell
./tools/download_using_bash/download_all.sh
```

Python (HF API):

```shell
python tools/download_using_python/download_all.py
```
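For programmatic use without the script, the full download can be approximated with `huggingface_hub`'s `snapshot_download`. This is a minimal sketch, not the official script's contents; the helper names are illustrative:

```python
def full_download_args(local_dir: str = "pdfQA-Benchmark") -> dict:
    """Arguments for mirroring the entire dataset repository locally."""
    return {
        "repo_id": "pdfqa/pdfQA-Benchmark",
        "repo_type": "dataset",
        "local_dir": local_dir,
    }

def download_all(local_dir: str = "pdfQA-Benchmark") -> str:
    # Imported lazily: this call hits the network, while the helper
    # above stays usable without huggingface_hub installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(**full_download_args(local_dir))
```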


### 2️⃣ Download by Category

Download only `real-pdfQA/` or `syn-pdfQA/`:

```shell
./tools/download_using_bash/download_category.sh syn-pdfQA
```

A Python equivalent is available under `tools/download_using_python/`.


### 3️⃣ Download by Dataset (All Types)

Download a single dataset across all three file-type folders (`01.1_Input_Files_Non_PDF/`, `01.2_Input_Files_PDF/`, and `01.3_Input_Files_CSV/`):

```shell
./tools/download_using_bash/download_dataset.sh syn-pdfQA books
```

A Python equivalent is available under `tools/download_using_python/`.
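With the `huggingface_hub` API, the same per-dataset download can be expressed via `allow_patterns`. A sketch under the layout described above; the helper names are illustrative:

```python
# The three file-type folders from the repository layout.
TYPE_FOLDERS = (
    "01.1_Input_Files_Non_PDF",
    "01.2_Input_Files_PDF",
    "01.3_Input_Files_CSV",
)

def dataset_patterns(category: str, dataset: str) -> list:
    """Glob patterns covering one dataset across all three type folders."""
    return [f"{category}/{folder}/{dataset}/*" for folder in TYPE_FOLDERS]

def download_dataset(category: str, dataset: str,
                     local_dir: str = "pdfQA-Benchmark") -> str:
    from huggingface_hub import snapshot_download  # lazy import; network call
    return snapshot_download(
        repo_id="pdfqa/pdfQA-Benchmark",
        repo_type="dataset",
        allow_patterns=dataset_patterns(category, dataset),
        local_dir=local_dir,
    )
```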


### 4️⃣ Download Arbitrary Folders

Download one or more arbitrary folder paths:

```shell
./tools/download_using_bash/download_folders.sh \
  "syn-pdfQA/01.2_Input_Files_PDF/books" \
  "syn-pdfQA/01.3_Input_Files_CSV/books"
```

A Python equivalent is available under `tools/download_using_python/`.
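Folder-based downloads also map directly onto `allow_patterns`: each repo-relative folder path becomes a glob. A sketch with illustrative helper names:

```python
def folder_patterns(folders: list) -> list:
    """Turn repo-relative folder paths into glob patterns.

    Tolerates a trailing slash on the input paths.
    """
    return [f"{f.rstrip('/')}/*" for f in folders]

def download_folders(folders: list, local_dir: str = "pdfQA-Benchmark") -> str:
    from huggingface_hub import snapshot_download  # lazy import; network call
    return snapshot_download(
        repo_id="pdfqa/pdfQA-Benchmark",
        repo_type="dataset",
        allow_patterns=folder_patterns(folders),
        local_dir=local_dir,
    )
```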


### 5️⃣ Download Specific Files

Download one or more individual files:

```shell
./tools/download_using_bash/download_files.sh \
  "syn-pdfQA/01.2_Input_Files_PDF/books/file1.pdf"
```

A Python equivalent is available under `tools/download_using_python/`.


### 6️⃣ Direct API Access (Single File)

Files can also be downloaded directly using the Hugging Face API:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="pdfqa/pdfQA-Benchmark",
    repo_type="dataset",
    filename="real-pdfQA/01.2_Input_Files_PDF/FinQA/AAL_2010.pdf",
)
```

### Recommended Usage

- For large-scale research experiments → use Bash + Git LFS (fully resumable).
- For automated pipelines → use the Python scripts.
- For fine-grained subset control → use the folder- or file-based scripts.

## Data Modalities

Depending on the dataset, documents include:

- Financial reports
- Sustainability disclosures
- Structured financial QA corpora
- Table-heavy documents
- Mixed structured/unstructured content

Formats may include: PDF, CSV, XLS/XLSX, EPUB, HTML/HTM, TEX, and TXT.


## Research Motivation

Many document QA benchmarks release only structured data or only PDFs. pdfQA preserves all representations:

- Original document
- Structured derivative
- Raw source format (if available)

This enables:

- Studying preprocessing impact
- Comparing parsing strategies
- Evaluating robustness to format variation
- End-to-end pipeline benchmarking

## Citation

If you use pdfQA, please cite:

```bibtex
@misc{schimanski2026pdfqa,
      title={pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs},
      author={Tobias Schimanski and Imene Kolli and Yu Fan and Ario Saeid Vaghefi and Jingwei Ni and Elliott Ash and Markus Leippold},
      year={2026},
      eprint={2601.02285},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.02285},
}
```

## Contact

Visit https://github.com/tobischimanski/pdfQA for access and updates.