---
license: cc-by-4.0
task_categories:
  - question-answering
  - document-question-answering
language:
  - en
pretty_name: DUDE Mini
size_categories:
  - n<1K
tags:
  - document-understanding
  - multi-page
  - document-qa
---

# DUDE Mini Dataset

A stratified 404-sample subset of the DUDE (Document Understanding Dataset and Evaluation) benchmark, focused on document question answering with multi-page PDF documents.

## Dataset Description

DUDE_mini contains QA pairs from the DUDE sample dataset with balanced representation across:

- **Answer types**: extractive, abstractive, not-answerable
- **Question families**: numeric amounts, dates/times, entity lookup, yes/no, multi-hop reasoning
- **Document diversity**: at most 5 QA pairs per document
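As a quick sanity check, the per-document cap can be verified by tallying the `docId` field (see Data Fields). A minimal sketch with toy records standing in for the real dataset:

```python
from collections import Counter

# Toy records standing in for the real dataset; each QA pair carries a
# "docId" field identifying its source document.
records = [
    {"docId": "doc_a"},
    {"docId": "doc_a"},
    {"docId": "doc_b"},
]

# Count QA pairs per document and check the cap of 5.
pairs_per_doc = Counter(r["docId"] for r in records)
assert max(pairs_per_doc.values()) <= 5
print(pairs_per_doc.most_common(1))  # → [('doc_a', 2)]
```

In practice, iterate over `dataset["train"]` instead of the toy list.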

## Statistics

| Metric | Value |
|---|---|
| Total QA pairs | 404 |
| Unique documents | 100 |
| Extractive answers | ~241 |
| Abstractive answers | ~149 |
| Not-answerable | ~14 |
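These counts can be reproduced by tallying the `answer_type` field over the loaded QA pairs. A minimal sketch with toy records (real usage would iterate over `dataset["train"]`):

```python
from collections import Counter

# Toy QA records; the real dataset exposes "answer_type" on each sample.
records = [
    {"answer_type": "extractive"},
    {"answer_type": "extractive"},
    {"answer_type": "abstractive"},
    {"answer_type": "not-answerable"},
]

distribution = Counter(r["answer_type"] for r in records)
print(dict(distribution))
# → {'extractive': 2, 'abstractive': 1, 'not-answerable': 1}
```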

## Usage

### Load with the Datasets library

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("kenza-ily/dude-mini")

# Access data
for sample in dataset["train"]:
    print(f"Question: {sample['question']}")
    print(f"Answer: {sample['answers']}")
    print(f"Answer Type: {sample['answer_type']}")
    print(f"Document ID: {sample['docId']}")
```

### Load from JSON directly

```python
import json

with open("dude_mini.json", "r") as f:
    data = json.load(f)

for item in data:
    print(f"Q: {item['question']}")
    print(f"A: {item['answers']}")
```

## Data Fields

| Field | Type | Description |
|---|---|---|
| `questionId` | string | Unique identifier for the QA pair |
| `question` | string | The question text |
| `answers` | list[string] | Ground-truth answer(s) |
| `answers_page_bounding_boxes` | list | Bounding boxes for answers on PDF pages |
| `answers_variants` | list[string] | Alternative acceptable answers |
| `answer_type` | string | `extractive`, `abstractive`, or `not-answerable` |
| `docId` | string | Document identifier (matches the PDF filename) |
| `data_split` | string | Data split (`train`) |
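Since an answer may have several acceptable surface forms, a scorer will typically match a prediction against both `answers` and `answers_variants`. A minimal sketch (the lowercase/strip normalization here is an assumption for illustration, not the official DUDE evaluation protocol):

```python
def is_correct(prediction: str, sample: dict) -> bool:
    """Exact match against any ground-truth answer or listed variant."""
    gold = sample["answers"] + sample.get("answers_variants", [])
    return prediction.strip().lower() in {g.strip().lower() for g in gold}

sample = {"answers": ["New York City"], "answers_variants": ["NYC"]}
print(is_correct("nyc", sample))     # → True
print(is_correct("Boston", sample))  # → False
```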

## Linking to PDF Documents

This dataset contains QA pairs only. To link them with the actual PDF documents:

1. Download the DUDE sample PDFs from the original repository.
2. Use the `docId` field to match QA pairs with PDF files (`{docId}.pdf`).
```python
# Example: linking QA pairs to their PDFs
from pathlib import Path

pdf_dir = Path("path/to/dude_sample_pdfs")

for sample in dataset["train"]:
    doc_id = sample["docId"]
    pdf_path = pdf_dir / f"{doc_id}.pdf"
    if pdf_path.exists():
        print(f"Found PDF for document: {doc_id}")
```

## Citation

If you use this dataset, please cite both the original DUDE paper and the DISCO paper, which introduces this evaluation subset:

```bibtex
@inproceedings{vanlandeghem2023dude,
  title={Document Understanding Dataset and Evaluation ({DUDE})},
  author={Van Landeghem, Jordy and Borchmann, {\L}ukasz and Tito, Rub{\`e}n and Pietruszka, Micha{\l} and Jurkiewicz, Dawid and Powalski, Rafa{\l} and J{\'o}ziak, Pawe{\l} and Biswas, Sanket and Coustaty, Micka{\"e}l and Stanisławek, Tomasz},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023}
}

@inproceedings{benkirane2026disco,
  title={{DISCO}: Document Intelligence Suite for Comparative Evaluation},
  author={Benkirane, Kenza and Asenov, Martin and Goldwater, Daniel and Ghodsi, Aneiss},
  booktitle={ICLR 2026 Workshop on Multimodal Intelligence},
  year={2026},
  url={https://openreview.net/forum?id=Bb9vBASVzX}
}
```

## License

This subset follows the original DUDE dataset license (CC BY 4.0).

## Related Resources