# BBox DocVQA Train Set
The BBox DocVQA Train Set is a large-scale dataset designed for training document visual question answering models with grounded supervision. Each QA instance is paired with one or more rendered PDF pages and pixel-level bounding boxes that mark the evidence required to answer the question. The dataset covers a broad distribution of document types, visual regions, and multi-page reasoning patterns.
## Repository layout
The dataset is organized as follows:
- `BBox_DocVQA_Train.jsonl` – newline-delimited JSON containing all training QA samples and metadata.
- `<category>/<arxiv-id>/*.png` – rendered PDF pages grouped into eight arXiv subject categories (`cs`, `econ`, `eess`, `math`, `physics`, `q-bio`, `q-fin`, `stat`).
- Page images follow the naming format `<arxiv-id>_<page>.png`, where `<page>` corresponds to the original PDF's 1-based page index.
This directory layout mirrors the benchmark structure for seamless integration.
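Given this layout, the rendered pages of a paper can be enumerated with a short helper. This is a minimal sketch (the function name and `root` argument are illustrative, not part of the dataset); note the numeric sort on the page suffix, since a plain lexicographic sort would place page 10 before page 9.

```python
from pathlib import Path


def list_page_images(root: str, category: str, arxiv_id: str) -> list[Path]:
    """Collect the rendered pages of one paper, sorted by page index.

    Assumes filenames follow <arxiv-id>_<page>.png with a 1-based page index.
    """
    paper_dir = Path(root) / category / arxiv_id
    pages = paper_dir.glob(f"{arxiv_id}_*.png")
    # Sort numerically on the page suffix, not lexicographically,
    # so that page 10 sorts after page 9.
    return sorted(pages, key=lambda p: int(p.stem.rsplit("_", 1)[1]))
```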
## Dataset statistics
The BBox DocVQA Train Set contains:
- Total QA samples: 30,780
- Total pages: 42,380
- Total papers: 3,671
### Task type distribution

| Task Type | Count (share) |
|---|---|
| SPSBB | 11,668 (37.91%) |
| SPMBB | 7,512 (24.41%) |
| MPMBB | 11,600 (37.69%) |
### Region type distribution

| Region Type | Count (share) |
|---|---|
| Text | 30,424 (60.98%) |
| Image | 12,542 (25.14%) |
| Table | 6,926 (13.88%) |
- Average bounding box area ratio: 14.26%
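The area ratio of a single box is its pixel area divided by the page area; the 14.26% figure above is the dataset-wide average of such ratios, assuming that definition. A minimal sketch:

```python
def bbox_area_ratio(bbox: list[int], page_width: int, page_height: int) -> float:
    """Fraction of the page covered by one absolute-pixel box [xmin, ymin, xmax, ymax]."""
    xmin, ymin, xmax, ymax = bbox
    # Clamp degenerate boxes to zero area rather than producing a negative ratio.
    box_area = max(0, xmax - xmin) * max(0, ymax - ymin)
    return box_area / (page_width * page_height)
```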
## JSON lines schema
Each entry in `BBox_DocVQA_Train.jsonl` follows the schema below:
| Field | Type | Description |
|---|---|---|
| `query` / `question` | string | Natural-language question (duplicate keys for compatibility). |
| `answer` | string | Grounded short-form answer. |
| `category` | string | One of the eight arXiv subject classes. |
| `doc_name` | string | arXiv identifier of the source paper. |
| `evidence_page` | list[int] | Pages containing the evidence (1-based). |
| `image_paths` / `images` | list[str] | Relative paths to one or two rendered PDF pages. |
| `bbox` | list[list[list[int]]] | Bounding boxes for each referenced page, in pixel units. |
| `subimg_tpye` | list[list[str]] | Region type per bounding box (`text`, `table`, or `image`). |
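As a sanity check, the per-page nesting described above can be validated before training. This is a sketch; the field names (including the `subimg_tpye` spelling) are taken directly from the schema table:

```python
def check_sample(sample: dict) -> None:
    """Assert that bbox/subimg_tpye nesting matches image_paths page-for-page."""
    # One bbox list and one type list per referenced page.
    assert len(sample["image_paths"]) == len(sample["bbox"]) == len(sample["subimg_tpye"])
    for page_boxes, page_types in zip(sample["bbox"], sample["subimg_tpye"]):
        # One region type per bounding box on that page.
        assert len(page_boxes) == len(page_types)
        for box in page_boxes:
            xmin, ymin, xmax, ymax = box  # four pixel coordinates per box
            assert xmin < xmax and ymin < ymax
        for region_type in page_types:
            assert region_type in {"text", "table", "image"}
```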
### Example
```json
{
  "query": "What is the caption of Figure 3 on the referenced page?",
  "answer": "Comparison between the baseline and our method",
  "doc_name": "2301.12345",
  "category": "cs",
  "evidence_page": [4],
  "image_paths": ["cs/2301.12345/2301.12345_4.png"],
  "bbox": [
    [[512, 1340, 1880, 1620]]
  ],
  "subimg_tpye": [["image"]]
}
```
## Quick start
```python
import json

from PIL import Image, ImageDraw

# Read the first training sample and draw its evidence boxes.
with open("BBox_DocVQA_Train.jsonl") as f:
    sample = json.loads(f.readline())

for page_path, boxes in zip(sample["image_paths"], sample["bbox"]):
    img = Image.open(page_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for xmin, ymin, xmax, ymax in boxes:
        draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=5)
    img.show()
```
## Notes and usage guidance
- Page images are uncompressed PNG renders produced from arXiv PDFs; please observe arXiv’s terms of use for any redistribution.
- Bounding boxes are provided in absolute pixel coordinates; normalize them by image width/height when required.
- Duplicate key names (e.g., `query`/`question`, `image_paths`/`images`) are intentionally preserved for compatibility.
- The train set provides large-scale grounded supervision across diverse document layouts and visual evidence types.
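For models that expect coordinates in [0, 1], the absolute pixel boxes can be normalized by page width and height, as suggested above. A minimal sketch (the function name is illustrative):

```python
def normalize_boxes(
    page_boxes: list[list[int]], width: int, height: int
) -> list[list[float]]:
    """Scale absolute pixel boxes [xmin, ymin, xmax, ymax] to [0, 1] coordinates."""
    return [
        [xmin / width, ymin / height, xmax / width, ymax / height]
        for xmin, ymin, xmax, ymax in page_boxes
    ]
```

In practice, `width` and `height` would come from the rendered page image itself (e.g. `Image.open(page_path).size` with Pillow).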