# BBox DocVQA Train Set

The BBox DocVQA Train Set is a large-scale dataset designed for training document visual question answering models with grounded supervision. Each QA instance is paired with one or more rendered PDF pages and pixel-level bounding boxes that mark the evidence required to answer the question. The dataset covers a broad distribution of document types, visual regions, and multi-page reasoning patterns.

---

## Repository layout

The dataset is organized as follows:

- **`BBox_DocVQA_Train.jsonl`** – newline-delimited JSON containing all training QA samples and metadata.
- **`<category>/<arxiv-id>/*.png`** – rendered PDF pages grouped into eight arXiv subject categories  
  (`cs`, `econ`, `eess`, `math`, `physics`, `q-bio`, `q-fin`, `stat`).
- Page images follow the naming format:  
  **`<arxiv-id>_<page>.png`**, where `<page>` corresponds to the original PDF’s 1-based page index.

This directory layout mirrors that of the BBox DocVQA benchmark, so the same loading code can be used for both splits.
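The naming convention above can be turned into a small path helper. This is a sketch assuming the layout described in this section (`page_image_path` is a hypothetical name, not part of the dataset):

```python
def page_image_path(category: str, doc_name: str, page: int) -> str:
    """Build the relative path of a rendered page image from its metadata,
    following the <category>/<arxiv-id>/<arxiv-id>_<page>.png convention."""
    return f"{category}/{doc_name}/{doc_name}_{page}.png"
```

For instance, `page_image_path("cs", "2301.12345", 4)` yields `"cs/2301.12345/2301.12345_4.png"`, matching the example entry below.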

---

## Dataset statistics

The BBox DocVQA Train Set contains:

- **Total QA samples:** 30,780  
- **Total pages:** 42,380  
- **Total papers:** 3,671  

### Task type distribution

| Task Type | Count (share) |
|-----------|--------------:|
| SPSBB | 11,668 (37.91%) |
| SPMBB | 7,512 (24.41%) |
| MPMBB | 11,600 (37.69%) |

### Region type distribution

| Region Type | Count (share) |
|-------------|--------------:|
| Text  | 30,424 (60.98%) |
| Image | 12,542 (25.14%) |
| Table | 6,926 (13.88%) |

- **Average bounding box area ratio:** 14.26%

---

## JSON lines schema

Each entry in `BBox_DocVQA_Train.jsonl` follows the schema below:

| Field | Type | Description |
|-------|------|-------------|
| `query` / `question` | string | Natural-language question (duplicate keys for compatibility). |
| `answer` | string | Grounded short-form answer. |
| `category` | string | One of the eight arXiv subject classes. |
| `doc_name` | string | ArXiv identifier of the source paper. |
| `evidence_page` | list[int] | Pages containing the evidence (1-based). |
| `image_paths` / `images` | list[str] | Relative paths to one or two rendered PDF pages. |
| `bbox` | list[list[list[int]]] | Bounding boxes for each referenced page, in pixel units. |
| `subimg_tpye` | list[list[str]] | Region type per bounding box (`text`, `table`, or `image`). The misspelled key name (`tpye`) is preserved exactly as stored in the data. |

---

## Example

```json
{
  "query": "What is the caption of Figure 3 on the referenced page?",
  "answer": "Comparison between the baseline and our method",
  "doc_name": "2301.12345",
  "category": "cs",
  "evidence_page": [4],
  "image_paths": ["cs/2301.12345/2301.12345_4.png"],
  "bbox": [
    [[512, 1340, 1880, 1620]]
  ],
  "subimg_tpye": [["image"]]
}
```
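Given the schema, a lightweight structural check can catch malformed samples before training. This is a sketch under the assumption (implied by the schema above) that there is one box list and one region-type list per referenced page; `check_sample` is a hypothetical helper:

```python
def check_sample(sample: dict) -> bool:
    """Return True if the nested bbox/type structure matches the schema."""
    # One list of boxes, and one parallel list of region types, per page.
    if len(sample["bbox"]) != len(sample["image_paths"]):
        return False
    if len(sample["subimg_tpye"]) != len(sample["bbox"]):
        return False
    for boxes, types in zip(sample["bbox"], sample["subimg_tpye"]):
        if len(boxes) != len(types):
            return False
        # Each box is (xmin, ymin, xmax, ymax) in pixels.
        for xmin, ymin, xmax, ymax in boxes:
            if not (xmin < xmax and ymin < ymax):
                return False
    return True

example = {
    "image_paths": ["cs/2301.12345/2301.12345_4.png"],
    "bbox": [[[512, 1340, 1880, 1620]]],
    "subimg_tpye": [["image"]],
}
```

Running `check_sample(example)` on the entry above returns `True`.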

---

## Quick start

```python
import json
from PIL import Image, ImageDraw

# Read the first QA sample; image paths are relative to the dataset root.
with open("BBox_DocVQA_Train.jsonl") as f:
    sample = json.loads(f.readline())

# `bbox` holds one list of boxes per referenced page.
for page_path, boxes in zip(sample["image_paths"], sample["bbox"]):
    img = Image.open(page_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for xmin, ymin, xmax, ymax in boxes:
        draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=5)
    img.show()
```

---

## Notes and usage guidance

- Page images are losslessly compressed PNG renders produced from arXiv PDFs; please observe arXiv’s terms of use for any redistribution.
- Bounding boxes are provided in absolute pixel coordinates; normalize them by image width/height when required.
- Duplicate key names (e.g., `query`/`question`, `image_paths`/`images`) are intentionally preserved for compatibility.
- The train set provides large-scale grounded supervision across diverse document layouts and visual evidence types.
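If normalized coordinates are needed (e.g. for models that expect boxes in [0, 1]), a minimal helper might look like the following sketch; `width` and `height` come from the opened page image (e.g. `img.size` in PIL), and `normalize_box` is a hypothetical name:

```python
def normalize_box(box, width, height):
    """Convert an absolute-pixel (xmin, ymin, xmax, ymax) box
    to fractions of the page width/height in [0, 1]."""
    xmin, ymin, xmax, ymax = box
    return (xmin / width, ymin / height, xmax / width, ymax / height)
```

Applied to the example box `(512, 1340, 1880, 1620)` on a page of known pixel size, this yields resolution-independent coordinates that survive any later resizing of the page image.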