Yuwh07 committed
Commit af001e7 · verified · 1 Parent(s): ecbda57

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ BBox_DocVQA_Train.jsonl filter=lfs diff=lfs merge=lfs -text
BBox_DocVQA_Train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c40310223feccdeea125664a0722797a15fe537d483a73576dd594ec246d8a14
+ size 14669063
BBox_DocVQA_Train.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f1e9df1460642f25d10bc029809a8e3b7ebf9846b7989f71da5df97006bdf25
+ size 70195692860
README.md CHANGED
@@ -1,3 +1,110 @@
- ---
- license: apache-2.0
- ---

# BBox DocVQA **Train Set**

The BBox DocVQA Train Set is a large-scale dataset designed for training document visual question answering models with grounded supervision. Each QA instance is paired with one or more rendered PDF pages and pixel-level bounding boxes that mark the evidence required to answer the question. The dataset covers a broad distribution of document types, visual regions, and multi-page reasoning patterns.

---

## Repository layout

The dataset is organized as follows:

- **`BBox_DocVQA_Train.jsonl`** – newline-delimited JSON containing all training QA samples and metadata.
- **`<category>/<arxiv-id>/*.png`** – rendered PDF pages grouped into eight arXiv subject categories
  (`cs`, `econ`, `eess`, `math`, `physics`, `q-bio`, `q-fin`, `stat`).
- Page images follow the naming format **`<arxiv-id>_<page>.png`**, where `<page>` corresponds to the original PDF’s 1-based page index.

This directory layout mirrors the benchmark structure for seamless integration.
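
Given a sample's `category`, `doc_name`, and a page index, the corresponding image path can be reconstructed from this naming scheme. The following is a minimal sketch (the `page_image_path` helper and its `root` argument are illustrative, not part of the dataset); in practice the `image_paths` field in the JSONL already stores these relative paths.

```python
# A minimal sketch of the naming scheme described above, assuming the page
# images are extracted next to the JSONL. The JSONL already stores the
# resolved relative paths in `image_paths`, so this is only illustrative.
from pathlib import Path

def page_image_path(category: str, arxiv_id: str, page: int, root: str = ".") -> Path:
    """Build `<root>/<category>/<arxiv-id>/<arxiv-id>_<page>.png` (1-based page index)."""
    return Path(root) / category / arxiv_id / f"{arxiv_id}_{page}.png"

# e.g. page 4 of paper 2301.12345 in the `cs` category:
# cs/2301.12345/2301.12345_4.png
print(page_image_path("cs", "2301.12345", 4))
```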

---

## Dataset statistics

The BBox DocVQA Train Set contains:

- **Total QA samples:** 30,780
- **Total pages:** 42,380
- **Total papers:** 3,671

### Task type distribution

| Task Type | Count | Share |
|-----------|------:|------:|
| SPSBB | 11,668 | 37.91% |
| SPMBB | 7,512 | 24.41% |
| MPMBB | 11,600 | 37.69% |

### Region type distribution

| Region Type | Count | Share |
|-------------|------:|------:|
| Text | 30,424 | 60.98% |
| Image | 12,542 | 25.14% |
| Table | 6,926 | 13.88% |

- **Average bounding box area ratio:** 14.26%
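
The headline totals can be re-derived directly from the JSONL, which makes for a convenient integrity check after download. A minimal sketch, assuming the file sits in the working directory and that pages are counted as unique entries of `image_paths`:

```python
# Re-derive the headline counts from the JSONL (sanity check after download).
# Assumes QA samples are lines, papers are unique `doc_name` values, and pages
# are unique entries of `image_paths`.
import json

samples, papers, pages = 0, set(), set()
with open("BBox_DocVQA_Train.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        samples += 1
        papers.add(record["doc_name"])
        pages.update(record["image_paths"])

print(f"QA samples: {samples}")     # compare with the totals listed above
print(f"papers:     {len(papers)}")
print(f"pages:      {len(pages)}")
```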

---

## JSON lines schema

Each entry in `BBox_DocVQA_Train.jsonl` follows the schema below:

| Field | Type | Description |
|-------|------|-------------|
| `query` / `question` | string | Natural-language question (duplicate keys for compatibility). |
| `answer` | string | Grounded short-form answer. |
| `category` | string | One of the eight arXiv subject classes. |
| `doc_name` | string | arXiv identifier of the source paper. |
| `evidence_page` | list[int] | Pages containing the evidence (1-based). |
| `image_paths` / `images` | list[str] | Relative paths to one or two rendered PDF pages. |
| `bbox` | list[list[list[int]]] | Bounding boxes for each referenced page, in pixel units, as `[xmin, ymin, xmax, ymax]`. |
| `subimg_tpye` | list[list[str]] | Region type per bounding box (`text`, `table`, or `image`). |

---

## Example

```json
{
  "query": "What is the caption of Figure 3 on the referenced page?",
  "answer": "Comparison between the baseline and our method",
  "doc_name": "2301.12345",
  "category": "cs",
  "evidence_page": [4],
  "image_paths": ["cs/2301.12345/2301.12345_4.png"],
  "bbox": [
    [[512, 1340, 1880, 1620]]
  ],
  "subimg_tpye": [["image"]]
}
```
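
Because `bbox` and `subimg_tpye` are nested per page, it can be worth verifying that they stay aligned with `image_paths` when building a data loader. A minimal sketch of such a check (the alignment assumption follows the schema above; adjust it if your loader flattens the nesting):

```python
# Sanity-check the per-page alignment implied by the schema above:
# one list of boxes and one list of region types per entry in `image_paths`.
import json

with open("BBox_DocVQA_Train.jsonl", encoding="utf-8") as f:
    for line_no, line in enumerate(f, start=1):
        rec = json.loads(line)
        pages, boxes, types = rec["image_paths"], rec["bbox"], rec["subimg_tpye"]
        assert len(pages) == len(boxes) == len(types), f"line {line_no}: page/box list mismatch"
        for page_boxes, page_types in zip(boxes, types):
            # every box should carry exactly one region-type label
            assert len(page_boxes) == len(page_types), f"line {line_no}: box/type mismatch"
```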

---

## Quick start

```python
import json
from PIL import Image, ImageDraw

# Read the first training sample from the JSONL file.
with open("BBox_DocVQA_Train.jsonl") as f:
    sample = json.loads(f.readline())

# Paths in `image_paths` are relative to the dataset root, and `bbox` holds
# one list of [xmin, ymin, xmax, ymax] boxes per referenced page.
for page_path, boxes in zip(sample["image_paths"], sample["bbox"]):
    img = Image.open(page_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for xmin, ymin, xmax, ymax in boxes:
        draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=5)
    img.show()
```
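
A common follow-up is to cut out the evidence regions themselves, for example to feed them to a region-level encoder. A short sketch along the same lines as the snippet above (the output directory and file naming are illustrative choices, not something the dataset defines):

```python
# Crop each evidence region of the first sample into its own image file.
# The `evidence_crops` directory and the file names are illustrative only.
import json
from pathlib import Path
from PIL import Image

out_dir = Path("evidence_crops")
out_dir.mkdir(exist_ok=True)

with open("BBox_DocVQA_Train.jsonl") as f:
    sample = json.loads(f.readline())

for page_path, boxes in zip(sample["image_paths"], sample["bbox"]):
    img = Image.open(page_path).convert("RGB")
    stem = Path(page_path).stem
    for i, (xmin, ymin, xmax, ymax) in enumerate(boxes):
        crop = img.crop((xmin, ymin, xmax, ymax))
        crop.save(out_dir / f"{stem}_box{i}.png")
```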

---

## Notes and usage guidance

- Page images are lossless PNG renders produced from arXiv PDFs; please observe arXiv’s terms of use for any redistribution.
- Bounding boxes are provided in absolute pixel coordinates; normalize them by image width/height when required (see the sketch below).
- Duplicate key names (e.g., `query`/`question`, `image_paths`/`images`) are intentionally preserved for compatibility.
- The train set provides large-scale grounded supervision across diverse document layouts and visual evidence types.
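
For models that expect normalized coordinates, the conversion mentioned above is straightforward. A minimal sketch, assuming boxes are `[xmin, ymin, xmax, ymax]` in pixels and that values in `[0, 1]` are wanted:

```python
# Convert absolute pixel boxes to [0, 1] coordinates using each page's size.
import json
from PIL import Image

with open("BBox_DocVQA_Train.jsonl") as f:
    sample = json.loads(f.readline())

for page_path, boxes in zip(sample["image_paths"], sample["bbox"]):
    width, height = Image.open(page_path).size
    normalized = [
        [xmin / width, ymin / height, xmax / width, ymax / height]
        for xmin, ymin, xmax, ymax in boxes
    ]
    print(page_path, normalized)
```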