Limitless063 and liminghao1630 committed (verified) · Commit c2c72b4 · 0 parent(s)

Duplicate from liminghao1630/DocBank

Co-authored-by: Minghao Li <liminghao1630@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,68 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.001 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.002 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.003 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.004 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.005 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.006 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.007 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.008 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.009 filter=lfs diff=lfs merge=lfs -text
+ DocBank_500K_ori_img.zip.010 filter=lfs diff=lfs merge=lfs -text
DocBank_500K_ori_img.zip.001 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0884689be13ef8d07737197465b2c64b8d51d03adc83e880d7f056dcc937c5a0
+ size 5368709120

DocBank_500K_ori_img.zip.002 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49d662c0facb9c532477a0bb43a766e0eb7e0d8fc85296e74239cde68b12a7ab
+ size 5368709120

DocBank_500K_ori_img.zip.003 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d551bd8952e7a45ee0d1a686bf8973a1bf2d5ff84df34d47005eab237e9826fd
+ size 5368709120

DocBank_500K_ori_img.zip.004 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2cdf0c42176ddd9ffac8daad9d3ba6046d8289405a0c7ce27ed0edb9daef738
+ size 5368709120

DocBank_500K_ori_img.zip.005 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d81f4ec4bf9ce48847c1bbd24e65895a6318b066c0dcb0c507a6d5ce38b79f28
+ size 5368709120

DocBank_500K_ori_img.zip.006 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74049926b050c969bd3cc6773537e49303b2b4afcc8a02337e2ffae9d92ed343
+ size 5368709120

DocBank_500K_ori_img.zip.007 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:029bda8274762824ea69d3970ff1672751acd172f1068345fb4e09db4a99237b
+ size 5368709120

DocBank_500K_ori_img.zip.008 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e7af66e8a2c85eaa5b86900f7d524f2c53b4714f54a4aa7273db407e4ab38df
+ size 5368709120

DocBank_500K_ori_img.zip.009 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47d6422678e0bffe1ee32e3a9b811721d893ac3e943973e8423ce03b4efcd3d1
+ size 5368709120

DocBank_500K_ori_img.zip.010 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87d4695574d3414a4fa5548a462dffd089b9e42764de5a5eb567b408b75c5669
+ size 2589288107

DocBank_500K_txt.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:984668c0241d68476ff1e793c5e9c5dc525c557538250130b8af969200491318
+ size 3167771976

MSCOCO_Format_Annotation.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8666d9a8ae77e471309c27ea514b30cb4e21ce2418cb28c2640736aa75ac7816
+ size 208969736
README.md ADDED
@@ -0,0 +1,116 @@
+ ---
+ license: apache-2.0
+ ---
+ # DocBank
+
+ DocBank is a new large-scale dataset constructed with a weak supervision approach. It enables models to integrate both textual and layout information for downstream tasks. The current DocBank dataset includes 500K document pages in total: 400K for training, 50K for validation, and 50K for testing.
+
+ [[GitHub](https://github.com/doc-analysis/DocBank)] [[Paper](https://arxiv.org/abs/2006.01038)]
+
+ ## News
+ - **We have updated the license to Apache-2.0.**
+ - **Our paper has been accepted at [COLING 2020](https://coling2020.org/pages/accepted_papers_main_conference) and [the camera-ready version](https://arxiv.org/abs/2006.01038) has been updated on arXiv.**
+ - **We provide a dataset loader named [DocBankLoader](https://github.com/doc-analysis/DocBankLoader), which can also convert DocBank to the object detection models' format.**
+ - **DocBank is a natural extension of the TableBank ([repo](https://github.com/doc-analysis/TableBank), [paper](https://arxiv.org/abs/1903.01949)) dataset.**
+ - **LayoutLM ([repo](https://github.com/microsoft/unilm/tree/master/layoutlm), [paper](https://arxiv.org/abs/1912.13318)) is an effective pre-training method of text and layout and achieves the SOTA result on DocBank.**
+
+ ## Introduction
+
+ For document layout analysis, several image-based datasets already exist, but most are built for computer vision approaches and are difficult to apply to NLP methods. In addition, image-based datasets mainly include page images and the bounding boxes of large semantic structures, rather than fine-grained token-level annotations. Moreover, producing human-labeled, fine-grained token-level annotations is time-consuming and labor-intensive. It is therefore vital to leverage weak supervision to obtain fine-grained labeled documents with minimal effort, while keeping the data easily applicable to both NLP and computer vision approaches.
+
+ To this end, we build the DocBank dataset, a document-level benchmark with fine-grained token-level annotations for layout analysis. Distinct from conventional human-labeled datasets, our approach obtains high-quality annotations in a simple yet effective way with weak supervision.
+
+ ## Statistics of DocBank
+ The DocBank dataset consists of 500K document pages with 12 types of semantic units.
+
+ ### Semantic Structure Statistics of DocBank
+ For each split, the first row counts the pages that contain at least one unit of the given type, and the second row gives the corresponding share of the split.
+
+ | Split | Abstract | Author | Caption | Date | Equation | Figure | Footer | List | Paragraph | Reference | Section | Table | Title | Total |
+ |:-----:|:--------:|:------:|:-------:|:-----:|:--------:|:------:|:------:|:------:|:---------:|:---------:|:-------:|:-----:|:-----:|:-------:|
+ | Train | 25,387 | 25,909 | 106,723 | 6,391 | 161,140 | 90,429 | 38,482 | 44,927 | 398,086 | 44,813 | 180,774 | 19,638 | 21,688 | 400,000 |
+ | | 6.35% | 6.48% | 26.68% | 1.60% | 40.29% | 22.61% | 9.62% | 11.23% | 99.52% | 11.20% | 45.19% | 4.91% | 5.42% | 100.00% |
+ | Dev | 3,164 | 3,286 | 13,443 | 797 | 20,154 | 11,463 | 4,804 | 5,609 | 49,759 | 5,549 | 22,666 | 2,374 | 2,708 | 50,000 |
+ | | 6.33% | 6.57% | 26.89% | 1.59% | 40.31% | 22.93% | 9.61% | 11.22% | 99.52% | 11.10% | 45.33% | 4.75% | 5.42% | 100.00% |
+ | Test | 3,176 | 3,277 | 13,476 | 832 | 20,244 | 11,378 | 4,876 | 5,553 | 49,762 | 5,641 | 22,384 | 2,505 | 2,729 | 50,000 |
+ | | 6.35% | 6.55% | 26.95% | 1.66% | 40.49% | 22.76% | 9.75% | 11.11% | 99.52% | 11.28% | 44.77% | 5.01% | 5.46% | 100.00% |
+ | Total | 31,727 | 32,472 | 133,642 | 8,020 | 201,538 | 113,270 | 48,162 | 56,089 | 497,607 | 56,003 | 225,824 | 24,517 | 27,125 | 500,000 |
+ | | 6.35% | 6.49% | 26.73% | 1.60% | 40.31% | 22.65% | 9.63% | 11.22% | 99.52% | 11.20% | 45.16% | 4.90% | 5.43% | 100.00% |
+
+ ### Year Statistics of DocBank
+
+ | Year | Train | % | Dev | % | Test | % | ALL | % |
+ |:-----:|:-------:|:-------:|:------:|:-------:|:------:|:-------:|:-------:|:-------:|
+ | 2014 | 65,976 | 16.49% | 8,270 | 16.54% | 8,112 | 16.22% | 82,358 | 16.47% |
+ | 2015 | 77,879 | 19.47% | 9,617 | 19.23% | 9,700 | 19.40% | 97,196 | 19.44% |
+ | 2016 | 87,006 | 21.75% | 10,970 | 21.94% | 10,990 | 21.98% | 108,966 | 21.79% |
+ | 2017 | 91,583 | 22.90% | 11,623 | 23.25% | 11,464 | 22.93% | 114,670 | 22.93% |
+ | 2018 | 77,556 | 19.39% | 9,520 | 19.04% | 9,734 | 19.47% | 96,810 | 19.36% |
+ | Total | 400,000 | 100.00% | 50,000 | 100.00% | 50,000 | 100.00% | 500,000 | 100.00% |
+
+ ### Comparison of DocBank with existing document layout analysis datasets
+ | Dataset | #Pages | #Units | Image-based? | Text-based? | Fine-grained? | Extendable? |
+ |:---------------:|:-------:|:------:|:------------:|:-----------:|:-------------:|:-----------:|
+ | Article Regions | 100 | 9 | ✔ | ✘ | ✔ | ✘ |
+ | GROTOAP2 | 119,334 | 22 | ✔ | ✘ | ✘ | ✘ |
+ | PubLayNet | 364,232 | 5 | ✔ | ✘ | ✔ | ✘ |
+ | TableBank | 417,234 | 1 | ✔ | ✘ | ✔ | ✔ |
+ | DocBank | 500,000 | 12 | ✔ | ✔ | ✔ | ✔ |
+
+ ## Baseline
+ As the dataset is fully annotated at the token level, we treat document layout analysis as a text-based sequence labeling task.
+
+ Under this setting, we evaluate three representative pre-trained language models on our dataset, BERT, RoBERTa, and LayoutLM, to validate the effectiveness of DocBank.
+
+ To verify the performance of models from different modalities on DocBank, we also train a Faster R-CNN model on the object detection format of DocBank and unify its outputs with those of the sequence labeling models for evaluation.
+
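Under this sequence-labeling framing, each page is a sequence of tokens with one layout label per token, and evaluation reduces to per-label token-level precision, recall, and F1. A minimal sketch of that metric (the label names and function are illustrative, not the official evaluation script):

```python
from collections import Counter

def per_label_f1(gold, pred):
    """Token-level precision/recall/F1 per label.

    gold, pred: equal-length lists of label strings, one per token.
    Returns {label: (precision, recall, f1)}.
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted p where gold was something else
            fn[g] += 1  # missed an actual g
    scores = {}
    for label in set(gold) | set(pred):
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[label] = (prec, rec, f1)
    return scores

gold = ["title", "paragraph", "paragraph", "table"]
pred = ["title", "paragraph", "table", "table"]
scores = per_label_f1(gold, pred)
# e.g. scores["title"] == (1.0, 1.0, 1.0)
```

The macro average reported below is then the mean of the per-label F1 scores.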
+ ### Settings
+ Our baselines of BERT and RoBERTa are built upon HuggingFace's Transformers, while the LayoutLM baselines are implemented with the codebase in [LayoutLM's official repository](https://aka.ms/layoutlm). We used 8 V100 GPUs with a batch size of 10 per GPU; fine-tuning for one epoch on the 400K training pages takes 5 hours. We used the BERT and RoBERTa tokenizers to tokenize the training samples and optimized the models with AdamW, with an initial learning rate of 5e-5. We split the data into blocks with a max size of N=512. We use [Detectron2](https://github.com/facebookresearch/detectron2) to train the Faster R-CNN model on DocBank, with ResNeXt-101 as the backbone network architecture, where the parameters are pre-trained on the ImageNet dataset.
+
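The block splitting mentioned above (max block size N=512) can be sketched as follows; this is a simplified illustration of chopping a page's token sequence into model-sized chunks, not the exact preprocessing code:

```python
def split_into_blocks(token_ids, block_size=512):
    """Split one page's token id sequence into blocks of at most block_size."""
    return [token_ids[i:i + block_size] for i in range(0, len(token_ids), block_size)]

# A page with 1100 tokens yields blocks of length 512, 512, and 76.
blocks = split_into_blocks(list(range(1100)), block_size=512)
```

Each block is labeled and fed to the model independently; per-token labels are concatenated back for evaluation.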
+ ### Results
+
+ #### The evaluation results of BERT, RoBERTa and LayoutLM
+ | Models | Abstract | Author | Caption | Equation | Figure | Footer | List | Paragraph | Reference | Section | Table | Title | Macro average |
+ |:---------------------:|:--------:|:-------:|:-------:|:--------:|:-------:|:-------:|:-------:|:---------:|:---------:|:-------:|:-------:|:-------:|:-------------:|
+ | bert-base | 0.9294 | 0.8484 | 0.8629 | 0.8152 | 1.0000 | 0.7805 | 0.7133 | 0.9619 | 0.9310 | 0.9081 | 0.8296 | 0.9442 | 0.8770 |
+ | roberta-base | 0.9288 | 0.8618 | 0.8944 | 0.8248 | 1.0000 | 0.8014 | 0.7353 | 0.9646 | 0.9341 | 0.9337 | 0.8389 | 0.9511 | 0.8891 |
+ | layoutlm-base | **0.9816** | 0.8595 | 0.9597 | 0.8947 | 1.0000 | 0.8957 | 0.8948 | 0.9788 | 0.9338 | 0.9598 | 0.8633 | **0.9579** | 0.9316 |
+ | bert-large | 0.9286 | 0.8577 | 0.8650 | 0.8177 | 1.0000 | 0.7814 | 0.6960 | 0.9619 | 0.9284 | 0.9065 | 0.8320 | 0.9430 | 0.8765 |
+ | roberta-large | 0.9479 | 0.8724 | 0.9081 | 0.8370 | 1.0000 | 0.8392 | 0.7451 | 0.9665 | 0.9334 | 0.9407 | 0.8494 | 0.9461 | 0.8988 |
+ | layoutlm-large | 0.9784 | 0.8783 | 0.9556 | 0.8974 | **1.0000** | 0.9146 | 0.9004 | 0.9790 | 0.9332 | 0.9596 | 0.8679 | 0.9552 | 0.9350 |
+ | X101 | 0.9717 | 0.8227 | 0.9435 | 0.8938 | 0.8812 | 0.9029 | 0.9051 | 0.9682 | 0.8798 | 0.9412 | 0.8353 | 0.9158 | 0.9051 |
+ | X101 & layoutlm-base | 0.9815 | 0.8907 | **0.9669** | 0.9430 | 0.9990 | 0.9292 | **0.9300** | 0.9843 | **0.9437** | 0.9664 | 0.8818 | 0.9575 | 0.9478 |
+ | X101 & layoutlm-large | 0.9802 | **0.8964** | 0.9666 | **0.9440** | 0.9994 | **0.9352** | 0.9293 | **0.9844** | 0.9430 | **0.9670** | **0.8875** | 0.9531 | **0.9488** |
+
+
+ We evaluate six models on the test set of DocBank. We notice that LayoutLM gets the highest scores on the {abstract, author, caption, equation, figure, footer, list, paragraph, section, table, title} labels. The RoBERTa model gets the best performance on the "reference" label, but the gap with LayoutLM is very small. This indicates that the LayoutLM architecture is significantly better than the BERT and RoBERTa architectures in the document layout analysis task.
+
+ We also evaluate the ResNeXt-101 model and two ensemble models combining ResNeXt-101 and LayoutLM. The output of the ResNeXt-101 model is the bounding boxes of semantic structures. To unify the outputs, we mark the tokens inside each bounding box with the label of that bounding box, and then calculate the same token-level metrics as above.
+
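The unification step, marking each token with the label of the detected bounding box that contains it, can be sketched as follows. The data structures, the center-point containment test, and the fallback label are assumptions for illustration, not the released evaluation code:

```python
def label_tokens_by_boxes(tokens, boxes, default="paragraph"):
    """Assign each token the label of the first detected box containing its center.

    tokens: list of (x0, y0, x1, y1) token bounding boxes.
    boxes:  list of ((x0, y0, x1, y1), label) detected semantic structures.
    default: fallback label for tokens outside every box (an assumption here).
    """
    labels = []
    for tx0, ty0, tx1, ty1 in tokens:
        cx, cy = (tx0 + tx1) / 2, (ty0 + ty1) / 2  # token center point
        label = default
        for (bx0, by0, bx1, by1), box_label in boxes:
            if bx0 <= cx <= bx1 and by0 <= cy <= by1:
                label = box_label
                break
        labels.append(label)
    return labels

# One token inside a detected "title" box, one token outside all boxes.
tokens = [(0, 0, 10, 10), (50, 50, 60, 60)]
boxes = [((0, 0, 20, 20), "title")]
# label_tokens_by_boxes(tokens, boxes) -> ["title", "paragraph"]
```

The resulting per-token label sequence can then be scored exactly like the output of the sequence labeling models.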
+ ## **Scripts**
+
+ We provide a script to convert PDF files to DocBank-format data. Run the PDF processing script pdf_process.py in the scripts directory. You may need to install some of its dependencies with pip.
+
+ ~~~bash
+ cd scripts
+ python pdf_process.py --data_dir /path/to/pdf/directory \
+                       --output_dir /path/to/data/output/directory
+ ~~~
+
+ ## **Paper and Citation**
+ ### DocBank: A Benchmark Dataset for Document Layout Analysis
+
+ Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li, Ming Zhou
+
+ https://arxiv.org/abs/2006.01038
+ ```
+ @misc{li2020docbank,
+     title={DocBank: A Benchmark Dataset for Document Layout Analysis},
+     author={Minghao Li and Yiheng Xu and Lei Cui and Shaohan Huang and Furu Wei and Zhoujun Li and Ming Zhou},
+     year={2020},
+     eprint={2006.01038},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```