dolfim-ibm committed
Commit a8b7721 · 0 Parent(s):

Duplicate from docling-project/DocLayNet


Co-authored-by: Michele Dolfi <dolfim-ibm@users.noreply.huggingface.co>

Files changed (4)
  1. .gitattributes +54 -0
  2. .gitignore +304 -0
  3. DocLayNet.py +210 -0
  4. README.md +165 -0
.gitattributes ADDED
@@ -0,0 +1,54 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,304 @@
+ # Created by https://www.gitignore.io/api/linux,macos,python,windows,pycharm+all,visualstudiocode,virtualenv
+ # Edit at https://www.gitignore.io/?templates=linux,macos,python,windows,pycharm+all,visualstudiocode,virtualenv
+
+ ### Linux ###
+ *~
+
+ # temporary files which can be created if a process still has a handle open of a deleted file
+ .fuse_hidden*
+
+ # KDE directory preferences
+ .directory
+
+ # Linux trash folder which might appear on any partition or disk
+ .Trash-*
+
+ # .nfs files are created when an open file is removed but is still being accessed
+ .nfs*
+
+ ### macOS ###
+ # General
+ .DS_Store
+ .AppleDouble
+ .LSOverride
+
+ # Icon must end with two \r
+ Icon
+
+ # Thumbnails
+ ._*
+
+ # Files that might appear in the root of a volume
+ .DocumentRevisions-V100
+ .fseventsd
+ .Spotlight-V100
+ .TemporaryItems
+ .Trashes
+ .VolumeIcon.icns
+ .com.apple.timemachine.donotpresent
+
+ # Directories potentially created on remote AFP share
+ .AppleDB
+ .AppleDesktop
+ Network Trash Folder
+ Temporary Items
+ .apdisk
+
+ ### PyCharm+all ###
+ # Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio and WebStorm
+ # Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
+
+ # User-specific stuff
+ .idea/**/workspace.xml
+ .idea/**/tasks.xml
+ .idea/**/usage.statistics.xml
+ .idea/**/dictionaries
+ .idea/**/shelf
+
+ # Generated files
+ .idea/**/contentModel.xml
+
+ # Sensitive or high-churn files
+ .idea/**/dataSources/
+ .idea/**/dataSources.ids
+ .idea/**/dataSources.local.xml
+ .idea/**/sqlDataSources.xml
+ .idea/**/dynamic.xml
+ .idea/**/uiDesigner.xml
+ .idea/**/dbnavigator.xml
+
+ # Gradle
+ .idea/**/gradle.xml
+ .idea/**/libraries
+
+ # Gradle and Maven with auto-import
+ # When using Gradle or Maven with auto-import, you should exclude module files,
+ # since they will be recreated, and may cause churn. Uncomment if using
+ # auto-import.
+ # .idea/modules.xml
+ # .idea/*.iml
+ # .idea/modules
+ # *.iml
+ # *.ipr
+
+ # CMake
+ cmake-build-*/
+
+ # Mongo Explorer plugin
+ .idea/**/mongoSettings.xml
+
+ # File-based project format
+ *.iws
+
+ # IntelliJ
+ out/
+
+ # mpeltonen/sbt-idea plugin
+ .idea_modules/
+
+ # JIRA plugin
+ atlassian-ide-plugin.xml
+
+ # Cursive Clojure plugin
+ .idea/replstate.xml
+
+ # Crashlytics plugin (for Android Studio and IntelliJ)
+ com_crashlytics_export_strings.xml
+ crashlytics.properties
+ crashlytics-build.properties
+ fabric.properties
+
+ # Editor-based Rest Client
+ .idea/httpRequests
+
+ # Android studio 3.1+ serialized cache file
+ .idea/caches/build_file_checksums.ser
+
+ ### PyCharm+all Patch ###
+ # Ignores the whole .idea folder and all .iml files
+ # See https://github.com/joeblau/gitignore.io/issues/186 and https://github.com/joeblau/gitignore.io/issues/360
+
+ .idea/
+
+ # Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-249601023
+
+ *.iml
+ modules.xml
+ .idea/misc.xml
+ *.ipr
+
+ # Sonarlint plugin
+ .idea/sonarlint
+
+ ### Python ###
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # celery beat schedule file
+ celerybeat-schedule
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # Mr Developer
+ .mr.developer.cfg
+ .project
+ .pydevproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ ### VirtualEnv ###
+ # Virtualenv
+ # http://iamzed.com/2009/05/07/a-primer-on-virtualenv/
+ pyvenv.cfg
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+ pip-selfcheck.json
+
+ ### VisualStudioCode ###
+ .vscode/*
+
+ ### VisualStudioCode Patch ###
+ # Ignore all local history of files
+ .history
+
+ ### Windows ###
+ # Windows thumbnail cache files
+ Thumbs.db
+ Thumbs.db:encryptable
+ ehthumbs.db
+ ehthumbs_vista.db
+
+ # Dump file
+ *.stackdump
+
+ # Folder config file
+ [Dd]esktop.ini
+
+ # Recycle Bin used on file shares
+ $RECYCLE.BIN/
+
+ # Windows Installer files
+ *.cab
+ *.msi
+ *.msix
+ *.msm
+ *.msp
+
+ # Windows shortcuts
+ *.lnk
+
+ # End of https://www.gitignore.io/api/linux,macos,python,windows,pycharm+all,visualstudiocode,virtualenv
+
+
+ # Created by https://www.toptal.com/developers/gitignore/api/jupyternotebooks
+ # Edit at https://www.toptal.com/developers/gitignore?templates=jupyternotebooks
+
+ ### JupyterNotebooks ###
+ # gitignore template for Jupyter Notebooks
+ # website: http://jupyter.org/
+
+ .ipynb_checkpoints
+ */.ipynb_checkpoints/*
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # Remove previous ipynb_checkpoints
+ # git rm -r .ipynb_checkpoints/
+
+ # End of https://www.toptal.com/developers/gitignore/api/jupyternotebooks
DocLayNet.py ADDED
@@ -0,0 +1,210 @@
+ """
+ Inspired by
+ https://huggingface.co/datasets/ydshieh/coco_dataset_script/blob/main/coco_dataset_script.py
+ """
+
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ class COCOBuilderConfig(datasets.BuilderConfig):
+     def __init__(self, name, splits, **kwargs):
+         super().__init__(name, **kwargs)
+         self.splits = splits
+
+
+ # Add BibTeX citation
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{doclaynet2022,
+   title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
+   doi = {10.1145/3534678.3539043},
+   url = {https://arxiv.org/abs/2206.01062},
+   author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
+   year = {2022}
+ }
+ """
+
+ # Add description of the dataset here
+ # You can copy an official description
+ _DESCRIPTION = """\
+ DocLayNet is a human-annotated document layout segmentation dataset from a broad variety of document sources.
+ """
+
+ # Add a link to an official homepage for the dataset here
+ _HOMEPAGE = "https://developer.ibm.com/exchanges/data/all/doclaynet/"
+
+ # Add the license for the dataset here if you can find it
+ _LICENSE = "CDLA-Permissive-1.0"
+
+ # Add links to the official dataset URLs here
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method)
+
+ _URLs = {
+     "core": "https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip",
+ }
+
+
+ # The name of the dataset usually matches the script name with CamelCase instead of snake_case
+ class COCODataset(datasets.GeneratorBasedBuilder):
+     """A dataset script to work with the local (downloaded) COCO-format DocLayNet dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIG_CLASS = COCOBuilderConfig
+     BUILDER_CONFIGS = [
+         COCOBuilderConfig(name="2022.08", splits=["train", "val", "test"]),
+     ]
+     DEFAULT_CONFIG_NAME = "2022.08"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 # Custom fields
+                 "doc_category": datasets.Value("string"),  # high-level document category
+                 "collection": datasets.Value("string"),  # sub-collection name
+                 "doc_name": datasets.Value("string"),  # original document filename
+                 "page_no": datasets.Value("int64"),  # page number in original document
+             }
+         )
+         object_dict = {
+             "category_id": datasets.ClassLabel(
+                 names=[
+                     "Caption",
+                     "Footnote",
+                     "Formula",
+                     "List-item",
+                     "Page-footer",
+                     "Page-header",
+                     "Picture",
+                     "Section-header",
+                     "Table",
+                     "Text",
+                     "Title",
+                 ]
+             ),
+             "image_id": datasets.Value("string"),
+             "id": datasets.Value("int64"),
+             "area": datasets.Value("int64"),
+             "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+             "segmentation": [[datasets.Value("float32")]],
+             "iscrowd": datasets.Value("bool"),
+             "precedence": datasets.Value("int32"),
+         }
+         features["objects"] = [object_dict]
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # defined above
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         archive_path = dl_manager.download_and_extract(_URLs)
+         splits = []
+         for split in self.config.splits:
+             if split == "train":
+                 dataset = datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     # These kwargs will be passed to _generate_examples
+                     gen_kwargs={
+                         "json_path": os.path.join(archive_path["core"], "COCO", "train.json"),
+                         "image_dir": os.path.join(archive_path["core"], "PNG"),
+                         "split": "train",
+                     },
+                 )
+             elif split in ["val", "valid", "validation", "dev"]:
+                 dataset = datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "json_path": os.path.join(archive_path["core"], "COCO", "val.json"),
+                         "image_dir": os.path.join(archive_path["core"], "PNG"),
+                         "split": "val",
+                     },
+                 )
+             elif split == "test":
+                 dataset = datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "json_path": os.path.join(archive_path["core"], "COCO", "test.json"),
+                         "image_dir": os.path.join(archive_path["core"], "PNG"),
+                         "split": "test",
+                     },
+                 )
+             else:
+                 continue
+
+             splits.append(dataset)
+         return splits
+
+     # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, json_path, image_dir, split):
+         """Yields examples as (key, example) tuples."""
+         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+         # The `key` is here for legacy reasons (tfds) and is not important in itself.
+         def _image_info_to_example(image_info, image_dir):
+             image = image_info["file_name"]
+             return {
+                 "image_id": image_info["id"],
+                 "image": os.path.join(image_dir, image),
+                 "width": image_info["width"],
+                 "height": image_info["height"],
+                 "doc_category": image_info["doc_category"],
+                 "collection": image_info["collection"],
+                 "doc_name": image_info["doc_name"],
+                 "page_no": image_info["page_no"],
+             }
+
+         with open(json_path, encoding="utf8") as f:
+             annotation_data = json.load(f)
+         images = annotation_data["images"]
+         annotations = annotation_data["annotations"]
+         image_id_to_annotations = collections.defaultdict(list)
+         for annotation in annotations:
+             image_id_to_annotations[annotation["image_id"]].append(annotation)
+
+         for idx, image_info in enumerate(images):
+             example = _image_info_to_example(image_info, image_dir)
+             annotations = image_id_to_annotations[image_info["id"]]
+             objects = []
+             for annotation in annotations:
+                 category_id = annotation["category_id"]  # one-based in the COCO source
+                 if category_id != -1:
+                     category_id = category_id - 1  # shift to the zero-based ClassLabel index
+                 annotation["category_id"] = category_id
+                 objects.append(annotation)
+             example["objects"] = objects
+             yield idx, example
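+
+
+ # Usage sketch (illustrative, not part of the loader logic): the COCO source
+ # annotations use one-based category ids, and `_generate_examples` above shifts
+ # them to the zero-based indices of the `ClassLabel` feature. Assuming a local
+ # copy of this script (the path below is hypothetical), the class names can be
+ # recovered like this:
+ #
+ #     from datasets import load_dataset
+ #
+ #     ds = load_dataset("./DocLayNet.py", "2022.08", split="train")
+ #     label_feature = ds.features["objects"][0]["category_id"]  # the ClassLabel
+ #     first_obj = ds[0]["objects"][0]
+ #     print(label_feature.int2str(first_obj["category_id"]))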
README.md ADDED
@@ -0,0 +1,165 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ license: other
+ pretty_name: DocLayNet
+ size_categories:
+ - 10K<n<100K
+ tags:
+ - layout-segmentation
+ - COCO
+ - document-understanding
+ - PDF
+ task_categories:
+ - object-detection
+ - image-segmentation
+ task_ids:
+ - instance-segmentation
+ ---
+
+ # Dataset Card for DocLayNet
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Annotations](#annotations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
+ - **Repository:** https://github.com/DS4SD/DocLayNet
+ - **Paper:** https://doi.org/10.1145/3534678.3539043
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
+
+ 1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold standard in layout segmentation through human recognition and interpretation of each page layout.
+ 2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals.
+ 3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
+ 4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, making it possible to estimate annotation uncertainty and an upper bound on the prediction accuracy achievable with ML models.
+ 5. *Pre-defined train-, test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and to avoid leakage of unique layout styles across the sets.
+
+ ### Supported Tasks and Leaderboards
+
+ We are hosting a competition at ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
+
+ ## Dataset Structure
+
+ ### Data Fields
+
+ DocLayNet provides four types of data assets:
+
+ 1. PNG images of all pages, resized to square `1025 x 1025px`
+ 2. Bounding-box annotations in COCO format for each PNG image
+ 3. Extra: Single-page PDF files matching each PNG image
+ 4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
+
+ The COCO image records are defined as in this example:
+
+ ```js
+ ...
+ {
+   "id": 1,
+   "width": 1025,
+   "height": 1025,
+   "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
+
+   // Custom fields:
+   "doc_category": "financial_reports", // high-level document category
+   "collection": "ann_reports_00_04_fancy", // sub-collection name
+   "doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
+   "page_no": 9, // page number in original document
+   "precedence": 0, // annotation order; non-zero in case of redundant double- or triple-annotation
+ },
+ ...
+ ```
+
+ The `doc_category` field uses one of the following constants:
+
+ ```
+ financial_reports,
+ scientific_articles,
+ laws_and_regulations,
+ government_tenders,
+ manuals,
+ patents
+ ```
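+
+ As an illustrative sketch (the loader path below is an assumption; any local copy of `DocLayNet.py` works), the fields described above can be inspected after loading:
+
+ ```python
+ from datasets import load_dataset
+
+ # "./DocLayNet.py" is an assumed local path; "2022.08" is the loader's config name.
+ ds = load_dataset("./DocLayNet.py", "2022.08", split="test")
+
+ example = ds[0]
+ print(example["doc_category"], example["doc_name"], example["page_no"])
+
+ # The loader exposes category_id as a zero-based ClassLabel over these 11 names:
+ names = ["Caption", "Footnote", "Formula", "List-item", "Page-footer",
+          "Page-header", "Picture", "Section-header", "Table", "Text", "Title"]
+ for obj in example["objects"]:
+     print(names[obj["category_id"]], obj["bbox"])  # COCO-style [x, y, width, height]
+ ```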
+
+ ### Data Splits
+
+ The dataset provides three splits:
+ - `train`
+ - `val`
+ - `test`
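+
+ A minimal sketch of loading all three splits at once (same assumed local path as above; note that `val` is exposed under the `datasets` name `validation`):
+
+ ```python
+ from datasets import load_dataset
+
+ dsd = load_dataset("./DocLayNet.py", "2022.08")  # returns a DatasetDict
+ print({name: split.num_rows for name, split in dsd.items()})
+ # expected keys: "train", "validation", "test"
+ ```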
+
+ ## Dataset Creation
+
+ ### Annotations
+
+ #### Annotation process
+
+ The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
+
+
+ #### Who are the annotators?
+
+ Annotations are crowdsourced.
+
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
+ You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
+
+ Curators:
+ - Christoph Auer, [@cau-git](https://github.com/cau-git)
+ - Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
+ - Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
+ - Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
+
+ ### Licensing Information
+
+ License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
+
+
+ ### Citation Information
+
+ ```bib
+ @inproceedings{doclaynet2022,
+   title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
+   doi = {10.1145/3534678.3539043},
+   url = {https://doi.org/10.1145/3534678.3539043},
+   author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
+   year = {2022},
+   isbn = {9781450393850},
+   publisher = {Association for Computing Machinery},
+   address = {New York, NY, USA},
+   booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
+   pages = {3743--3751},
+   numpages = {9},
+   location = {Washington DC, USA},
+   series = {KDD '22}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm) and [@cau-git](https://github.com/cau-git) for adding this dataset.