# DocLayNet 6-Class Filtered Dataset

## Dataset Description

This is a filtered version of the DocLayNet dataset containing only the 6 most relevant layout element classes for document layout analysis tasks.
## Original Dataset

DocLayNet is a human-annotated document layout segmentation dataset containing 80,863 pages from diverse sources with 11 distinct layout categories.
**Citation:**

```bibtex
@article{doclaynet2022,
  title  = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
  author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year   = {2022},
  doi    = {10.1145/3534678.3539043},
}
```
## Filtering Methodology

**Classes Retained (6):**
- Text - Body text paragraphs
- List-item - List elements (bulleted, numbered)
- Section-header - Section and subsection titles
- Picture - Images, figures, diagrams
- Table - Tabular data structures
- Caption - Image and table captions
**Classes Removed (5):**
- Footnote
- Formula
- Page-footer
- Page-header
- Title
**Rationale:** The retained classes are the most common and most semantically important layout elements for general document understanding tasks, representing 85.1% of all annotations in the original dataset.
## Dataset Statistics

### Split Distribution
| Split | Images | Annotations | Classes |
|---|---|---|---|
| Train | 68,673 | 800,614 | 6 |
| Validation | 6,446 | 85,057 | 6 |
| Test | 4,952 | 56,483 | 6 |
| Total | 80,071 | 942,154 | 6 |
### Class Distribution (Training Set)

Based on 800,614 annotations:
| Class ID | Class Name | Count | Percentage |
|---|---|---|---|
| 0 | Caption | 19,218 | 2.4% |
| 1 | List-item | 161,818 | 20.2% |
| 2 | Picture | 39,667 | 5.0% |
| 3 | Section-header | 118,590 | 14.8% |
| 4 | Table | 30,070 | 3.8% |
| 5 | Text | 431,251 | 53.9% |
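Given the heavy skew toward Text (53.9% of annotations), detection training often benefits from per-class loss weighting. A minimal sketch using inverse-frequency weights computed from the counts above (the weighting scheme is a common remedy suggested here, not something shipped with the dataset):

```python
# Training-set annotation counts from the table above
counts = {
    "Caption": 19_218, "List-item": 161_818, "Picture": 39_667,
    "Section-header": 118_590, "Table": 30_070, "Text": 431_251,
}

# Inverse-frequency weights: rare classes get proportionally larger weights,
# normalized so that a perfectly balanced dataset would give weight 1.0 each
total = sum(counts.values())
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
```

The resulting dictionary can be passed (in class-ID order) as the weight vector of a classification loss such as cross-entropy.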
### Retention from Original Dataset
- Images retained: 99.0%
- Annotations retained: 85.1%
## Dataset Structure

### Format

Annotations are provided in COCO JSON format:

```
DocLayNet_6class/
├── coco/
│   ├── train.json    # Training annotations
│   ├── val.json      # Validation annotations
│   └── test.json     # Test annotations
└── README.md         # This file
```
**Images are NOT included** - use the original DocLayNet image files from:

- HuggingFace: `docling-project/DocLayNet`
- Official source: https://github.com/DS4SD/DocLayNet
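Assuming the original DocLayNet release is unpacked locally with its page images under a `PNG/` directory (an assumption about the original archive's layout; the root path below is hypothetical), records in this dataset can be joined back to image files via the COCO `file_name` field:

```python
from pathlib import Path

# Hypothetical local path to the unpacked original DocLayNet release
DOCLAYNET_ROOT = Path("/data/DocLayNet")

def image_path(image_record: dict) -> Path:
    """Map a COCO image record from this dataset to its original image file."""
    return DOCLAYNET_ROOT / "PNG" / image_record["file_name"]
```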
## Loading the Dataset

### Using HuggingFace Datasets

```python
from datasets import load_dataset

# Load the filtered annotations
dataset = load_dataset("kbang2021/doclaynet-6class")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]
```
### Manual Loading

```python
import json

# Load COCO annotations
with open("coco/train.json") as f:
    train_coco = json.load(f)

categories = train_coco["categories"]    # 6 classes with IDs 0-5
images = train_coco["images"]            # Image metadata
annotations = train_coco["annotations"]  # Bounding boxes
```
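A downloaded split can be sanity-checked against the statistics above by tallying annotations per class directly from the COCO file. A small helper sketch (the function name is illustrative):

```python
import json
from collections import Counter

def class_counts(coco_path: str) -> dict:
    """Return {class name: annotation count} for one COCO split file."""
    with open(coco_path) as f:
        coco = json.load(f)
    names = {c["id"]: c["name"] for c in coco["categories"]}
    tally = Counter(a["category_id"] for a in coco["annotations"])
    return {names[cid]: n for cid, n in tally.items()}
```

For the training split, the returned counts should match the class-distribution table above (e.g. 431,251 for Text).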
## Annotation Format

Each annotation follows the COCO format:

```json
{
  "id": 12345,
  "image_id": 123,
  "category_id": 5,
  "bbox": [x_min, y_min, width, height],
  "area": 12345.67,
  "iscrowd": 0
}
```

`category_id` ranges over 0-5 (remapped from the original 11 classes); `bbox` values are in pixels.
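COCO stores boxes as `[x_min, y_min, width, height]`, while many training pipelines expect corner coordinates or normalized center format instead. Two small conversion helpers (names and the target formats are illustrative, not part of this dataset):

```python
def coco_to_xyxy(bbox):
    """Convert COCO [x_min, y_min, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def coco_to_yolo(bbox, img_w, img_h):
    """Convert COCO bbox to normalized [x_center, y_center, width, height]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]
```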
## Category Mapping

Original DocLayNet → 6-Class Filtered:

| Original ID | Original Name | Filtered ID | Filtered Name | Status |
|---|---|---|---|---|
| 0 | Caption | 0 | Caption | ✓ Kept |
| 1 | Footnote | - | - | ✗ Removed |
| 2 | Formula | - | - | ✗ Removed |
| 3 | List-item | 1 | List-item | ✓ Kept |
| 4 | Page-footer | - | - | ✗ Removed |
| 5 | Page-header | - | - | ✗ Removed |
| 6 | Picture | 2 | Picture | ✓ Kept |
| 7 | Section-header | 3 | Section-header | ✓ Kept |
| 8 | Table | 4 | Table | ✓ Kept |
| 9 | Text | 5 | Text | ✓ Kept |
| 10 | Title | - | - | ✗ Removed |
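The mapping above can be applied programmatically. A sketch of how the filtering could be reproduced from an original DocLayNet COCO file (this helper is illustrative, not the actual script used to build this dataset):

```python
# original_id -> filtered_id, per the Category Mapping table
KEEP = {0: 0, 3: 1, 6: 2, 7: 3, 8: 4, 9: 5}
NAMES = ["Caption", "List-item", "Picture", "Section-header", "Table", "Text"]

def filter_coco(coco: dict) -> dict:
    """Drop annotations of removed classes, remap category IDs,
    and keep only images that still have at least one annotation."""
    anns = [
        {**a, "category_id": KEEP[a["category_id"]]}
        for a in coco["annotations"]
        if a["category_id"] in KEEP
    ]
    kept_image_ids = {a["image_id"] for a in anns}
    return {
        "images": [im for im in coco["images"] if im["id"] in kept_image_ids],
        "annotations": anns,
        "categories": [{"id": i, "name": n} for i, n in enumerate(NAMES)],
    }
```

Dropping images left with zero annotations is one plausible reading of the "99.0% images retained" figure above; the exact criterion used for this dataset is not documented here.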
## Use Cases
This filtered dataset is ideal for:
- Document layout analysis with focus on content structure
- Information extraction from documents (text, tables, figures)
- Object detection model training for document AI
- Multi-scale document understanding tasks
- Transfer learning from general object detection to document analysis
## Limitations
- Images not included: You must obtain images from the original DocLayNet dataset
- Class imbalance: Text class dominates (53.9% of annotations)
- Domain specific: Focused on document layout, may not generalize to other domains
- Annotation quality: Inherits any annotation errors from original DocLayNet
## Ethical Considerations
- Dataset maintains the original DocLayNet license (CDLA-Permissive-2.0)
- No personal or sensitive information in annotations
- Source documents from diverse domains (financial, scientific, patents, manuals)
- Should not be used to discriminate based on document type or origin
## Citation

If you use this filtered dataset, please cite both:

**Original DocLayNet paper:**
```bibtex
@article{doclaynet2022,
  title  = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis},
  author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
  year   = {2022},
  doi    = {10.1145/3534678.3539043},
}
```

**This filtered version:**

```bibtex
@misc{doclaynet6class2024,
  title        = {DocLayNet 6-Class: Filtered Document Layout Analysis Dataset},
  author       = {Keng Boon, Ang},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/kbang2021/doclaynet-6class}},
  note         = {Filtered subset of DocLayNet containing 6 primary layout element classes}
}
```
## License

This filtered dataset maintains the original license: **CDLA-Permissive-2.0** (Community Data License Agreement, Permissive, Version 2.0).

See: https://cdla.dev/permissive-2-0/
## Acknowledgments
- Original DocLayNet dataset: IBM Research
- Built using the layout-for-tools evaluation framework
## Contact
For questions or issues with this filtered dataset, please open an issue on the repository.
For questions about the original DocLayNet dataset, see: https://github.com/DS4SD/DocLayNet