---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: image
      dtype: image
    - name: annotations
      list:
        struct:
          - name: category_id
            dtype: int64
          - name: bbox
            list:
              dtype: float32
          - name: area
            dtype: float32
          - name: iscrowd
            dtype: int64
          - name: id
            dtype: int64
          - name: image_id
            dtype: int64
          - name: segmentation
            list:
              list:
                dtype: float32
  splits:
    - name: train
      num_bytes: 155800000
      num_examples: 500
  download_size: 155800000
  dataset_size: 155800000
configs:
  - config_name: default
    data_files:
      - split: train
        path: publaynet_mini.parquet
language:
  - en
license: mit
task_categories:
  - object-detection
task_ids:
  - object-detection
pretty_name: PubLayNet Mini
size_categories:
  - n<1K
tags:
  - document-layout-analysis
  - document-understanding
  - layout-detection
  - academic-papers
  - research
---

# PubLayNet_mini Dataset

A diverse 500-sample subset of the PubLayNet dataset for evaluating document layout analysis models.

## Dataset Details

- **Total Samples**: 500 document images
- **Source**: PubLayNet training set (146,874 total samples)
- **Task**: Document Layout Analysis
- **Format**: Parquet with embedded images and annotations
- **Image Size**: 612×792 pixels (RGB)
- **Categories**: 5 layout element types

## Categories

The dataset contains annotations for 5 categories of document layout elements:

  1. Text (1): Regular text blocks and paragraphs
  2. Title (2): Document titles and headings
  3. List (3): Bulleted or numbered lists
  4. Table (4): Tabular data structures
  5. Figure (5): Images, charts, and diagrams
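
For convenience, the numbered mapping above can be written as a plain Python dict (the constant name `PUBLAYNET_CATEGORIES` is ours, not part of the dataset):

```python
# Mapping from PubLayNet category_id to label name (IDs are 1-based)
PUBLAYNET_CATEGORIES = {
    1: "text",
    2: "title",
    3: "list",
    4: "table",
    5: "figure",
}

label = PUBLAYNET_CATEGORIES[4]  # "table"
```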

## Features

Each sample contains:

- `id`: Unique document identifier
- `image`: Document image (PIL Image), automatically loaded from embedded bytes
- `annotations`: List of layout element annotations with:
  - `category_id`: Element type (1-5)
  - `bbox`: Bounding box coordinates `[x, y, width, height]`
  - `area`: Area of the bounding box
  - `iscrowd`: Whether the annotation is for a crowd of objects
  - `id`: Unique annotation identifier
  - `image_id`: Reference to the document image
  - `segmentation`: Polygon segmentation mask
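
Since `bbox` uses the COCO-style `[x, y, width, height]` convention, converting to corner coordinates is a one-liner. The helper names below are illustrative, not part of the dataset:

```python
def bbox_xywh_to_xyxy(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x1, y1, x2, y2] corners."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

def bbox_area(bbox):
    """Area implied by a [x, y, width, height] box."""
    _, _, w, h = bbox
    return w * h

print(bbox_xywh_to_xyxy([10.0, 20.0, 30.0, 40.0]))  # [10.0, 20.0, 40.0, 60.0]
```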

## Data Storage

Images are stored as embedded bytes in the parquet file and automatically converted to PIL Images when loaded. This ensures:

- Self-contained dataset (no external image dependencies)
- Fast loading and processing
- Compatibility with the HuggingFace `datasets` library

## Category Distribution

This subset maintains diverse representation across categories:

- Text: ~3,676 annotations
- Title: ~1,000 annotations
- List: ~73 annotations
- Table: ~128 annotations
- Figure: ~172 annotations
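
These counts can be recomputed with a few lines of Python. The sketch below runs on a toy annotation list; swapping in `dataset['train']` (loaded as shown in the Usage section) reproduces the numbers above:

```python
from collections import Counter

CATEGORY_NAMES = {1: "text", 2: "title", 3: "list", 4: "table", 5: "figure"}

def category_distribution(samples):
    """Count annotations per category across an iterable of samples."""
    counts = Counter()
    for sample in samples:
        for ann in sample["annotations"]:
            counts[CATEGORY_NAMES[ann["category_id"]]] += 1
    return counts

# Toy example; replace toy_samples with dataset['train'] for the real distribution
toy_samples = [
    {"annotations": [{"category_id": 1}, {"category_id": 1}, {"category_id": 4}]},
    {"annotations": [{"category_id": 2}]},
]
print(category_distribution(toy_samples))  # Counter({'text': 2, 'table': 1, 'title': 1})
```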

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("kenza-ily/publaynet-mini")

# Iterate over samples
for sample in dataset['train']:
    print(f"Document ID: {sample['id']}")
    print(f"Number of layout elements: {len(sample['annotations'])}")

    # Access the image (automatically converted to a PIL Image)
    image = sample['image']  # PIL Image object
    print(f"Image size: {image.size}")

    # Access annotations
    for ann in sample['annotations']:
        category = ann['category_id']
        bbox = ann['bbox']
        segmentation = ann['segmentation']
        print(f"Element {category}: bbox={bbox}")
```

## Loading from Parquet

You can also load the data directly from the parquet file:

```python
import io

import pandas as pd
import pyarrow.parquet as pq
from PIL import Image as PILImage

# Read the parquet file
table = pq.read_table("publaynet_mini.parquet")
df = table.to_pandas()

# Convert images from embedded bytes to PIL Images
def convert_image(img_data):
    if isinstance(img_data, dict) and 'bytes' in img_data:
        return PILImage.open(io.BytesIO(img_data['bytes']))
    return img_data

df['image'] = df['image'].apply(convert_image)

# Access data
for idx, row in df.iterrows():
    image = row['image']              # PIL Image
    annotations = row['annotations']  # list of annotation dicts
```
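
Once the annotations are in a DataFrame, simple filters become one-liners. The sketch below (function name ours) keeps only documents containing at least one element of a given category, demonstrated on a toy DataFrame that mirrors the parquet schema:

```python
import pandas as pd

def samples_with_category(df, category_id):
    """Return rows whose annotation list contains at least one box of `category_id`."""
    mask = df["annotations"].apply(
        lambda anns: any(a["category_id"] == category_id for a in anns)
    )
    return df[mask]

# Toy DataFrame with only the `id` and `annotations` columns
df = pd.DataFrame({
    "id": ["doc1", "doc2"],
    "annotations": [
        [{"category_id": 4, "bbox": [0.0, 0.0, 10.0, 10.0]}],  # contains a table
        [{"category_id": 1, "bbox": [0.0, 0.0, 5.0, 5.0]}],    # text only
    ],
})
tables = samples_with_category(df, 4)
print(tables["id"].tolist())  # ['doc1']
```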

## Citation

Please cite the original PubLayNet paper if you use this subset:

```bibtex
@article{zhong2019publaynet,
  title={PubLayNet: largest dataset ever for document layout analysis},
  author={Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno},
  journal={arXiv preprint arXiv:1908.07836},
  year={2019}
}
```

## License

This subset follows the original PubLayNet dataset license.