---
license: apache-2.0
task_categories:
- object-detection
- image-to-text
language:
- en
size_categories:
- n<1K
---
# LightOnOCR-bbox-bench
Evaluation benchmark for assessing the ability of vision-language models (VLMs) to localize images within documents using bounding boxes.
## Task Description
Given a document page (PDF), the model must predict bounding boxes around images (figures, charts, photographs, etc.) present in the document. This evaluates the model's spatial understanding and ability to distinguish visual content from text in complex document layouts.
Each sample contains 1-5 images to localize, with ground truth bounding boxes normalized to a 0-1000 coordinate space.
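A common way to score predicted boxes against ground truth is intersection-over-union (IoU). The card does not specify the benchmark's exact matching and thresholding protocol, so the function below is an illustrative sketch in the 0-1000 coordinate space, not the official metric:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes
    in the benchmark's normalized 0-1000 coordinate space."""
    # Intersection rectangle (empty if the boxes do not overlap)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a prediction slightly offset from a ground-truth box
print(iou([100, 100, 500, 400], [120, 90, 520, 380]))  # ~0.82
```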
## Dataset Structure
**Splits:**

- `arxiv`: 565 samples from scientific papers
- `olmocr_bench`: 290 samples from diverse document types
**Columns:**

- `bboxes`: list of `[x1, y1, x2, y2]` bounding boxes normalized to a 0-1000 coordinate space
- `pdf`: single-page PDF as bytes
## Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("lightonai/LightOnOCR-bbox-bench")

# Access a sample
sample = dataset['arxiv'][0]
gt_bboxes = sample['bboxes']  # [[x1, y1, x2, y2], ...] normalized to 0-1000
pdf_bytes = sample['pdf']     # single-page PDF as bytes

# Render the PDF to an image using your preferred library, then convert the
# normalized bboxes (0-1000) to pixel coordinates based on the rendered
# image dimensions (see the sketch below).
```
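The last two steps can be implemented, for example, with pypdfium2; any PDF renderer (pdf2image, PyMuPDF, etc.) works equally well. The `to_pixels` helper below is illustrative and not part of the dataset API:

```python
import pypdfium2 as pdfium

def to_pixels(bbox, width, height):
    """Map a [x1, y1, x2, y2] box from the 0-1000 space to pixel coordinates."""
    x1, y1, x2, y2 = bbox
    return [x1 / 1000 * width, y1 / 1000 * height,
            x2 / 1000 * width, y2 / 1000 * height]

# Render the single-page PDF at 2x scale (~144 DPI)
pdf = pdfium.PdfDocument(pdf_bytes)
image = pdf[0].render(scale=2).to_pil()

# Convert every ground-truth box to the rendered image's pixel coordinates
pixel_bboxes = [to_pixels(b, image.width, image.height) for b in gt_bboxes]
```

Because the boxes are normalized, the same conversion works at any rendering resolution; only `image.width` and `image.height` change.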
## Dataset Composition

**ArXiv (565 samples):**
- Scientific papers with figures, charts, and diagrams
- Automatically annotated with the nvpdftex toolkit
- Filtered to 1-5 images per page
**OlmOCR (290 samples):**
- Diverse document types: mathematical papers, tables, multi-column layouts, historical scans
- Images and annotations from `allenai/olmOCR-bench`
- Filtered to 1-5 images per page, excluding logo-only samples
## Source Datasets
- ArXiv subset: Scientific papers from arXiv
- OlmOCR subset: Derived from `allenai/olmOCR-bench`
## Citation
If you use this dataset, please cite:
```bibtex
@misc{lightonocr2_2026,
  title        = {LightOnOCR: End-to-End, Multilingual, Efficient, State-of-the-Art Vision-Language Model for OCR},
  author       = {Said Taghadouini and Adrien Cavaill\`{e}s and Baptiste Aubertin},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/blog/lightonai/lightonocr-2}}
}
```