# LightOnOCR-bbox-bench

Part of the LightOnOCR-2 🦉 collection (LightOnOCR-2-1B: a lightweight, high-performance, end-to-end OCR model family).
LightOnOCR-bbox-bench is an evaluation benchmark for assessing the ability of vision-language models (VLMs) to localize images within documents using bounding boxes.

Given a document page (a single-page PDF), the model must predict bounding boxes around the images (figures, charts, photographs, etc.) present on the page. This tests the model's spatial understanding and its ability to distinguish visual content from text in complex document layouts.

Each sample contains 1-5 images to localize, with ground-truth bounding boxes normalized to a 0-1000 coordinate space.
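This page does not spell out the official scoring protocol, but benchmarks of this kind are commonly scored with IoU-based matching. Below is a minimal, hypothetical scorer operating directly on boxes in the normalized 0-1000 space; the greedy matching scheme and the 0.5 IoU threshold are illustrative assumptions, not the benchmark's definition:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_boxes(pred, gt, thresh=0.5):
    """Greedily match each predicted box to one unmatched ground-truth box.

    Returns (true_positives, num_predictions, num_ground_truth), from which
    precision and recall follow. Illustrative only, not the official metric.
    """
    matched, tp = set(), 0
    for p in pred:
        best_j, best_iou = None, thresh
        for j, g in enumerate(gt):
            v = iou(p, g)
            if j not in matched and v >= best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    return tp, len(pred), len(gt)
```

Precision would then be `tp / num_predictions` and recall `tp / num_ground_truth`, aggregated over a split.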
Splits:

- `arxiv`: 565 samples from scientific papers
- `olmocr_bench`: 290 samples from diverse document types

Columns:

- `bboxes`: list of `[x1, y1, x2, y2]` bounding boxes, normalized to a 0-1000 coordinate space
- `pdf`: single-page PDF as bytes

Usage:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("lightonai/LightOnOCR-bbox-bench")

# Access a sample
sample = dataset["arxiv"][0]
gt_bboxes = sample["bboxes"]  # [[x1, y1, x2, y2], ...], normalized to 0-1000
pdf_bytes = sample["pdf"]     # single-page PDF as bytes

# Render the PDF to an image using your preferred library, then convert the
# normalized bboxes (0-1000) to pixel coordinates based on the rendered
# image's dimensions (see the sketch below).
```
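For the rendering and conversion step mentioned in the comments above, here is one possible sketch using pypdfium2; any PDF renderer works, the `scale=2` choice is arbitrary, and the snippet assumes the `pdf_bytes` and `gt_bboxes` variables from the example above:

```python
import pypdfium2 as pdfium

# Render the single-page PDF to a PIL image
pdf = pdfium.PdfDocument(pdf_bytes)
image = pdf[0].render(scale=2).to_pil()
width, height = image.size

# Map normalized (0-1000) boxes to pixel coordinates on the rendered page
pixel_bboxes = [
    [x1 / 1000 * width, y1 / 1000 * height, x2 / 1000 * width, y2 / 1000 * height]
    for x1, y1, x2, y2 in gt_bboxes
]
```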
Sources:

- ArXiv (565 samples): pages from scientific papers
- OlmOCR (290 samples): pages from diverse document types, drawn from allenai/olmOCR-bench

If you use this dataset, please cite:
```bibtex
@misc{lightonocr2_2026,
  title        = {LightOnOCR: End-to-End, Multilingual, Efficient, State-of-the-Art Vision-Language Model for OCR},
  author       = {Said Taghadouini and Adrien Cavaill\`{e}s and Baptiste Aubertin},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/blog/lightonai/lightonocr-2}}
}
```