staghado committed
Commit 7c255a3 · verified · 1 Parent(s): cdc9592

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +75 -23
README.md CHANGED
@@ -1,25 +1,77 @@
  ---
- dataset_info:
-   features:
-   - name: bboxes
-     list:
-       list: int32
-   - name: pdf
-     dtype: binary
-   splits:
-   - name: arxiv
-     num_bytes: 349567152
-     num_examples: 565
-   - name: olmocr_bench
-     num_bytes: 113260284
-     num_examples: 290
-   download_size: 427387535
-   dataset_size: 462827436
- configs:
- - config_name: default
-   data_files:
-   - split: arxiv
-     path: data/arxiv-*
-   - split: olmocr_bench
-     path: data/olmocr_bench-*
+ license: apache-2.0
+ task_categories:
+ - object-detection
+ - image-to-text
+ language:
+ - en
+ size_categories:
+ - n<1K
  ---
+
+ # LightOnOCR-bbox-bench
+
+ An evaluation benchmark for assessing the ability of vision-language models (VLMs) to localize images within documents using bounding boxes.
+
+ ## Task Description
+
+ Given a document page (PDF), the model must predict bounding boxes around the images (figures, charts, photographs, etc.) present on the page. This evaluates the model's spatial understanding and its ability to distinguish visual content from text in complex document layouts.
+
+ Each sample contains 1-5 images to localize, with ground-truth bounding boxes normalized to a 0-1000 coordinate space.
+
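+ The card does not specify a scoring metric for comparing predicted and ground-truth boxes; a common choice for this kind of localization task is IoU-based matching. A minimal sketch (the 0.5 threshold and the greedy matching strategy are assumptions, not part of the benchmark):
+
+ ```python
+ def iou(a, b):
+     """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
+     ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+     ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+     union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
+     return inter / union if union else 0.0
+
+ def recall_at_iou(pred, gt, thr=0.5):
+     """Fraction of ground-truth boxes greedily matched at IoU >= thr."""
+     remaining = list(pred)
+     hits = 0
+     for g in gt:
+         best = max(remaining, key=lambda p: iou(p, g), default=None)
+         if best is not None and iou(best, g) >= thr:
+             hits += 1
+             remaining.remove(best)
+     return hits / len(gt) if gt else 1.0
+ ```
+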
+ ## Dataset Structure
+
+ **Splits:**
+ - `arxiv`: 565 samples from scientific papers
+ - `olmocr_bench`: 290 samples from diverse document types
+
+ **Columns:**
+ - `bboxes`: List of `[x1, y1, x2, y2]` bounding boxes normalized to 0-1000 coordinate space
+ - `pdf`: Single-page PDF as bytes
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load dataset
+ dataset = load_dataset("lightonai/LightOnOCR-bbox-bench")
+
+ # Access a sample
+ sample = dataset['arxiv'][0]
+ gt_bboxes = sample['bboxes']  # [[x1, y1, x2, y2], ...] normalized to 0-1000
+ pdf_bytes = sample['pdf']     # single-page PDF as bytes
+
+ # Render the PDF to an image using your preferred library, then convert the
+ # normalized bboxes (0-1000) to pixel coordinates based on the rendered image size.
+ ```
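+
+ The rendering step can be filled in with any PDF rasterizer. A minimal sketch using pypdfium2 (one library choice among several; the `scale` factor here is an arbitrary example):
+
+ ```python
+ import pypdfium2 as pdfium  # assumed renderer; pdf2image or PyMuPDF also work
+
+ # Render the single-page PDF to a PIL image
+ page = pdfium.PdfDocument(pdf_bytes)[0]
+ image = page.render(scale=2.0).to_pil()
+
+ # Map 0-1000 normalized coordinates onto the rendered image
+ w, h = image.size
+ pixel_bboxes = [
+     [x1 * w / 1000, y1 * h / 1000, x2 * w / 1000, y2 * h / 1000]
+     for x1, y1, x2, y2 in gt_bboxes
+ ]
+ ```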
+
+ ## Dataset Composition
+
+ **ArXiv (565 samples):**
+ - Scientific papers with figures, charts, and diagrams
+ - Automatically annotated using the nvpdftex toolkit
+ - Filtered to 1-5 images per page
+
+ **OlmOCR (290 samples):**
+ - Diverse document types: mathematical papers, tables, multi-column layouts, historical scans
+ - Images and annotations from [allenai/olmOCR-bench](https://huggingface.co/datasets/allenai/olmOCR-bench)
+ - Filtered to 1-5 images per page, excluding logo-only samples
+
+ ## Source Datasets
+
+ - **ArXiv subset**: Scientific papers from arXiv
+ - **OlmOCR subset**: Derived from `allenai/olmOCR-bench`
+
+ ## Citation
+
+ If you use this dataset, please cite:
+
+ ```bibtex
+ @misc{lightonocr2_2026,
+   title = {LightOnOCR: End-to-End, Multilingual, Efficient, State-of-the-Art Vision-Language Model for OCR},
+   author = {Said Taghadouini and Adrien Cavaill\`{e}s and Baptiste Aubertin},
+   year = {2026},
+   howpublished = {\url{https://huggingface.co/blog/lightonai/lightonocr-2}}
+ }
+ ```