---
pretty_name: KITAB PDF to Markdown (Reviewed)
language:
- ar
license: apache-2.0
tags:
- ocr
- arabic
- document-understanding
- pdf-to-markdown
dataset_info:
  features:
  - name: markdown
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 68643604.0
    num_examples: 62
  download_size: 68467976
  dataset_size: 68643604.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# KITAB_pdf_to_markdown_reviewed (Corrected KITAB-Bench PDF→Markdown)
|
|
**Short description.** A carefully reviewed and corrected version of the KITAB-Bench PDF-to-Markdown subset for **Arabic document OCR** evaluation. We fixed ground-truth errors (hallucinated text, missing page numbers, omissions of small-font text) and standardized formatting to provide a **reliable benchmark** for model comparison.
|
|
**TL;DR**
- ✅ Human-verified ground truth for Arabic PDF→Markdown
- ✅ Removes hallucinations and fills missing/omitted content
- ✅ Keeps the original task and schema for drop-in evaluation
- 🔗 [Based on KITAB-Bench](https://github.com/mbzuai-oryx/KITAB-Bench)
|
|
---
|
|
## Motivation & Background
|
|
Evaluating Arabic OCR and document understanding models requires robust, accurate benchmarks. During an assessment of the original [**KITAB-Bench**](https://github.com/mbzuai-oryx/KITAB-Bench) PDF-to-Markdown subset, we found problems that compromise fair evaluation:
|
|
- **Hallucinated ground truth:** some reference markdown contained phrases not present in the source page (likely tool-generated).
  *Example:* one entry included the English sentence:
  > “**You're right - let me write it exactly as it appears in the image, maintaining the right-to-left direction:**”
- **Missing page numbers** in references.
- **Omission of small-font text** that is clearly visible in the source image.
|
|
To address this, we manually reviewed and corrected the ground truth, producing this dataset.

---
|
|
## What’s in this dataset?
|
|
- **Split:** `train`
- **Records:** currently 62 page-level samples (may grow in future versions).
- **Fields:**
  - `image` — the page image.
  - `markdown` — human-verified, structure-preserving Markdown for the page.
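
For reference, the two fields map onto `datasets` feature types as follows (a sketch; the authoritative schema is the one declared in the card metadata above):

```python
from datasets import Features, Image, Value

# Each record pairs a page image with its verified Markdown transcription.
features = Features({
    "markdown": Value("string"),  # human-verified, structure-preserving Markdown
    "image": Image(),             # the source page image
})
```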
|
|
---
|
|
## How we corrected the data
|
|
1. **Removed hallucinated phrases** that do not appear in the image (see the screening sketch below).
2. **Restored omitted content**, including **small-font text**.
3. **Added/verified page markers** when appropriate.
4. **Normalized minor formatting** to keep the task consistent across samples.
|
|
Our goal was **minimal, faithful correction**: keep the original task and layout intent, while ensuring the ground truth actually matches the page.
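
The corrections themselves were made by hand, but a simple screening pass can surface candidates for review. The sketch below is hypothetical (it is not part of the release pipeline): it flags reference Markdown containing long Latin-script runs, which on Arabic-only pages often indicates tool-generated text such as the example quoted above.

```python
import re
from datasets import load_dataset

# Hypothetical screening heuristic: flag reference Markdown that contains a long
# run of Latin-script characters, a common symptom of tool-generated
# hallucinations on Arabic pages. Flagged rows still require human inspection.
LATIN_RUN = re.compile(r"[A-Za-z][A-Za-z '\-.,:]{40,}")

def flag_suspects(markdown_pages):
    return [i for i, md in enumerate(markdown_pages) if LATIN_RUN.search(md)]

# Applied to this release as a sanity check; the same pass can screen any
# candidate ground truth before manual review.
ds = load_dataset("Misraj/KITAB_pdf_to_markdown_reviewed", split="train")
print(flag_suspects(ds["markdown"]))
```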
|
|
---
|
|
## Usage
|
|
```python
from datasets import load_dataset

ds = load_dataset("Misraj/KITAB_pdf_to_markdown_reviewed", split="train")
row = ds[0]

# image preview
row["image"].show()

# markdown preview
print(row["markdown"][:800])
```
|
|
---
|
|
## Evaluation protocol (suggested)
|
|
Commonly reported metrics for this task include:
|
|
* **WER / CER** — word/character error rate (↓ better)
* **BLEU / ChrF** — text similarity (↑ better)
* **TEDS** — structural fidelity of tree/HTML/Markdown (↑ better)
* **MARS** — combined structure + text score (↑ better)
|
|
> Evaluate text metrics on normalized text; compute TEDS/MARS on rendered trees/blocks to reflect layout/structure preservation.
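
A minimal sketch of the text metrics, assuming the `jiwer` and `sacrebleu` packages (neither is mandated by this dataset); TEDS needs a separate tree-edit-distance implementation and is treated here as an externally computed score. The `mars` helper mirrors the MARS column in the table below, which is the simple mean of ChrF and TEDS.

```python
import jiwer       # assumed dependency for WER / CER
import sacrebleu   # assumed dependency for BLEU / ChrF

def text_metrics(references: list[str], predictions: list[str]) -> dict:
    """Corpus-level text metrics; both sides should already be normalized."""
    return {
        "wer": jiwer.wer(references, predictions),    # lower is better
        "cer": jiwer.cer(references, predictions),    # lower is better
        "bleu": sacrebleu.corpus_bleu(predictions, [references]).score,
        "chrf": sacrebleu.corpus_chrf(predictions, [references]).score,
    }

def mars(chrf: float, teds: float) -> float:
    # Combined structure + text score: the mean of ChrF and TEDS (both on 0-100).
    return 0.5 * chrf + 0.5 * teds
```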
|
|
---
|
|
## Example results (on the corrected KITAB-Bench PDF→Markdown)
|
|
> Snapshot from our experiments using only open-source models for fairness; best in **bold**, second-best <u>underlined</u>.
|
|
| | Model | WER ↓ | CER ↓ | BLEU ↑ | CHRF ↑ | TEDS ↑ | MARS ↑ | |
| | ----------------- | ----------: | ----------: | -----------: | -----------: | --------: | -----------: | |
| | Dots.ocr | **0.39** | **0.28** | **59.28** | **83.16** | 43 | <u>63.08</u> | |
| | **Baseer (ours)** | 0.61 | <u>0.40</u> | <u>55.78</u> | <u>80.26</u> | **56** | **68.13** | |
| | Nanonets | <u>0.51</u> | <u>0.40</u> | 51.37 | 77.45 | 33 | 55.225 | |
| | Qari | 0.65 | 0.48 | 44.61 | 71.45 | 43 | 57.225 | |
| | Qwen2.5-VL-3B | 0.70 | 0.57 | 40.44 | 66.78 | 31 | 48.89 | |
| | Qwen2.5-VL-7B | 0.76 | 0.63 | 36.76 | 62.45 | 24 | 43.225 | |
| | Gemma3-12B | 0.85 | 0.69 | 27.56 | 52.09 | <u>55</u> | 53.545 | |
| | Gemma3-4B | 0.95 | 0.82 | 12.94 | 31.72 | 27 | 29.36 | |
| | Aya-vision | 1.27 | 0.96 | 5.58 | 16.19 | 26 | 21.095 | |
| | AIN | 1.18 | 1.08 | 2.61 | 3.99 | 24 | 13.995 | |
|
|
**Reading the snapshot.** Dots.ocr leads most text-centric metrics, while **Baseer** achieves the **best structural** score (TEDS) and **best overall MARS**, reflecting stronger layout understanding. The KITAB-Bench subset is small (62 pages), so each misprediction impacts the score noticeably. On our larger and more diverse **Misraj-DocOCR** benchmark (400 expert-verified pages), Baseer’s advantage is more pronounced.
|
|
---
|
|
## How to cite
|
|
If you use this dataset, please cite **both** this corrected release and the original KITAB-Bench:
|
|
**This dataset (recommended):**
|
|
```bibtex
@misc{hennara2025baseervisionlanguagemodelarabic,
  title={Baseer: A Vision-Language Model for Arabic Document-to-Markdown OCR},
  author={Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Ahmad Bastati and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan},
  year={2025},
  eprint={2509.18174},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.18174},
}
```
|
|