---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: markdown
    dtype: string
  splits:
  - name: train
    num_bytes: 634447055
    num_examples: 1256
  download_size: 554295266
  dataset_size: 634447055
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
|
|
# Arabic Document OCR Markdown Dataset |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
This dataset contains 1,256 pairs of document images and their corresponding Markdown representations, specifically designed for Arabic document OCR tasks. The dataset is intended for training and evaluating models that convert document images into structured Markdown text (image-to-markdown OCR). |
|
|
|
|
|
## Features |
|
|
|
|
|
The dataset has two features:
|
|
|
|
|
- **image**: Document page images; the `image` feature decodes to a PIL image when loaded with the `datasets` library
- **markdown**: The corresponding Markdown transcription of each document image
|
|
|
|
|
## Use Cases |
|
|
|
|
|
This dataset can be used for: |
|
|
|
|
|
- Training image-to-text OCR models for Arabic documents |
|
|
- Evaluating document understanding models |
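For the evaluation use case, OCR output is commonly scored with character error rate (CER): the Levenshtein distance between the predicted text and the reference, normalized by the reference length. A minimal sketch (not part of this dataset's tooling) might look like:

```python
def char_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein edit distance between the two strings,
    normalized by the reference length (lower is better)."""
    m, n = len(reference), len(hypothesis)
    # prev holds the edit distances for the previous row of the DP table.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(
                prev[j] + 1,      # deletion
                curr[j - 1] + 1,  # insertion
                prev[j - 1] + cost,  # substitution (or match)
            )
        prev = curr
    return prev[n] / max(m, 1)
```

Because CER works at the character level, it applies to Arabic text and Markdown markup alike, though dedicated Markdown-structure metrics may be more informative for table- and heading-heavy documents.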
|
|
|
|
|
## Loading the Dataset |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("Omar-youssef/arabic-document-ocr-markdown")
```
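Once loaded, each record behaves like a dict with an `image` key (a decoded PIL image) and a `markdown` key (a string). The sketch below computes simple length statistics over the transcriptions; it uses hypothetical stand-in records so it runs without downloading the dataset:

```python
# Stand-in records mimicking the dataset schema ("image" would normally
# hold a PIL image; None is used here as a placeholder).
records = [
    {"image": None, "markdown": "# Title\n\nBody."},
    {"image": None, "markdown": "| a | b |\n|---|---|"},
]

def markdown_stats(records):
    """Character and line counts for each markdown transcription."""
    return [
        {"chars": len(r["markdown"]), "lines": r["markdown"].count("\n") + 1}
        for r in records
    ]

stats = markdown_stats(records)
```

With the real dataset, the same function can be applied to `dataset["train"]`, since each element exposes the `markdown` field as a plain string.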
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset in your research, please cite: |
|
|
|
|
|
```bibtex
@dataset{arabic_document_ocr_markdown,
  author    = {Omar Youssef},
  title     = {Arabic Document OCR Markdown Dataset},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Omar-youssef/arabic-document-ocr-markdown}
}
```