---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: output
    dtype: string
  splits:
  - name: train
    num_bytes: 2440808481
    num_examples: 22435
  - name: validation
    num_bytes: 329337240
    num_examples: 2804
  - name: test
    num_bytes: 328649745
    num_examples: 2805
  download_size: 3127169673
  dataset_size: 3098795466
---
|
|
# Arabic-Image2Html Dataset |
|
|
|
|
|
A dataset of **28K image–HTML pairs** for training OCR models that transform Arabic document images into structured, semantic HTML.
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
This dataset was created to address the lack of open-source Arabic OCR datasets that pair document images with semantic HTML. It contains diverse Arabic document images, each paired with clean, semantic HTML output.
|
|
|
|
|
### Dataset Composition |
|
|
|
|
|
The dataset consists of two main components: |
|
|
|
|
|
**1. Web-Scraped Wikipedia Content (~13K samples, 46%)**
- Extracted from Arabic Wikipedia articles
- Post-processed HTML with only semantic tags preserved
- Screenshots captured with real styling using Playwright
- Cleaned structure with proper semantic elements (section, header, main, etc.)
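The post-processing step above can be sketched with Python's standard-library HTML parser. This is a minimal illustration, not the actual pipeline code: the set of dropped attributes and the `clean` helper are assumptions.

```python
from html.parser import HTMLParser

class SemanticCleaner(HTMLParser):
    """Re-emit HTML while dropping presentational attributes (assumed set)."""
    DROP = {"id", "class", "style"}

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Keep only attributes outside the drop set (e.g. href survives).
        kept = "".join(f' {k}="{v}"' for k, v in attrs if k not in self.DROP)
        self.out.append(f"<{tag}{kept}>")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

def clean(html: str) -> str:
    """Strip id/class/style attributes, preserving tag structure and text."""
    parser = SemanticCleaner()
    parser.feed(html)
    return "".join(parser.out)

print(clean('<section id="x" class="y"><p class="lead">نص</p></section>'))
# → <section><p>نص</p></section>
```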
|
|
|
|
|
**2. Synthetically Generated Documents (~15K samples, 54%)**
- HTML documents rendered into images using CSS styling
- Mimics various real-world document types:
  - Historical manuscripts
  - Newspaper articles
  - Scientific papers
  - Invoices
  - Recipes
  - And more (~13 formats total)
- Diverse layouts, styles, noise levels, fonts, and text flows
- Filled with plain Arabic text from open datasets
- Multiple semantic tag combinations (footer, table, section, etc.)
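The template-filling idea behind the synthetic component can be sketched as below. The two templates and the `make_synthetic_doc` helper are purely illustrative; the real generator covers ~13 formats with varied CSS styling, fonts, and noise levels.

```python
import random

# Illustrative templates only — the actual dataset uses ~13 document
# formats with varied CSS, fonts, and noise (these two are assumptions).
TEMPLATES = {
    "newspaper": "<article><header><h1>{title}</h1></header>"
                 "<main><p>{body}</p></main></article>",
    "recipe": "<section><h2>{title}</h2><ul><li>{body}</li></ul></section>",
}

def make_synthetic_doc(title: str, body: str, rng: random.Random):
    """Pick a document format at random and fill it with plain Arabic text."""
    fmt = rng.choice(sorted(TEMPLATES))
    inner = TEMPLATES[fmt].format(title=title, body=body)
    # Arabic documents are right-to-left, hence dir="rtl".
    page = f'<html dir="rtl" lang="ar"><body>{inner}</body></html>'
    return fmt, page

fmt, html = make_synthetic_doc("عنوان", "نص تجريبي", random.Random(0))
print(fmt)
```

The rendered page would then be screenshotted to produce the image half of the pair, with the filled HTML as the target output.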
|
|
|
|
|
### Features |
|
|
|
|
|
- **Total samples:** 28,044 image–HTML pairs (22,435 train / 2,804 validation / 2,805 test)
- **Language:** Arabic
- **Output format:** Semantic HTML (clean tags without id or class attributes)
- **Document diversity:** Multiple formats and layouts
|
|
|
|
|
## Usage |
|
|
|
|
|
### Loading the Dataset |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("OussamaBenSlama/arabic-image2html")
```
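Each row exposes the three features listed above (`id`, `image`, `output`). A minimal sketch of packing one row into a chat-style record for vision-LLM fine-tuning — the instruction text and message layout here are assumptions that may need adapting to your trainer:

```python
def to_training_record(sample: dict,
                       instruction: str = "Convert this Arabic document image to semantic HTML.") -> dict:
    """Pack one dataset row into a chat-style training example.

    Field names (id, image, output) match the dataset card; the
    instruction string and message schema are illustrative only.
    """
    return {
        "id": sample["id"],
        "messages": [
            {"role": "user", "content": [
                {"type": "image", "image": sample["image"]},
                {"type": "text", "text": instruction},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": sample["output"]},
            ]},
        ],
    }

# Usage with a dummy row (a real row comes from dataset["train"][i]):
record = to_training_record({"id": "0001", "image": object(), "output": "<p>نص</p>"})
```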
|
|
|
|
|
|
|
|
## Limitations |
|
|
|
|
|
- Limited examples with diacritical marks (tashkeel), which may affect model performance on texts with extensive diacritics
- Wikipedia samples share similar design patterns
- Synthetic generation may not capture all real-world document variations
|
|
|
|
|
## Related Resources |
|
|
|
|
|
- **Model:** [Alef-OCR-Image2Html](https://huggingface.co/OussamaBenSlama/Alef-OCR-Image2Html)
- **Training Notebooks:** [GitHub Repository](https://github.com/OussamaBenSlama/Alef-OCR-Image2Html)
|
|
|
|
|
## Citation |
|
|
|
|
|
```bibtex
@misc{arabic_image2html_2025,
  title={Arabic-Image2Html: A Dataset for Arabic OCR to Semantic HTML},
  author={Oussama Ben Slama},
  year={2025},
  howpublished={Hugging Face Datasets},
  url={https://huggingface.co/datasets/OussamaBenSlama/arabic-image2html}
}
```
|
|
|
|
|
## License |
|
|
|
|
|
Apache 2.0
|
|
## Acknowledgments |
|
|
|
|
|
This work builds upon the excellent research by the NAMAA community and their state-of-the-art Qari-OCR model. |
|
|
|