---
task_categories:
- image-to-text
pretty_name: ATR benchmark
size_categories:
- n<1K
language:
- fr
- la
- en
- no
- ar
- zh
- de
- nl
- ca
- ja
tags:
- atr
- htr
- ocr
dataset_info:
features:
- name: dataset
dtype: string
- name: image
dtype: image
- name: text
dtype: string
- name: level
dtype: string
splits:
- name: test
num_bytes: 180132193.0
num_examples: 133
download_size: 178873479
dataset_size: 180132193.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# ATR benchmark - Page/paragraph level
## Dataset Description
- **Homepage:** [ATR benchmark](https://huggingface.co/datasets/Teklia/ATR-benchmark)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The ATR benchmark dataset is a multilingual dataset of 463 document images at page or paragraph level. It has been designed to test ATR (Automatic Text Recognition) models and combines data from several public datasets:
- [BnL Historical Newspapers](https://data.bnl.lu/data/historical-newspapers/)
- [CASIA-HWDB2](https://nlpr.ia.ac.cn/databases/handwriting/Offline_database.html)
- [Churro](https://huggingface.co/datasets/stanford-oval/churro-dataset)
- [DIY History - Social Justice](http://diyhistory.lib.uiowa.edu/)
- [DAI-CRETDHI](https://dai-cretdhi.univ-lr.fr/)
- [Esposalles](https://dag.cvc.uab.es/dataset/the-esposalles-database/)
- FINLAM - Historical Newspapers
- [Horae - Books of hours](https://github.com/oriflamms/HORAE)
- [IAM](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database)
- [NDLOCR](https://lab.ndl.go.jp/data_set/r4_kotenocr_en/)
- [NorHand v3](https://zenodo.org/records/10255840)
- [OpenITI](https://openiti.org/projects/OpenITI%20Corpus.html)
- [Marius PELLET](https://europeana.transcribathon.eu/documents/story/?story=121795)
- [QARI](https://huggingface.co/datasets/NAMAA-Space/QariOCR-v0.3-markdown-mixed-dataset)
- [RASM](http://www.primaresearch.org/RASM2019/)
- [READ-2016](https://zenodo.org/records/218236)
- [RIMES](https://teklia.com/research/rimes-database/)
- [ScribbleLens](https://openslr.org/84/)
Images are provided at their original size.
### Split
| dataset | images | language |
| ------------------------------ | -----: | ---------------- |
| BnL Historical Newspapers | 3 | German |
| CASIA-HWDB2 | 10 | Chinese |
| Churro | 290 | Multi-lingual |
| DAI-CRETDHI | 10 | French |
| DIY History - Social Justice | 20 | English |
| Esposalles | 10 | Catalan |
| FINLAM - Historical Newspapers | 10 | English / French |
| Horae - Books of hours | 10 | Latin |
| IAM | 10 | English |
| NDLOCR | 10 | Japanese |
| NorHand v3 | 10 | Norwegian |
| OpenITI | 10 | Arabic |
| Marius PELLET | 10 | French |
| QARI | 10 | Arabic |
| RASM | 10 | Arabic |
| READ-2016 | 10 | German |
| RIMES | 10 | French |
| ScribbleLens | 10 | Dutch |
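
The per-dataset counts in the table above sum to the 463 images stated in the summary; a minimal sanity-check sketch:

```python
# Per-dataset image counts, copied from the split table above.
counts = {
    "BnL Historical Newspapers": 3,
    "CASIA-HWDB2": 10,
    "Churro": 290,
    "DAI-CRETDHI": 10,
    "DIY History - Social Justice": 20,
    "Esposalles": 10,
    "FINLAM - Historical Newspapers": 10,
    "Horae - Books of hours": 10,
    "IAM": 10,
    "NDLOCR": 10,
    "NorHand v3": 10,
    "OpenITI": 10,
    "Marius PELLET": 10,
    "QARI": 10,
    "RASM": 10,
    "READ-2016": 10,
    "RIMES": 10,
    "ScribbleLens": 10,
}

total = sum(counts.values())
print(total)  # 463
```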
## Dataset Structure
### Data Instances
```
{
    'dataset': '{dataset name}',
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size={img size} at 0x1A800E8E190>,
    'text': '{transcription}',
    'level': '{page or paragraph}'
}
```
### Data Fields
- `dataset`: the name of the source dataset.
- `image`: a `PIL.Image.Image` object containing the image. Accessing the image column (e.g. `dataset[0]["image"]`) decodes the image file automatically, and decoding a large number of images can take significant time. Always index the sample first and the column second: prefer `dataset[0]["image"]` over `dataset["image"][0]`.
- `text`: the transcription of the image.
- `level`: the annotation level, either a full document page or a single paragraph.
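
The row-first access advice above can be illustrated with a toy stand-in (plain Python, not the actual `datasets` API): indexing a row decodes a single image, while reading the whole `image` column decodes every image in the split.

```python
class ToySplit:
    """Toy stand-in mimicking decode-on-access behavior:
    split[i] decodes one image; split["image"] decodes them all."""

    def __init__(self, paths, texts):
        self.paths, self.texts = paths, texts
        self.decodes = 0  # counts simulated image decodes

    def _decode(self, path):
        self.decodes += 1  # stands in for an expensive JPEG decode
        return f"<decoded {path}>"

    def __getitem__(self, key):
        if isinstance(key, int):  # row access: decode only this sample
            return {"image": self._decode(self.paths[key]),
                    "text": self.texts[key]}
        if key == "image":        # column access: decode every sample
            return [self._decode(p) for p in self.paths]
        return list(self.texts)

split = ToySplit(["a.jpg", "b.jpg", "c.jpg"], ["x", "y", "z"])

_ = split[0]["image"]             # row-first: 1 decode
row_cost = split.decodes
_ = split["image"][0]             # column-first: 3 decodes
column_cost = split.decodes - row_cost
print(row_cost, column_cost)      # 1 3
```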