---
dataset_info:
  features:
  - name: Id
    dtype: string
  - name: image
    dtype: image
  - name: Task
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: test
    num_bytes: 1918653929
    num_examples: 2808
  download_size: 1902027712
  dataset_size: 1918653929
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- image-text-to-text
language:
- th
size_categories:
- 1K<n<10K
---
# ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai
**ThaiOCRBench** is the first comprehensive benchmark for evaluating vision-language models (VLMs) on Thai text-rich visual understanding tasks.
Inspired by OCRBench v2, it contains **2,808 human-annotated samples** across **13 diverse tasks**, including table parsing, chart understanding, full-page OCR, key information extraction, and visual question answering.
The benchmark enables **standardized zero-shot evaluation** for both proprietary and open-source models, revealing significant performance gaps and paving the way for document understanding in low-resource languages.
**Our paper _ThaiOCRBench_ has been accepted to the IJCNLP-AACL 2025 Main Conference!**

**[Read the Paper](https://arxiv.org/abs/2511.04479)**

**[GitHub Repository](https://github.com/scb-10x/ThaiOCRBench)**
## Dataset Statistics
| Task Type | Number of Samples |
|-------------------------------|-------------------|
| Text Recognition | 333 |
| Table Parsing | 193 |
| Full-page OCR | 197 |
| Chart Parsing | 200 |
| Key Information Extraction | 201 |
| Diagram VQA | 204 |
| Fine-grained Text Recognition | 206 |
| Handwritten Content Extraction| 209 |
| Key Information Mapping | 209 |
| Document Parsing | 211 |
| Infographics VQA | 213 |
| Document Classification | 215 |
| Cognition VQA | 217 |
| **Total** | **2,808** |
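The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repository id `scb10x/ThaiOCRBench` is an assumption based on the organization name, so check this page's header for the exact identifier. The `Task` field name comes from the feature schema in the front matter above.

```python
# Sketch of loading ThaiOCRBench with the Hugging Face `datasets` library.
# NOTE: the repo id "scb10x/ThaiOCRBench" is an assumption -- verify it against
# this dataset page before running.

def load_thaiocrbench(repo_id: str = "scb10x/ThaiOCRBench"):
    """Download and return the single `test` split (2,808 examples, ~1.9 GB)."""
    # Imported lazily so the counting helper below has no dependencies.
    from datasets import load_dataset
    return load_dataset(repo_id, split="test")

def group_by_task(rows):
    """Count examples per `Task`, mirroring the statistics table above."""
    counts = {}
    for row in rows:
        counts[row["Task"]] = counts.get(row["Task"], 0) + 1
    return counts
```

Running `group_by_task(load_thaiocrbench())` should report 13 task types whose counts sum to 2,808, matching the table above.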
## Performance of VLMs on ThaiOCRBench
<p align="center">
<img src="https://raw.githubusercontent.com/scb-10x/ThaiOCRBench/main/pics/thaiocrbench_eval.png" width="70%" height="60%">
</p>
## Citation
If you use ThaiOCRBench in your research or applications, please cite our work:
```
@misc{nonesung2025thaiocrbenchtaskdiversebenchmarkvisionlanguage,
title={ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai},
author={Surapon Nonesung and Teetouch Jaknamon and Sirinya Chaiophat and Natapong Nitarach and Chanakan Wittayasakpan and Warit Sirichotedumrong and Adisai Na-Thalang and Kunat Pipatanakul},
year={2025},
eprint={2511.04479},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2511.04479},
}
```