# ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai
**ThaiOCRBench** is the first comprehensive benchmark for evaluating vision-language models (VLMs) on Thai text-rich visual understanding tasks.
Inspired by OCRBench v2, it contains **2,808 human-annotated samples** across **13 diverse tasks**, including table parsing, chart understanding, full-page OCR, key information extraction, and visual question answering.
The benchmark enables standardized zero-shot evaluation of both proprietary and open-source models, revealing significant performance gaps and paving the way for document understanding research in low-resource languages.
🚀 **Our paper _ThaiOCRBench_ has been accepted to the IJCNLP-AACL 2025 Main Conference!**
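The test split can be loaded directly with the 🤗 `datasets` library. Below is a minimal sketch; the repository ID `scb10x/thai_ocrbench` is an assumption (check the dataset page for the exact name), and the per-sample fields depend on the released schema:

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with the one shown on the dataset page.
ds = load_dataset("scb10x/thai_ocrbench", split="test")

print(ds)     # number of rows and column names
print(ds[0])  # one human-annotated sample (image plus task annotations)
```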
## 📊 Dataset Statistics
| Task Type | Number of Samples |
|-------------------------------|-------------------|
| Text Recognition | 333 |
| Table Parsing | 193 |
| Full-page OCR | 197 |
| Chart Parsing | 200 |
| Key Information Extraction | 201 |
| Diagram VQA | 204 |
| Fine-grained Text Recognition | 206 |
| Handwritten Content Extraction| 209 |
| Key Information Mapping | 209 |
| Document Parsing | 211 |
| Infographics VQA | 213 |
| Document Classification | 215 |
| Cognition VQA | 217 |
| **Total** | **2,808** |
## 🧠 Performance of VLMs on ThaiOCRBench
<p align="center">
<img src="https://github.com/scb-10x/ThaiOCRBench/blob/main/pics/thaiocrbench_eval.png" width="88%" height="60%">
</p>
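As a rough illustration of the zero-shot protocol mentioned above, the sketch below iterates over the test split and queries a vision-language model through a placeholder `generate_answer` function. The repository ID and the field names (`image`, `question`, `answer`) are assumptions to be adapted to the released schema, and exact match is only a stand-in for the task-specific metrics used in the paper.

```python
from datasets import load_dataset

def generate_answer(image, question: str) -> str:
    """Placeholder VLM call (API request or local model inference).
    Replace with your own zero-shot inference code."""
    return ""  # stub so the loop runs end to end

ds = load_dataset("scb10x/thai_ocrbench", split="test")  # hypothetical repo ID

correct = 0
for sample in ds:
    # Field names are assumed; adapt them to the actual schema.
    prediction = generate_answer(sample["image"], sample["question"])
    correct += int(prediction.strip() == str(sample["answer"]).strip())

print(f"Exact-match accuracy: {correct / len(ds):.3f}")
```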
## 🗂️ Dataset Metadata

- **License:** CC BY-SA 4.0 (`cc-by-sa-4.0`)
- **Language:** Thai (`th`)
- **Task category:** `image-text-to-text`
- **Size category:** 1K < n < 10K (2,808 samples)
- **Splits:** a single `test` split (`data/test-*`)

## 📘 Citation

If you use this benchmark in your research, please cite: