# OCRGenBench: A Comprehensive Benchmark for Evaluating OCR Generative Capabilities
## Dataset Access

This dataset is gated. To download OCRGenBench, please submit an access request:

- **Apply for Access**: click the **Access** button on the dataset page.
- Applications are automatically approved. You will receive an email confirmation once access is granted.
## Overview

OCRGenBench is the most comprehensive benchmark to date for evaluating the OCR generative capabilities of generative models. It is the first to unify three task families:

- **T2I Generation**: text-to-image synthesis with accurate visual text
- **Text Editing**: precise modification of text within images
- **OCR I2I Translation**: OCR-related image-to-image translation

The benchmark covers 5 common text categories and 33 OCR generative tasks, comprising 1,060 challenging, human-annotated samples with dense text, varied layouts, multiple aspect ratios, and bilingual (English/Chinese) content.

We also design OCRGenScore, a unified metric that assesses text accuracy, instruction following, visual quality, and structural consistency in visual text synthesis.
## Data Categorization

OCRGenBench encompasses five major text scenarios and 33 OCR generative tasks.
## Data Distribution

OCRGenBench includes 1,060 high-quality, manually annotated samples.
## Leaderboard

Performance across tasks (main leaderboard).

View the full interactive leaderboard: OCRGenBench Leaderboard
## Citation

If you find our work helpful, please cite our paper:

```bibtex
@article{zhang2025ocrgenbench,
  title={{OCRGenBench: A Comprehensive Benchmark for Evaluating OCR Generative Capabilities}},
  author={Zhang, Peirong and Xu, Haowei and Zhang, Jiaxin and Zheng, Xuhan and Xu, Guitao and Zhang, Yuyi and Liu, Junle and Yang, Zhenhua and Zhou, Wei and Jin, Lianwen},
  journal={arXiv preprint arXiv:2507.15085},
  year={2025}
}
```
## Contact

For questions about the dataset: eeprzhang@mail.scut.edu.cn
## Acknowledgement

Copyright 2025–2026, Deep Learning and Vision Computing (DLVC) Lab, South China University of Technology.