zhouchangda committed
Commit 7bf0603 · Parent: d9bfd92
update readme

README.md CHANGED
@@ -1,13 +1,17 @@
 # Real5-OmniDocBench
 
-<div align="center">
-[[📜 arXiv]](https://arxiv.org/pdf/2603.04205) | [[Dataset (🤗Hugging Face)]](https://huggingface.co/datasets/PaddlePaddle/Real5-OmniDocBench)
-</div>
-
 **Real5-OmniDocBench** is a brand-new benchmark oriented toward real-world scenarios, constructed on top of the OmniDocBench v1.5 dataset. The dataset comprises five distinct scenarios: Scanning, Warping, Screen-Photography, Illumination, and Skew. Apart from the Scanning category, all images were manually acquired via handheld mobile devices to closely simulate real-world conditions. Each subset maintains a one-to-one correspondence with the original OmniDocBench, strictly adhering to its ground-truth annotations and evaluation protocols. Given its empirical and realistic nature, this dataset serves as a rigorous benchmark for assessing the robustness of document parsing models in practical applications.
 
 ---
 
+## Updates
+
+- [2026/03/05] Release paper and update metrics
+  - The paper has been released on [arXiv](https://arxiv.org/abs/2512.03069).
+  - Update DeepSeek-OCR 2 and GLM-OCR model evaluations.
+
+---
+
 ## Key Features
 
 ### 1. Real-world Scenarios
@@ -1330,4 +1334,8 @@ If you use Real5-OmniDocBench in your research, please cite our dataset paper an
   primaryClass={cs.CV},
   url={https://arxiv.org/abs/2603.04205},
 }
-```
+```
+
+## Links
+
+- Paper: [Real5-OmniDocBench](https://arxiv.org/pdf/2603.04205)