# Real5-OmniDocBench
**Real5-OmniDocBench** is a new benchmark for real-world scenarios, built on the OmniDocBench v1.5 dataset. It comprises five distinct scenarios: Scanning, Warping, Screen-Photography, Illumination, and Skew. Except for the Scanning subset, all images were captured manually with handheld mobile devices to closely simulate real-world conditions. Each subset corresponds one-to-one with the original OmniDocBench pages and strictly follows its ground-truth annotations and evaluation protocols. Given its realistic nature, the dataset serves as a rigorous benchmark for assessing the robustness of document parsing models in practical applications.
---
## Key Features
### 1. Real-world Scenarios
Real5-OmniDocBench contains five challenging and representative photographic scenarios:
* **Scanning**: Images captured by scanning devices, simulating flat and clean document scans.
* **Warping**: Photos of documents with visible page curvatures, mimicking distortions caused by book spines or folding.
* **Skew**: Documents photographed at an angle, resulting in perspective distortion.
* **Screen-Photography**: Photographs of screens displaying documents, introducing moiré patterns and reflections.
* **Illumination**: Images taken under varied lighting conditions, including shadows and glare.
### 2. Comprehensive Coverage
For each scenario, the dataset contains **1,355 images**, corresponding exactly to the original pages in the OmniDocBench evaluation set. This enables direct, controlled comparison of model performance across different real-world conditions.
### 3. High-Quality Realism
All images are captured manually under real-world conditions. Care was taken to cover typical distortions and artifacts encountered in mobile or camera-based document digitization.
### 4. Evaluation Protocol Compatibility
The evaluation scripts and metrics from OmniDocBench (e.g., Normalized Edit Distance, BLEU, METEOR, TEDS, COCODet) are fully compatible and directly applicable to Real5-OmniDocBench.
<div align="center">
<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/paddleocr_vl_1_5/Real5-OmniDocBench-sample.png" width="800"/>
</div>
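To make the text metric concrete: OmniDocBench's text score is an edit distance normalized by string length (lower is better; the authoritative implementation is in the OmniDocBench repository, so this pure-Python sketch is only an illustration, and the helper names are hypothetical):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance with unit-cost
    # insertions, deletions, and substitutions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_edit_distance(pred: str, gt: str) -> float:
    # Edit distance divided by the longer string's length,
    # so 0.0 means an exact match and 1.0 a complete mismatch.
    if not pred and not gt:
        return 0.0
    return levenshtein(pred, gt) / max(len(pred), len(gt))
```

For instance, `normalized_edit_distance("kitten", "sitting")` is 3/7, since three single-character edits separate the strings.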
---
## Dataset Structure
The dataset follows the directory structure shown below:
```text
Real5-OmniDocBench/
├── Real5-OmniDocBench-Warping/
│ ├── book_en_[搬书匠#20][HTML5 Canvas].2011.英文版_page_208.png
│ └── ...
├── Real5-OmniDocBench-Illumination/
│ └── ...
├── Real5-OmniDocBench-Scanning/
│ └── ...
├── Real5-OmniDocBench-Screen-Photography/
│ └── ...
└── Real5-OmniDocBench-Skew/
└── ...
```
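A minimal sketch for enumerating the five subsets, assuming the dataset has been extracted locally under `Real5-OmniDocBench/` (the root path and helper name are illustrative, not part of any official tooling):

```python
from pathlib import Path

# Hypothetical local path to the extracted dataset; adjust to your setup.
ROOT = Path("Real5-OmniDocBench")
SCENARIOS = ["Warping", "Illumination", "Scanning", "Screen-Photography", "Skew"]

def pages_by_scenario(root: Path = ROOT) -> dict[str, list[Path]]:
    # Collect the sorted image list for every scenario subdirectory.
    return {
        s: sorted((root / f"Real5-OmniDocBench-{s}").glob("*.png"))
        for s in SCENARIOS
    }
```

Because each subset keeps the original OmniDocBench page filenames, the lists returned for different scenarios can be zipped together to compare a model's output on the same page under different distortions.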
## Usage & Evaluation
* **Input:** For each scenario, models should take the corresponding set of 1,355 images as input.
* **Output:** Model predictions should follow the same format as OmniDocBench, e.g., Markdown for end-to-end parsing.
* **Evaluation:** Use the official OmniDocBench evaluation scripts and metrics for assessment. Direct comparison across scenarios is encouraged to measure model robustness.
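Once per-scenario scores are computed, a simple way to quantify robustness is the drop relative to the cleanest scenario. As an example (the aggregation is just an illustrative convention, using the PaddleOCR-VL-1.5 Overall scores reported in the tables in this README):

```python
# Overall scores for PaddleOCR-VL-1.5, taken from the benchmark tables.
OVERALL = {
    "Scanning": 93.43,
    "Warping": 91.25,
    "Screen-Photography": 91.76,
    "Illumination": 92.16,
    "Skew": 91.66,
}

def robustness_summary(overall: dict[str, float],
                       reference: str = "Scanning") -> dict[str, float]:
    # Score drop of each scenario relative to the reference scenario;
    # smaller drops indicate a more robust model.
    ref = overall[reference]
    return {s: round(ref - v, 2) for s, v in overall.items()}
```

Here the largest drop is 2.18 points (Warping), suggesting this model degrades only mildly under physical distortions.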
---
## Benchmark Results
To illustrate the impact of real-world visual distortions on document parsing, we evaluated several mainstream models across all five scenarios in Real5-OmniDocBench, using the same metrics as OmniDocBench: **Overall↑**, **Text<sup>Edit</sup>↓**, **Formula<sup>CDM</sup>↑**, **Table<sup>TEDS</sup>↑**, and **Reading Order<sup>Edit</sup>↓**.
### 1. Scanning
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<th><strong>Model Type</strong></th>
<th><strong>Methods</strong></th>
<th><strong>Parameters</strong></th>
<th><strong>Overall&#x2191;</strong></th>
<th><strong>Text<sup>Edit</sup>&#x2193;</strong></th>
<th><strong>Formula<sup>CDM</sup>&#x2191;</strong></th>
<th><strong>Table<sup>TEDS</sup>&#x2191;</strong></th>
<th><strong>Reading Order<sup>Edit</sup>&#x2193;</strong></th>
</tr>
<tr>
<td rowspan="2">Pipeline Tools</td>
<td>Marker-1.8.2</td>
<td>-</td>
<td>70.27</td>
<td>0.223</td>
<td>77.03</td>
<td>56.05</td>
<td>0.238</td>
</tr>
<tr>
<td>PP-StructureV3</td>
<td>-</td>
<td>84.68</td>
<td>0.094</td>
<td>84.34</td>
<td>79.06</td>
<td>0.092</td>
</tr>
<tr>
<td rowspan="5">General VLMs</td>
<td>GPT-5.2</td>
<td>-</td>
<td>84.43</td>
<td>0.142</td>
<td>85.68</td>
<td>81.78</td>
<td>0.109</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>72B</td>
<td>86.19</td>
<td>0.110</td>
<td>86.14</td>
<td>83.41</td>
<td>0.114</td>
</tr>
<tr>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>89.25</td>
<td>0.073</td>
<td>87.44</td>
<td>87.62</td>
<td>0.098</td>
</tr>
<tr>
<td>Qwen3-VL-235B-A22B-Instruct</td>
<td>235B</td>
<td>89.43</td>
<td>0.059</td>
<td>89.01</td>
<td>85.19</td>
<td>0.066</td>
</tr>
<tr>
<td>Gemini-3 Pro</td>
<td>-</td>
<td>89.47</td>
<td>0.071</td>
<td>88.16</td>
<td>87.37</td>
<td>0.078</td>
</tr>
<tr>
<td rowspan="12">Specialized VLMs</td>
<td>Dolphin</td>
<td>322M</td>
<td>72.16</td>
<td>0.154</td>
<td>64.58</td>
<td>67.27</td>
<td>0.130</td>
</tr>
<tr>
<td>Dolphin-1.5 </td>
<td>0.3B</td>
<td>83.39</td>
<td>0.097</td>
<td>76.25</td>
<td>83.65</td>
<td>0.090</td>
</tr>
<tr>
<td>MinerU2-VLM</td>
<td>0.9B</td>
<td>83.60</td>
<td>0.094</td>
<td>79.76</td>
<td>80.44</td>
<td>0.091</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B </td>
<td>1.9B</td>
<td>84.64</td>
<td>0.123</td>
<td>84.17</td>
<td>82.13</td>
<td>0.145</td>
</tr>
<tr>
<td>MonkeyOCR-3B </td>
<td>3.7B</td>
<td>84.65</td>
<td>0.100</td>
<td>84.16</td>
<td>79.81</td>
<td>0.143</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>3B</td>
<td>85.52</td>
<td>0.106</td>
<td>88.09</td>
<td>79.11</td>
<td>0.106</td>
</tr>
<tr>
<td>Deepseek-OCR</td>
<td>3B</td>
<td>86.17</td>
<td>0.078</td>
<td>83.59</td>
<td>82.69</td>
<td>0.085</td>
</tr>
<tr>
<td>dots.ocr</td>
<td>3B</td>
<td>86.87</td>
<td>0.083</td>
<td>83.27</td>
<td>85.68</td>
<td>0.081</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B </td>
<td>3.7B</td>
<td>86.94</td>
<td>0.103</td>
<td>86.29</td>
<td>84.86</td>
<td>0.141</td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>90.06</td>
<td>0.052</td>
<td>88.22</td>
<td>87.16</td>
<td>0.050</td>
</tr>
<tr>
<td>PaddleOCR-VL</td>
<td>0.9B</td>
<td><u>92.11</u></td>
<td><u>0.039</u></td>
<td><u>90.35</u></td>
<td><u>89.90</u></td>
<td><u>0.048</u></td>
</tr>
<tr>
<td><strong>PaddleOCR-VL-1.5</strong></td>
<td>0.9B</td>
<td><strong>93.43</strong></td>
<td><strong>0.037</strong></td>
<td><strong>93.04</strong></td>
<td><strong>90.97</strong></td>
<td><strong>0.045</strong></td>
</tr>
</table>
*In the scanning scenario, overall performance remains high across models, with PaddleOCR-VL-1.5 achieving the best results.*
---
### 2. Warping
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<th><strong>Model Type</strong></th>
<th><strong>Methods</strong></th>
<th><strong>Parameters</strong></th>
<th><strong>Overall&#x2191;</strong></th>
<th><strong>Text<sup>Edit</sup>&#x2193;</strong></th>
<th><strong>Formula<sup>CDM</sup>&#x2191;</strong></th>
<th><strong>Table<sup>TEDS</sup>&#x2191;</strong></th>
<th><strong>Reading Order<sup>Edit</sup>&#x2193;</strong></th>
</tr>
<tr>
<td rowspan="2">Pipeline Tools</td>
<td>Marker-1.8.2</td>
<td>-</td>
<td>58.98</td>
<td>0.349</td>
<td>72.71</td>
<td>39.08</td>
<td>0.390</td>
</tr>
<tr>
<td>PP-StructureV3</td>
<td>-</td>
<td>59.34</td>
<td>0.376</td>
<td>68.22</td>
<td>47.40</td>
<td>0.261</td>
</tr>
<tr>
<td rowspan="5">General VLMs</td>
<td>GPT-5.2</td>
<td>-</td>
<td>76.26</td>
<td>0.239</td>
<td>80.90</td>
<td>71.80</td>
<td>0.165</td>
</tr>
<tr>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>87.63</td>
<td>0.092</td>
<td>86.50</td>
<td>85.59</td>
<td>0.109</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>72B</td>
<td>87.77</td>
<td>0.086</td>
<td>88.85</td>
<td>83.06</td>
<td>0.102</td>
</tr>
<tr>
<td>Gemini-3 Pro</td>
<td>-</td>
<td>88.90</td>
<td>0.086</td>
<td>88.10</td>
<td><u>87.20</u></td>
<td>0.087</td>
</tr>
<tr>
<td>Qwen3-VL-235B-A22B-Instruct</td>
<td>235B</td>
<td><u>89.99</u></td>
<td><strong>0.051</strong></td>
<td><u>89.06</u></td>
<td>85.95</td>
<td><u>0.064</u></td>
</tr>
<tr>
<td rowspan="12">Specialized VLMs</td>
<td>Dolphin-1.5 </td>
<td>0.3B</td>
<td>50.50</td>
<td>0.383</td>
<td>47.24</td>
<td>42.52</td>
<td>0.309</td>
</tr>
<tr>
<td>Dolphin</td>
<td>322M</td>
<td>60.35</td>
<td>0.316</td>
<td>61.06</td>
<td>51.58</td>
<td>0.247</td>
</tr>
<tr>
<td>Deepseek-OCR</td>
<td>3B</td>
<td>67.20</td>
<td>0.328</td>
<td>73.59</td>
<td>60.80</td>
<td>0.226</td>
</tr>
<tr>
<td>MinerU2-VLM</td>
<td>0.9B</td>
<td>73.73</td>
<td>0.202</td>
<td>77.72</td>
<td>63.65</td>
<td>0.173</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B </td>
<td>1.9B</td>
<td>76.59</td>
<td>0.196</td>
<td>78.85</td>
<td>70.52</td>
<td>0.221</td>
</tr>
<tr>
<td>MonkeyOCR-3B </td>
<td>3.7B</td>
<td>77.27</td>
<td>0.164</td>
<td>79.08</td>
<td>69.18</td>
<td>0.211</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B </td>
<td>3.7B</td>
<td>78.90</td>
<td>0.168</td>
<td>79.55</td>
<td>73.94</td>
<td>0.212</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>3B</td>
<td>83.56</td>
<td>0.121</td>
<td>86.24</td>
<td>76.57</td>
<td>0.124</td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>83.76</td>
<td>0.154</td>
<td>85.92</td>
<td>80.71</td>
<td>0.104</td>
</tr>
<tr>
<td>PaddleOCR-VL </td>
<td>0.9B</td>
<td>85.97</td>
<td>0.093</td>
<td>85.45</td>
<td>81.77</td>
<td>0.092</td>
</tr>
<tr>
<td>dots.ocr</td>
<td>3B</td>
<td>86.01</td>
<td>0.087</td>
<td>85.03</td>
<td>81.74</td>
<td>0.093</td>
</tr>
<tr>
<td><strong>PaddleOCR-VL-1.5</strong></td>
<td>0.9B</td>
<td><strong>91.25</strong></td>
<td><u>0.053</u></td>
<td><strong>90.94</strong></td>
<td><strong>88.10</strong></td>
<td><strong>0.063</strong></td>
</tr>
</table>
*Warped documents introduce moderate challenges and cause a slight performance drop, but specialized VLMs maintain competitive accuracy.*
---
### 3. Screen-Photography
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<th><strong>Model Type</strong></th>
<th><strong>Methods</strong></th>
<th><strong>Parameters</strong></th>
<th><strong>Overall&#x2191;</strong></th>
<th><strong>Text<sup>Edit</sup>&#x2193;</strong></th>
<th><strong>Formula<sup>CDM</sup>&#x2191;</strong></th>
<th><strong>Table<sup>TEDS</sup>&#x2191;</strong></th>
<th><strong>Reading Order<sup>Edit</sup>&#x2193;</strong></th>
</tr>
<tr>
<td rowspan="2">Pipeline Tools</td>
<td>Marker-1.8.2</td>
<td>-</td>
<td>63.65</td>
<td>0.290</td>
<td>72.73</td>
<td>47.21</td>
<td>0.325</td>
</tr>
<tr>
<td>PP-StructureV3</td>
<td>-</td>
<td>66.89</td>
<td>0.204</td>
<td>73.26</td>
<td>47.82</td>
<td>0.165</td>
</tr>
<tr>
<td rowspan="5">General VLMs</td>
<td>GPT-5.2</td>
<td>-</td>
<td>76.75</td>
<td>0.208</td>
<td>79.27</td>
<td>71.73</td>
<td>0.148</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>72B</td>
<td>86.48</td>
<td>0.100</td>
<td>87.46</td>
<td>82.00</td>
<td>0.102</td>
</tr>
<tr>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>87.11</td>
<td>0.103</td>
<td>85.30</td>
<td>86.31</td>
<td>0.117</td>
</tr>
<tr>
<td>Gemini-3 Pro</td>
<td>-</td>
<td>88.86</td>
<td>0.084</td>
<td>87.33</td>
<td><u>87.65</u></td>
<td>0.087</td>
</tr>
<tr>
<td>Qwen3-VL-235B-A22B-Instruct</td>
<td>235B</td>
<td><u>89.27</u></td>
<td><u>0.068</u></td>
<td><u>88.72</u></td>
<td>85.85</td>
<td>0.071</td>
</tr>
<tr>
<td rowspan="12">Specialized VLMs</td>
<td>Dolphin</td>
<td>322M</td>
<td>64.29</td>
<td>0.232</td>
<td>58.66</td>
<td>57.38</td>
<td>0.195</td>
</tr>
<tr>
<td>Dolphin-1.5 </td>
<td>0.3B</td>
<td>69.76</td>
<td>0.205</td>
<td>61.80</td>
<td>68.00</td>
<td>0.177</td>
</tr>
<tr>
<td>Deepseek-OCR</td>
<td>3B</td>
<td>75.31</td>
<td>0.220</td>
<td>77.68</td>
<td>70.26</td>
<td>0.169</td>
</tr>
<tr>
<td>MinerU2-VLM</td>
<td>0.9B</td>
<td>78.77</td>
<td>0.139</td>
<td>79.02</td>
<td>71.17</td>
<td>0.123</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B </td>
<td>1.9B</td>
<td>80.24</td>
<td>0.148</td>
<td>80.78</td>
<td>74.74</td>
<td>0.179</td>
</tr>
<tr>
<td>MonkeyOCR-3B </td>
<td>3.7B</td>
<td>80.71</td>
<td>0.122</td>
<td>81.33</td>
<td>73.04</td>
<td>0.177</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B </td>
<td>3.7B</td>
<td>82.44</td>
<td>0.124</td>
<td>81.55</td>
<td>78.13</td>
<td>0.177</td>
</tr>
<tr>
<td>PaddleOCR-VL </td>
<td>0.9B</td>
<td>82.54</td>
<td>0.103</td>
<td>83.58</td>
<td>74.36</td>
<td>0.107</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>3B</td>
<td>84.86</td>
<td>0.112</td>
<td>86.65</td>
<td>79.09</td>
<td>0.117</td>
</tr>
<tr>
<td>dots.ocr</td>
<td>3B</td>
<td>87.18</td>
<td>0.081</td>
<td>85.34</td>
<td>84.26</td>
<td>0.079</td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>89.41</td>
<td>0.062</td>
<td>87.55</td>
<td>86.83</td>
<td><strong>0.053</strong></td>
</tr>
<tr>
<td><strong>PaddleOCR-VL-1.5</strong></td>
<td>0.9B</td>
<td><strong>91.76</strong></td>
<td><strong>0.050</strong></td>
<td><strong>90.88</strong></td>
<td><strong>89.38</strong></td>
<td><u>0.059</u></td>
</tr>
</table>
---
### 4. Illumination
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<th><strong>Model Type</strong></th>
<th><strong>Methods</strong></th>
<th><strong>Parameters</strong></th>
<th><strong>Overall&#x2191;</strong></th>
<th><strong>Text<sup>Edit</sup>&#x2193;</strong></th>
<th><strong>Formula<sup>CDM</sup>&#x2191;</strong></th>
<th><strong>Table<sup>TEDS</sup>&#x2191;</strong></th>
<th><strong>Reading Order<sup>Edit</sup>&#x2193;</strong></th>
</tr>
<tr>
<td rowspan="2">Pipeline Tools</td>
<td>Marker-1.8.2</td>
<td>-</td>
<td>66.31</td>
<td>0.259</td>
<td>74.80</td>
<td>50.03</td>
<td>0.337</td>
</tr>
<tr>
<td>PP-StructureV3</td>
<td>-</td>
<td>73.38</td>
<td>0.158</td>
<td>77.75</td>
<td>58.19</td>
<td>0.126</td>
</tr>
<tr>
<td rowspan="5">General VLMs</td>
<td>GPT-5.2</td>
<td>-</td>
<td>80.88</td>
<td>0.191</td>
<td>84.41</td>
<td>77.37</td>
<td>0.134</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>72B</td>
<td>87.25</td>
<td>0.087</td>
<td>86.44</td>
<td>84.03</td>
<td>0.097</td>
</tr>
<tr>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>87.97</td>
<td>0.083</td>
<td>86.13</td>
<td>86.11</td>
<td>0.103</td>
</tr>
<tr>
<td>Qwen3-VL-235B-A22B-Instruct</td>
<td>235B</td>
<td>89.27</td>
<td>0.060</td>
<td>87.81</td>
<td>86.05</td>
<td>0.070</td>
</tr>
<tr>
<td>Gemini-3 Pro</td>
<td>-</td>
<td>89.53</td>
<td>0.073</td>
<td>87.78</td>
<td>88.14</td>
<td>0.080</td>
</tr>
<tr>
<td rowspan="12">Specialized VLMs</td>
<td>Dolphin</td>
<td>322M</td>
<td>67.29</td>
<td>0.197</td>
<td>61.42</td>
<td>60.10</td>
<td>0.173</td>
</tr>
<tr>
<td>Dolphin-1.5 </td>
<td>0.3B</td>
<td>75.61</td>
<td>0.159</td>
<td>70.04</td>
<td>72.69</td>
<td>0.133</td>
</tr>
<tr>
<td>Deepseek-OCR</td>
<td>3B</td>
<td>78.10</td>
<td>0.192</td>
<td>81.71</td>
<td>71.81</td>
<td>0.156</td>
</tr>
<tr>
<td>MinerU2-VLM</td>
<td>0.9B</td>
<td>80.51</td>
<td>0.135</td>
<td>80.72</td>
<td>74.29</td>
<td>0.123</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B </td>
<td>1.9B</td>
<td>82.11</td>
<td>0.144</td>
<td>82.07</td>
<td>78.67</td>
<td>0.172</td>
</tr>
<tr>
<td>MonkeyOCR-3B </td>
<td>3.7B</td>
<td>83.16</td>
<td>0.118</td>
<td>83.63</td>
<td>77.62</td>
<td>0.168</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B </td>
<td>3.7B</td>
<td>84.71</td>
<td>0.120</td>
<td>84.13</td>
<td>82.02</td>
<td>0.171</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>3B</td>
<td>85.01</td>
<td>0.099</td>
<td>87.94</td>
<td>76.96</td>
<td>0.112</td>
</tr>
<tr>
<td>dots.ocr</td>
<td>3B</td>
<td>87.57</td>
<td>0.068</td>
<td>85.07</td>
<td>84.44</td>
<td>0.076</td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>89.57</td>
<td>0.065</td>
<td><u>88.36</u></td>
<td>86.87</td>
<td>0.062</td>
</tr>
<tr>
<td>PaddleOCR-VL </td>
<td>0.9B</td>
<td><u>89.61</u></td>
<td><u>0.049</u></td>
<td>86.66</td>
<td><u>87.02</u></td>
<td><u>0.055</u></td>
</tr>
<tr>
<td><strong>PaddleOCR-VL-1.5</strong></td>
<td>0.9B</td>
<td><strong>92.16</strong></td>
<td><strong>0.046</strong></td>
<td><strong>91.80</strong></td>
<td><strong>89.33</strong></td>
<td><strong>0.051</strong></td>
</tr>
</table>
*Severe illumination variations present significant challenges, resulting in a noticeable performance drop for most models.*
---
### 5. Skew
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<th><strong>Model Type</strong></th>
<th><strong>Methods</strong></th>
<th><strong>Parameters</strong></th>
<th><strong>Overall&#x2191;</strong></th>
<th><strong>Text<sup>Edit</sup>&#x2193;</strong></th>
<th><strong>Formula<sup>CDM</sup>&#x2191;</strong></th>
<th><strong>Table<sup>TEDS</sup>&#x2191;</strong></th>
<th><strong>Reading Order<sup>Edit</sup>&#x2193;</strong></th>
</tr>
<tr>
<td rowspan="2">Pipeline Tools</td>
<td>PP-StructureV3</td>
<td>-</td>
<td>37.98</td>
<td>0.557</td>
<td>44.37</td>
<td>25.27</td>
<td>0.417</td>
</tr>
<tr>
<td>Marker-1.8.2</td>
<td>-</td>
<td>41.27</td>
<td>0.536</td>
<td>60.16</td>
<td>17.23</td>
<td>0.543</td>
</tr>
<tr>
<td rowspan="5">General VLMs</td>
<td>GPT-5.2</td>
<td>-</td>
<td>75.00</td>
<td>0.257</td>
<td>80.27</td>
<td>70.47</td>
<td>0.167</td>
</tr>
<tr>
<td>Qwen3-VL-235B-A22B-Instruct</td>
<td>235B</td>
<td>86.56</td>
<td><u>0.077</u></td>
<td>83.96</td>
<td>83.41</td>
<td><u>0.091</u></td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>72B</td>
<td>86.90</td>
<td><u>0.077</u></td>
<td>87.26</td>
<td>81.14</td>
<td><u>0.091</u></td>
</tr>
<tr>
<td>Gemini-2.5 Pro</td>
<td>-</td>
<td>89.07</td>
<td><u>0.077</u></td>
<td>87.89</td>
<td>86.99</td>
<td>0.104</td>
</tr>
<tr>
<td>Gemini-3 Pro</td>
<td>-</td>
<td><u>89.45</u></td>
<td>0.080</td>
<td><u>88.33</u></td>
<td><u>88.06</u></td>
<td>0.092</td>
</tr>
<tr>
<td rowspan="12">Specialized VLMs</td>
<td>Dolphin-1.5 </td>
<td>0.3B</td>
<td>28.16</td>
<td>0.553</td>
<td>25.60</td>
<td>14.18</td>
<td>0.419</td>
</tr>
<tr>
<td>Dolphin</td>
<td>322M</td>
<td>44.83</td>
<td>0.500</td>
<td>51.34</td>
<td>33.22</td>
<td>0.321</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B </td>
<td>1.9B</td>
<td>62.18</td>
<td>0.292</td>
<td>66.25</td>
<td>49.46</td>
<td>0.317</td>
</tr>
<tr>
<td>Deepseek-OCR</td>
<td>3B</td>
<td>63.01</td>
<td>0.327</td>
<td>73.27</td>
<td>48.48</td>
<td>0.231</td>
</tr>
<tr>
<td>MonkeyOCR-pro-3B </td>
<td>3.7B</td>
<td>64.47</td>
<td>0.251</td>
<td>69.06</td>
<td>49.42</td>
<td>0.301</td>
</tr>
<tr>
<td>MonkeyOCR-3B </td>
<td>3.7B</td>
<td>65.67</td>
<td>0.248</td>
<td>69.23</td>
<td>52.59</td>
<td>0.300</td>
</tr>
<tr>
<td>MinerU2-VLM</td>
<td>0.9B</td>
<td>68.16</td>
<td>0.230</td>
<td>74.45</td>
<td>53.07</td>
<td>0.191</td>
</tr>
<tr>
<td>MinerU2.5</td>
<td>1.2B</td>
<td>75.24</td>
<td>0.305</td>
<td>81.78</td>
<td>74.39</td>
<td>0.151</td>
</tr>
<tr>
<td>PaddleOCR-VL </td>
<td>0.9B</td>
<td>77.47</td>
<td>0.192</td>
<td>78.81</td>
<td>72.83</td>
<td>0.193</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>3B</td>
<td>81.98</td>
<td>0.121</td>
<td>85.78</td>
<td>72.22</td>
<td>0.133</td>
</tr>
<tr>
<td>dots.ocr</td>
<td>3B</td>
<td>84.27</td>
<td>0.087</td>
<td>85.73</td>
<td>75.74</td>
<td>0.094</td>
</tr>
<tr>
<td><strong>PaddleOCR-VL-1.5</strong></td>
<td>0.9B</td>
<td><strong>91.66</strong></td>
<td><strong>0.047</strong></td>
<td><strong>91.00</strong></td>
<td><strong>88.69</strong></td>
<td><strong>0.061</strong></td>
</tr>
</table>
*Skew substantially degrades accuracy for most models, but the top-performing models still deliver strong results.*
---
## Acknowledgements
Real5-OmniDocBench is based on the original OmniDocBench dataset and adopts its evaluation protocols. We thank the authors of [OmniDocBench](https://github.com/opendatalab/OmniDocBench) for their foundational work.
---
## Citation
If you use Real5-OmniDocBench in your research, please cite our dataset and also refer to the original OmniDocBench paper.