Add overall OlmOCRBench results
OlmOCRBench was recently updated to display "Overall" results by default; this PR ensures your model shows its score on the leaderboard.
It will show up here: https://huggingface.co/datasets/allenai/olmOCR-bench.
Hi Niels,
Thanks for adding this!
There is one important distinction for the "Overall" results: since the "Headers & Footers" category rewards ignoring/not outputting visible text, we chose to exclude it from the Overall average. In fact, in the RLVR setup we try to minimize the H&F score so that the model does full-page transcription, including page headers, footers, and page numbers.
I think we should have an Overall without this metric, since at first sight it's a bit misleading.
Ok, thanks for clarifying. Note that the evaluation feature includes a "notes" field where you can specify additional information. I have updated this PR to reflect that.
For now I'd use the "notes" field; I'll discuss with AllenAI about potentially creating a separate leaderboard/task for it.
Btw, would you be up for helping us add GLM-OCR to the leaderboard as well? Happy to set up a Slack channel with you.
Great!
Happy to help with benchmarking GLM-OCR; I've been meaning to do so, just didn't have time before.
@nielsr
I tried benchmarking GLM-OCR on olmOCR-bench today. It proved quite challenging: GLM-OCR is a two-stage pipeline (layout analysis + region recognition) rather than an end-to-end model. There are no official standalone inference scripts; the intended workflow relies on their SDK, which integrates PP-DocLayoutV3 for layout detection and routes each region to the appropriate task prompt (text, formula, or table).
As a first pass, I ran the model directly with just the "Text Recognition:" prompt on all images, using this script as reference for vLLM inference. Here are the results:
| Category | Score |
|---|---|
| headers_footers | 92.3% |
| long_tiny_text | 87.6% |
| arxiv_math | 80.4% |
| multi_column | 79.9% |
| old_scans_math | 74.9% |
| table_tests | 42.5% |
| old_scans | 39.9% |
| Overall | 71.1% ±1.1 |
| Overall (w/o H&F) | 67.5% |
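For reference, both Overall numbers can be reproduced from the per-category scores above, assuming the leaderboard's Overall is an unweighted mean over categories (the ±1.1 interval comes from the benchmark's own resampling and isn't recomputed here). A quick sketch:

```python
# Per-category scores from the table above (percent)
scores = {
    "headers_footers": 92.3,
    "long_tiny_text": 87.6,
    "arxiv_math": 80.4,
    "multi_column": 79.9,
    "old_scans_math": 74.9,
    "table_tests": 42.5,
    "old_scans": 39.9,
}

# Overall: unweighted mean over all seven categories
overall = sum(scores.values()) / len(scores)

# Overall excluding headers_footers (the category that rewards
# dropping visible text, per the discussion above)
no_hf = {k: v for k, v in scores.items() if k != "headers_footers"}
overall_wo_hf = sum(no_hf.values()) / len(no_hf)

print(f"Overall:           {overall:.1f}%")       # 71.1%
print(f"Overall (w/o H&F): {overall_wo_hf:.1f}%")  # 67.5%
```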
For better results we would need to include the layout detector, but since they don't provide it as a standalone model it's a bit of a hassle to use.
Ok, would it be easier to just use their API? I.e.:
```python
from zai import ZaiClient

# Initialize client
client = ZaiClient(api_key="your-api-key")

image_url = "https://cdn.bigmodel.cn/static/logo/introduction.png"

# Call layout parsing API
response = client.layout_parsing.create(
    model="glm-ocr",
    file=image_url
)

# Output result
print(response)
```
Yeah, it might be best to just use the API directly. I don't have a GLM API key though.
We'd be happy to give you one. Is there an email address I can reach you at to invite you to Slack? I couldn't find one on your X/GitHub profiles.
Shared via DM on Twitter/X.