---
license: apache-2.0
task_categories:
  - image-to-text
language:
  - en
  - zh
  - ja
  - ko
  - ar
  - hi
  - de
  - fr
  - es
  - pt
  - ru
  - th
  - vi
  - it
  - nl
  - pl
  - tr
  - sv
  - cs
  - ro
  - da
  - fi
  - hu
  - el
  - bg
  - uk
  - hr
  - sk
  - sl
  - lt
  - lv
  - et
  - mt
  - ga
  - ms
  - id
  - tl
  - sw
  - am
  - bn
  - ta
  - te
  - kn
  - ml
  - gu
  - mr
  - pa
  - ur
  - ne
  - si
  - my
size_categories:
  - 10M<n<100M
tags:
  - ocr
  - multilingual
  - document-ai
  - text-recognition
  - scene-text
pretty_name: 'OCR-MLT-50M: Multilingual OCR Corpus (50 Million Samples)'
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
    - name: language
      dtype: string
    - name: script
      dtype: string
    - name: source_type
      dtype: string
    - name: confidence
      dtype: float64
---

# OCR-MLT-50M: Multilingual OCR Corpus

A large-scale multilingual OCR dataset spanning 50 languages and 50.2 million image-text pairs.
Designed for training and evaluating robust multilingual text recognition systems across diverse scripts and domains.

📄 Paper | 🤗 Model | 🔥 Demo | 💻 GitHub | 🏆 Leaderboard | 📊 Weights & Biases


## 🔥 News

- **[2025-11-15]** OCR-MLT-50M is now available on Hugging Face! Download here
- **[2025-10-28]** Our paper has been accepted at CVPR 2025! Camera-ready version
- **[2025-09-10]** Released v2 model weights with improved CJK performance. Model card
- **[2025-08-01]** Pre-trained checkpoints available for all 50 languages. Download

## Overview

| Stat | Value |
|---|---|
| Total samples | 50,217,843 |
| Languages | 50 |
| Scripts | 14 (Latin, CJK, Arabic, Devanagari, Cyrillic, ...) |
| Source types | Scene text, documents, handwritten, receipts, signage |
| Avg. image resolution | 384 × 128 px |
| Storage (compressed) | ~2.3 TB |

## Language Distribution

*(A full interactive breakdown by language and script family is available on the dataset page.)*

## Sample Visualizations

*(Example samples: scene text (EN), document (JA), handwritten (AR).)*

## Data Collection Pipeline

Samples were collected from three primary sources:

  1. **Synthetic rendering** — text rendered onto natural backgrounds using 2,400+ fonts per script
  2. **Web-crawled scene text** — filtered and deduplicated from Common Crawl with PaddleOCR pseudo-labels
  3. **Scanned documents** — partnerships with national libraries and digitization initiatives

All pseudo-labels were verified using a multi-model consensus approach (TrOCR + PaddleOCR + EasyOCR), retaining only samples with ≥2/3 agreement. Full methodology in our technical report.

## Quick Start

```python
from datasets import load_dataset

# Stream a single language split without downloading the full corpus
ds = load_dataset("interfaze-ai/ocr-mlt-50m", "en", split="train", streaming=True)

for sample in ds:
    print(sample["text"], sample["language"])
    break
```

## Benchmarks

Models fine-tuned on OCR-MLT-50M vs. existing public corpora:

| Model | MLT-2019 (F1) | IC15 (Acc) | CUTE80 (Acc) | Details |
|---|---|---|---|---|
| TrOCR-large + Ours | 87.3 | 96.1 | 94.7 | Config & Weights |
| PARSeq + Ours | 88.1 | 96.8 | 95.2 | Config & Weights |
| CLIP4STR + Ours | 89.6 | 97.2 | 96.0 | Config & Weights |
| Baseline (MJSynth+ST) | 79.4 | 94.2 | 87.8 | — |

Full evaluation scripts and configs: GitHub

## Shards

Data is split into per-language shards. See the file listing for the full manifest.
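To fetch only selected languages, `huggingface_hub`'s `snapshot_download` accepts `allow_patterns` globs. The `data/<lang>/` directory layout assumed below is hypothetical; check the repo's actual file listing for the real shard paths:

```python
def shard_patterns(languages: list[str]) -> list[str]:
    """Build `allow_patterns` globs for per-language shard downloads.
    The data/<lang>/ layout is an assumption to verify against the manifest."""
    return [f"data/{lang}/*" for lang in languages]

# e.g. snapshot_download("interfaze-ai/ocr-mlt-50m", repo_type="dataset",
#                        allow_patterns=shard_patterns(["en", "ja"]))
print(shard_patterns(["en", "ja"]))  # → ['data/en/*', 'data/ja/*']
```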

## Citation

```bibtex
@inproceedings{kumar2025ocrmlt,
  title={OCR-MLT-50M: Scaling Multilingual Text Recognition with Synthetic-Real Hybrid Corpora},
  author={Kumar, Arjun and Nakamura, Yui and Al-Rashid, Fatima and M{\"u}ller, Jonas},
  booktitle={Proceedings of CVPR 2025},
  year={2025},
  pages={11234--11245}
}
```

## License

Apache 2.0 — see LICENSE for details.