# OCR to Markdown with Nanonets
Convert document images to structured markdown using Nanonets-OCR-s with vLLM acceleration.
## Quick Start
```bash
# Basic OCR conversion
uv run main.py document-images markdown-output

# With custom image column
uv run main.py scanned-docs extracted-text --image-column page

# Test with subset
uv run main.py large-dataset test-output --max-samples 100

# Run directly from Hub
uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
    input-dataset output-dataset
```
## Features
Nanonets-OCR-s excels at:
- LaTeX equations: Mathematical formulas preserved in LaTeX format
- Tables: Complex table structures converted to markdown
- Document structure: Headers, lists, and formatting maintained
- Special elements: Signatures, watermarks, and checkboxes detected
## HF Jobs Deployment
Deploy on GPU infrastructure:
```bash
hfjobs run \
  --flavor l4x1 \
  --secret HF_TOKEN=$HF_TOKEN \
  ghcr.io/astral-sh/uv:latest \
  /bin/bash -c "
  uv run https://huggingface.co/datasets/davanstrien/dataset-creation-scripts/raw/main/ocr-vllm/main.py \
    your-document-dataset \
    your-markdown-output \
    --batch-size 32 \
    --gpu-memory-utilization 0.8
  "
```
## Parameters
| Parameter | Default | Description |
|---|---|---|
| `--image-column` | `"image"` | Column containing images |
| `--batch-size` | `8` | Images per batch |
| `--model` | `nanonets/Nanonets-OCR-s` | OCR model to use |
| `--max-tokens` | `4096` | Maximum output tokens per image |
| `--gpu-memory-utilization` | `0.7` | Fraction of GPU memory to use |
| `--split` | `"train"` | Dataset split to process |
| `--max-samples` | `None` | Limit samples (for testing) |
| `--private` | `False` | Make the output dataset private |
## Examples
### Scientific Papers
```bash
uv run main.py arxiv-papers arxiv-markdown \
  --max-tokens 8192  # Longer output for equations
```
### Scanned Documents
```bash
uv run main.py historical-scans extracted-text \
  --image-column scan \
  --batch-size 4  # Lower batch for high-res images
```
### Multi-page Documents
```bash
uv run main.py pdf-pages document-text \
  --image-column page_image \
  --batch-size 16
```
## Tips
- Batch size: Reduce if encountering OOM errors
- GPU memory: Increase for better throughput
- Max tokens: Increase for long documents
- Testing: Use `--max-samples` to validate the pipeline
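The batch-size tip above comes down to how the image list is chunked before inference: smaller chunks mean less GPU memory per forward pass. A minimal sketch of that chunking (the `batched` helper is illustrative, not taken from the script):

```python
def batched(items, batch_size):
    """Yield successive fixed-size chunks; the last chunk may be smaller."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]


# 10 images at --batch-size 4 -> chunks of 4, 4, and 2 images
chunks = list(batched(list(range(10)), 4))
```

Lowering `--batch-size` only shrinks each chunk; every image is still processed, so the trade-off is throughput, not coverage.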
## Model Details
Nanonets-OCR-s (576M parameters) is optimized for:
- High-quality markdown output
- Complex document understanding
- Efficient GPU inference
- Multi-language support
For more details, see the [model card](https://huggingface.co/nanonets/Nanonets-OCR-s).