chandra-ocr-2-GGUF

Chandra-OCR-2 from Datalab is a state-of-the-art OCR model that converts images and PDFs into structured Markdown, HTML, or JSON while preserving precise layout information across 90+ languages. It reports SOTA benchmark results: 85.9% on the olmOCR benchmark and a 77.8% multilingual score (+12% over Chandra 1), with major gains in math-equation parsing, complex table reconstruction (including merged cells), handwriting recognition, form elements such as checkboxes, and wide-document layouts, alongside much-improved image captioning and diagram extraction. It is available via a free playground, a hosted API for production speed and accuracy, or local deployment through Hugging Face Transformers and vLLM, and it excels at turning challenging real-world documents (financial filings, research papers, historical scans, multilingual forms) into semantically rich structured data for downstream AI pipelines and automation workflows.

Recommended: Q8_0

Model Files

| File Name | Quant Type | File Size | Link |
| --- | --- | --- | --- |
| chandra-ocr-2-Q2_K.gguf | Q2_K | 2.12 GB | Download |
| chandra-ocr-2-Q3_K_L.gguf | Q3_K_L | 2.69 GB | Download |
| chandra-ocr-2-Q3_K_M.gguf | Q3_K_M | 2.54 GB | Download |
| chandra-ocr-2-Q3_K_S.gguf | Q3_K_S | 2.34 GB | Download |
| chandra-ocr-2-Q4_K_M.gguf | Q4_K_M | 3.07 GB | Download |
| chandra-ocr-2-Q4_K_S.gguf | Q4_K_S | 2.92 GB | Download |
| chandra-ocr-2-Q5_K_M.gguf | Q5_K_M | 3.51 GB | Download |
| chandra-ocr-2.BF16.gguf | BF16 | 9.7 GB | Download |
| chandra-ocr-2.F16.gguf | F16 | 9.7 GB | Download |
| chandra-ocr-2.F32.gguf | F32 | 19.4 GB | Download |
| chandra-ocr-2.Q8_0.gguf | Q8_0 | 5.16 GB | Download |
| chandra-ocr-2.mmproj-bf16.gguf | mmproj-bf16 | 676 MB | Download |
| chandra-ocr-2.mmproj-f16.gguf | mmproj-f16 | 676 MB | Download |
| chandra-ocr-2.mmproj-f32.gguf | mmproj-f32 | 1.33 GB | Download |
| chandra-ocr-2.mmproj-q8_0.gguf | mmproj-q8_0 | 367 MB | Download |
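For local inference, the quantized model must be paired with one of the mmproj (vision projector) files listed above. A minimal sketch using llama.cpp's multimodal CLI follows; the file names come from the table, but the local paths, the sample image `page.png`, and the availability of the `llama-mtmd-cli` binary (shipped with recent llama.cpp builds) are assumptions to verify against your setup.

```shell
# Download the recommended quant and a matching vision projector
# (repo ID taken from this card; adjust if it differs).
huggingface-cli download prithivMLmods/chandra-ocr-2-GGUF \
  chandra-ocr-2.Q8_0.gguf chandra-ocr-2.mmproj-f16.gguf --local-dir .

# Run OCR on an image with llama.cpp's multimodal CLI
# (binary name may differ in older llama.cpp releases).
llama-mtmd-cli \
  -m chandra-ocr-2.Q8_0.gguf \
  --mmproj chandra-ocr-2.mmproj-f16.gguf \
  --image page.png \
  -p "Convert this page to markdown."
```

The mmproj file encodes the image into embeddings the language model can attend to; without it, only text prompts will work.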

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
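To make the size trade-offs concrete, here is a small sketch that computes each quant's size relative to the F16 baseline, using the file sizes from the table above (the selection of quants shown is illustrative, not exhaustive):

```python
# Approximate file sizes in GB, taken from the model-files table above.
F16_GB = 9.7
quant_sizes_gb = {
    "Q2_K": 2.12,
    "Q3_K_S": 2.34,
    "Q4_K_M": 3.07,
    "Q5_K_M": 3.51,
    "Q8_0": 5.16,
}

def relative_size(quant: str) -> float:
    """Return the quant's file size as a fraction of the F16 baseline."""
    return round(quant_sizes_gb[quant] / F16_GB, 3)

for name in quant_sizes_gb:
    print(f"{name}: {relative_size(name):.1%} of F16")
```

So the recommended Q8_0 is roughly half the size of F16, while Q4_K_M is under a third; smaller quants trade further size savings for accuracy.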

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Model Details

Format: GGUF
Parameters: 5B
Architecture: qwen35
