# LightOnOCR-2-1B GGUF

GGUF quantized versions of lightonai/LightOnOCR-2-1B for use with llama.cpp.

## Model Description

LightOnOCR-2-1B is a 1B-parameter end-to-end vision-language model for OCR, converting documents (PDFs, scans, images) into clean, naturally ordered text.

## Highlights

- **Speed:** 3.3× faster than Chandra OCR, 1.7× faster than OlmOCR
- **Efficiency:** under $0.01 per 1,000 pages on an H100
- **End-to-end:** fully differentiable; no external OCR pipeline
- **Versatile:** handles tables, receipts, forms, multi-column layouts, and math notation

## Available Files

| File | Size | Description |
|------|------|-------------|
| `LightOnOCR-2-1B-f16.gguf` | 1.1 GB | Language model (F16, highest quality) |
| `LightOnOCR-2-1B-Q8_0.gguf` | 610 MB | Language model (Q8_0, near-lossless) |
| `LightOnOCR-2-1B-Q4_K_M.gguf` | 378 MB | Language model (Q4_K_M, balanced) |
| `LightOnOCR-2-1B-mmproj-f16.gguf` | 781 MB | Vision encoder + projector (required) |

**Note:** Do not quantize the vision encoder (mmproj); quantizing it significantly degrades image-understanding quality.
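To fetch the weights programmatically, the files in the table above can be pulled with `huggingface_hub`. A minimal sketch, assuming this card's repo id (`wangjazz/LightOnOCR-2-1B-gguf`):

```python
# Files needed for inference (names from the table above). The repo id is
# an assumption taken from this card's URL.
REPO_ID = "wangjazz/LightOnOCR-2-1B-gguf"
FILES = [
    "LightOnOCR-2-1B-Q8_0.gguf",        # language model weights
    "LightOnOCR-2-1B-mmproj-f16.gguf",  # vision encoder + projector (required)
]

def download(repo_id: str = REPO_ID, files=FILES, dest: str = "."):
    """Fetch the GGUF files; huggingface_hub is imported lazily so the
    file list can be inspected without the package installed."""
    from huggingface_hub import hf_hub_download
    return [hf_hub_download(repo_id, f, local_dir=dest) for f in files]
```

Swap the Q8_0 entry for the F16 or Q4_K_M file as needed; the mmproj file is required in every configuration.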

## Usage with llama.cpp

### Build llama.cpp

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
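On NVIDIA hardware, offloading layers with `-ngl` requires a CUDA-enabled build; the flag below is llama.cpp's standard CMake option (Metal is enabled by default on Apple Silicon builds, so no extra flag is needed there):

```shell
# GPU build: enable CUDA offload (NVIDIA only)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```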

### Run OCR

```bash
# Using F16 (highest quality)
./build/bin/llama-mtmd-cli \
    -m LightOnOCR-2-1B-f16.gguf \
    --mmproj LightOnOCR-2-1B-mmproj-f16.gguf \
    --image your-document.png \
    -ngl 99 \
    -c 4096 \
    -n 1000 \
    --temp 0.2 \
    --repeat-penalty 1.15 \
    --repeat-last-n 128

# Using Q4_K_M (smaller, faster)
./build/bin/llama-mtmd-cli \
    -m LightOnOCR-2-1B-Q4_K_M.gguf \
    --mmproj LightOnOCR-2-1B-mmproj-f16.gguf \
    --image your-document.png \
    -ngl 99 \
    -c 4096 \
    -n 1000 \
    --temp 0.2 \
    --repeat-penalty 1.15
```
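For more than a handful of pages, the CLI call above can be wrapped in a loop. A sketch driving `llama-mtmd-cli` via `subprocess`, assuming the binary and model paths from the commands above (adjust for your setup):

```python
# Batch OCR sketch: invoke llama-mtmd-cli once per image.
# Binary/model paths mirror the commands above and are assumptions.
import subprocess
from pathlib import Path

def build_cmd(image: Path,
              model: str = "LightOnOCR-2-1B-Q4_K_M.gguf",
              mmproj: str = "LightOnOCR-2-1B-mmproj-f16.gguf",
              binary: str = "./build/bin/llama-mtmd-cli") -> list[str]:
    """Assemble the CLI call with the recommended sampling parameters."""
    return [binary, "-m", model, "--mmproj", mmproj,
            "--image", str(image),
            "-ngl", "99", "-c", "4096", "-n", "1000",
            "--temp", "0.2", "--repeat-penalty", "1.15",
            "--repeat-last-n", "128"]

def ocr_folder(folder: str) -> dict[str, str]:
    """OCR every PNG in `folder`; returns {filename: model output}."""
    return {img.name: subprocess.run(build_cmd(img), capture_output=True,
                                     text=True, check=True).stdout
            for img in sorted(Path(folder).glob("*.png"))}
```

Each page is an independent process, so pages can also be fanned out across workers if throughput matters.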

## Recommended Parameters

| Parameter | Value | Description |
|-----------|-------|-------------|
| `--temp` | 0.2 | Officially recommended temperature |
| `--repeat-penalty` | 1.15 | Prevents repetition (1.1-1.2 optimal) |
| `--repeat-last-n` | 128 | Tokens to consider for penalty |
| `-n` | 1000 | Max output tokens (avoid >1500) |
| `-ngl` | 99 | GPU layers (use all for best speed) |

### Parameter Notes

- **repeat-penalty**: Values above 1.2 may reduce OCR quality
- **-n (max tokens)**: Limiting to ~1000 prevents repetition at end of long documents
- **Image preprocessing**: Render PDFs to PNG at 1540px longest edge
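The preprocessing step above (PDF pages to PNG at a 1540 px longest edge) can be scripted. A sketch using PyMuPDF (`pip install pymupdf`) — an assumption; any renderer that controls output resolution works:

```python
# Render each PDF page to PNG with the longest edge at ~1540 px.
def zoom_for(width: float, height: float, target: int = 1540) -> float:
    """Scale factor that puts the page's longest edge at `target` pixels."""
    return target / max(width, height)

def render_pdf(path: str, target: int = 1540) -> list[str]:
    """Write one PNG per page; returns the output filenames."""
    import fitz  # PyMuPDF, imported lazily
    out = []
    with fitz.open(path) as doc:
        for i, page in enumerate(doc):
            z = zoom_for(page.rect.width, page.rect.height, target)
            pix = page.get_pixmap(matrix=fitz.Matrix(z, z))
            name = f"{path}-page{i:03d}.png"
            pix.save(name)
            out.append(name)
    return out
```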

## Performance (Apple M4 Max)

| Metric | Value |
|--------|-------|
| Image encoding | ~435 ms |
| Image decoding | ~45 ms |
| Prompt processing | ~1,850 tokens/s |
| Text generation | ~228 tokens/s |
| Total time (1000 tokens) | ~8-10 sec |
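A quick lower-bound check on these figures (a sketch; the measured 8–10 s totals are higher, presumably due to model load and other per-run overhead not captured by the per-stage numbers):

```python
# Lower-bound wall-clock estimate from the per-stage figures above.
def estimate_seconds(n_tokens: int,
                     encode_ms: float = 435,   # image encoding
                     decode_ms: float = 45,    # image decoding
                     gen_tps: float = 228) -> float:
    """Image encode/decode plus token generation; ignores load time."""
    return (encode_ms + decode_ms) / 1000 + n_tokens / gen_tps
```

At 1000 output tokens this gives roughly 4.9 s, so text generation, not image encoding, dominates per-page latency.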

## Quantization Details

| Format | Bits/Weight | Size Reduction | Quality Impact |
|--------|-------------|----------------|----------------|
| F16 | 16 | - | Baseline |
| Q8_0 | 8 | 45% | Nearly lossless |
| Q4_K_M | 4.5 | 66% | Minimal |
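The table's file sizes are roughly size ≈ params × bits/weight ÷ 8. A sketch; the ~0.55 B language-model parameter count below is inferred from the F16 file size, not stated on this card:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-file size in GB. Ignores metadata and the
    tensors K-quants keep at higher precision, so it underestimates
    mixed-precision formats like Q4_K_M."""
    return n_params * bits_per_weight / 8 / 1e9
```

For example, `gguf_size_gb(0.55e9, 16)` gives ~1.1 GB, matching the F16 file; the Q4_K_M file is somewhat larger than the naive 4.5-bit estimate for the reason noted in the docstring.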

## Credits

- Original model: [lightonai/LightOnOCR-2-1B](https://huggingface.co/lightonai/LightOnOCR-2-1B)
- GGUF conversion: Using [llama.cpp](https://github.com/ggml-org/llama.cpp) convert tools
- Paper: [LightOnOCR: A 1B End-to-End Multilingual Vision-Language Model](https://arxiv.org/pdf/2601.14251)

## License

Apache License 2.0 (same as original model)

## Citation

```bibtex
@misc{lightonocr2_2026,
  title        = {LightOnOCR: A 1B End-to-End Multilingual Vision-Language Model for State-of-the-Art OCR},
  author       = {Said Taghadouini and Adrien Cavaill\`{e}s and Baptiste Aubertin},
  year         = {2026},
  howpublished = {\url{https://arxiv.org/pdf/2601.14251}}
}
```