GLM-OCR 3-bit MLX

3-bit quantized MLX conversion of zai-org/GLM-OCR for on-device inference on Apple Silicon.

  • Base model: zai-org/GLM-OCR (MIT license)
  • Quantization: 3-bit via mlx-vlm
  • Size: ~1.1 GB
  • Use case: Business card OCR / document text extraction (see the usage sketch below)

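A minimal inference sketch using mlx-vlm's Python API is shown below. The exact `load`/`generate`/`apply_chat_template` signatures can vary between mlx-vlm releases, and the prompt text and image path are placeholders.

```python
# Minimal OCR sketch with mlx-vlm on Apple Silicon.
# NOTE: function signatures follow recent mlx-vlm releases and may differ in
# older versions; the image path and prompt are placeholders.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "matthewdubois/GLM-OCR-3bit-mlx"
model, processor = load(model_path)
config = load_config(model_path)

images = ["business_card.png"]  # placeholder input image
prompt = apply_chat_template(
    processor, config, "Extract all text from this image.", num_images=len(images)
)
output = generate(model, processor, prompt, images, verbose=False)
print(output)
```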
Converted using mlx_vlm.convert --q-bits 3.
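For reference, a sketch of the same conversion step through mlx-vlm's Python API. The keyword names below are assumptions that mirror the CLI flags; the command-line form above is the one actually used.

```python
# Sketch of the conversion step; keyword names are assumed to mirror the
# mlx_vlm.convert CLI flags (--hf-path, --mlx-path, -q, --q-bits).
from mlx_vlm.convert import convert

convert(
    hf_path="zai-org/GLM-OCR",    # base model on the Hugging Face Hub
    mlx_path="GLM-OCR-3bit-mlx",  # output directory for the quantized weights
    quantize=True,                # enable weight quantization
    q_bits=3,                     # 3-bit weights, per --q-bits 3
)
```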
