# GLM-OCR 3-bit MLX

A 3-bit quantized MLX conversion of zai-org/GLM-OCR for on-device inference on Apple Silicon.
- Base model: zai-org/GLM-OCR (MIT license)
- Quantization: 3-bit via mlx-vlm
- Size: ~1.1 GB
- Use case: Business card OCR / document text extraction
Converted using `mlx_vlm.convert --q-bits 3`.
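As a sketch of how a conversion like this is typically produced and tested with the mlx-vlm CLI (flag names such as `--hf-path`, `--mlx-path`, and `--q-bits` are assumptions based on common mlx-vlm usage and may differ between versions; the image path is a placeholder):

```shell
# Quantize the base model to 3-bit MLX weights (run on an Apple Silicon Mac).
python -m mlx_vlm.convert \
    --hf-path zai-org/GLM-OCR \
    --mlx-path GLM-OCR-3bit-mlx \
    -q --q-bits 3

# Quick smoke test: extract text from a business card image.
python -m mlx_vlm.generate \
    --model GLM-OCR-3bit-mlx \
    --image business_card.jpg \
    --prompt "Extract all text from this image." \
    --max-tokens 512
```

The converted weights can also be loaded directly from the Hub by passing `matthewdubois/GLM-OCR-3bit-mlx` as the model path.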
- Parameters: ~0.2B
- Tensor types: BF16 / U32