# mlx-community/GLM-4.1V-9B-Thinking-4bit

This model was converted to MLX format from zai-org/GLM-4.1V-9B-Thinking using mlx-vlm version 0.3.5. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/GLM-4.1V-9B-Thinking-4bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
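mlx-vlm can also be driven from Python. The sketch below is a minimal example assuming the `load`/`generate` helpers and the `apply_chat_template` utility exposed by recent mlx-vlm releases; the keyword arguments mirror the CLI flags above, but exact signatures can vary between versions, so treat this as a starting point rather than a definitive recipe.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/GLM-4.1V-9B-Thinking-4bit"

# Load the 4-bit quantized model weights and the matching processor.
model, processor = load(model_path)
config = load_config(model_path)

# <path_to_image> is a placeholder; substitute a local image path or URL.
images = ["<path_to_image>"]
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# max_tokens/temperature correspond to the CLI flags shown above (assumed kwargs).
output = generate(model, processor, formatted_prompt, images,
                  max_tokens=100, temperature=0.0, verbose=False)
print(output)
```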