# Qwen3.5-4B-MLX-4bit
This is a 4-bit quantized MLX version of Qwen/Qwen3.5-4B for Apple Silicon.
## Model Details
- Original Model: Qwen/Qwen3.5-4B
- Quantization: 4-bit (5.347 bits per weight)
- Group Size: 64
- Format: MLX SafeTensors
- Framework: mlx-vlm
- Disk Size: ~2.9 GB
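The headline "4-bit" figure refers only to the quantized weight payload; with group size 64 each group also stores quantization parameters, and some tensors (embeddings, norms) remain in higher precision, which is why the effective rate is above 4 bits per weight. A back-of-the-envelope sketch (the fp16 scale-plus-bias-per-group layout is an assumption about MLX's affine quantization, not something stated in this card):

```python
# Rough estimate of effective bits per weight for grouped 4-bit quantization.
# Assumption: each group of 64 weights stores an fp16 scale and an fp16 bias.
q_bits = 4
group_size = 64
overhead_bits = 16 + 16  # assumed fp16 scale + fp16 bias per group

bpw_quantized = q_bits + overhead_bits / group_size
print(bpw_quantized)  # 4.5 bits/weight for the quantized tensors alone

# The model-wide average reported above (5.347 bpw) is higher because
# embeddings, norms, and other small tensors stay in bf16/fp32.
params = 4e9  # ~4B parameters
disk_bytes = params * 5.347 / 8
print(f"{disk_bytes / 1e9:.2f} GB")  # consistent with the ~2.9 GB on disk
```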
## Conversion Details
This model was converted using mlx-vlm from the pc/fix-qwen35-predicate branch, which includes fixes for Qwen3.5 model support (proper handling of MoE gate layers, shared_expert_gate, and A_log casting).
Conversion command:
```bash
python3 -m mlx_vlm convert \
  --hf-path "Qwen/Qwen3.5-4B" \
  --mlx-path "./Qwen3.5-4B-MLX-4bit" \
  -q --q-bits 4 --q-group-size 64
```
## Important Note
A better-optimized conversion may become available from Prince Canuma (@Blaizzy), the mlx-vlm maintainer. Check the mlx-community organization for updated versions once official Qwen3.5 support is merged into the main mlx-vlm branch.
## Related Models
- bf16 (full precision): mlx-community/Qwen3.5-4B-MLX-bf16
- 8-bit quantized: mlx-community/Qwen3.5-4B-MLX-8bit
- Original: Qwen/Qwen3.5-4B
## Usage
```python
from mlx_vlm import load, generate

model, processor = load("mlx-community/Qwen3.5-4B-MLX-4bit")

output = generate(
    model,
    processor,
    prompt="Describe this image.",
    image="path/to/image.jpg",
    max_tokens=512,
)
print(output)
```
CLI:
```bash
python3 -m mlx_vlm.generate \
  --model mlx-community/Qwen3.5-4B-MLX-4bit \
  --image path/to/image.jpg \
  --prompt "Describe this image."
```
## License
This model inherits the Apache 2.0 license from the original Qwen model.