This model was converted to MLX format from Qwen/Qwen3.5-27B using oMLX v0.2.24 with oQ Quantization.
- Quantization: 4-bit
- Base model: Qwen/Qwen3.5-27B
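
Models in MLX format are typically loaded with the `mlx-lm` Python package (`pip install mlx-lm`). A minimal usage sketch follows; the repo id below is a placeholder, not the confirmed path of this model:

```python
# Sketch: running an MLX-format model with the mlx-lm package.
# Assumes `pip install mlx-lm` and an Apple Silicon Mac.
from mlx_lm import load, generate

# Placeholder repo id -- substitute this model's actual Hugging Face path.
model, tokenizer = load("mlx-community/Qwen3.5-27B-4bit")

prompt = "Explain MLX quantization in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

Note that a 4-bit 27B model still requires roughly 16 GB of unified memory to run comfortably.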