Instructions for using spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision with MLX:
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision")
config = load_config("spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision with Pi:
Start the MLX server
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision"
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision with Hermes Agent:
Start the MLX server
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision"
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision
Run Hermes
hermes
Qwen3.5 9B optimized to run on Mac.
- A mixed-precision quant that balances speed, memory, and accuracy.
- 4-bit baseline with important layers kept at 8-bit and BF16 (a quick way to inspect the per-layer assignment is sketched below).
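To see which layers got which precision, you can read the quantization section of the repo's config.json. This is a minimal sketch; the exact layout of that section depends on the mlx-lm version used to produce the quant.

# Inspect per-layer precision from the repo's config.json.
# Assumption: per-module overrides, if any, live under the "quantization" key.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    "spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision", "config.json"
)
with open(config_path) as f:
    quant = json.load(f).get("quantization", {})

# Global defaults (bits / group_size) plus any per-module entries
for key, value in quant.items():
    print(key, value)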
Usage
# Start server at http://localhost:8080/chat/completions
uvx --from mlx-vlm --with torchvision \
mlx_vlm.server \
--host 127.0.0.1 \
--port 8080 \
--model spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision
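Once the server is up, you can send it a chat request. A minimal client sketch, assuming the /chat/completions endpoint shown above accepts OpenAI-style multimodal messages with image_url content parts and a max_tokens field:

# Minimal client sketch for the local mlx_vlm.server instance started above.
# Assumes OpenAI-style multimodal messages are accepted at /chat/completions.
import requests

response = requests.post(
    "http://localhost:8080/chat/completions",
    json={
        "model": "spicyneuron/Qwen3.5-9B-MLX-5.6bit-vision",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "http://images.cocodataset.org/val2017/000000039769.jpg"
                        },
                    },
                ],
            }
        ],
        "max_tokens": 256,
    },
    timeout=300,
)
print(response.json())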
Benchmarks
| metric | this model |
|---|---|
| bpw | 5.554 |
| base memory (GB) | 5.789 |
| peak memory (GB, 1024/512) | 7.446 |
| prompt tok/s (1024) | 1481.661 ± 6.709 |
| gen tok/s (512) | 91.086 ± 0.101 |
| kl mean | 0.032 ± 0.002 |
| kl p95 | 0.069 ± 0.002 |
| perplexity | 3.739 ± 0.018 |
| winogrande | 0.660 ± 0.021 |
Tested on a Mac Studio M3 Ultra. KL divergence is approximate: it is computed from the top-k logits rather than the full distribution (a rough sketch of that approximation follows the commands below). Here are the commands:
mlx_lm.kld --baseline-model path/to/mlx-full-precision
mlx_lm.perplexity --sequence-length 2048 --seed 123
mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.evaluate --tasks winogrande --seed 123 --num-shots 0 --limit 500
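For intuition, the top-k approximation works roughly like the following sketch. This is illustrative only, not the mlx_lm implementation, and the helper name is made up.

# Rough sketch of a top-k KL approximation between a full-precision baseline
# and the quantized model (illustrative only; not the mlx_lm code).
import numpy as np

def topk_kl(baseline_logits, quant_logits, k=100):
    def softmax(x):
        x = x - x.max()
        e = np.exp(x)
        return e / e.sum()

    p = softmax(np.asarray(baseline_logits, dtype=np.float64))
    q = softmax(np.asarray(quant_logits, dtype=np.float64))

    # Sum only over the baseline's k most likely tokens; the ignored tail
    # is what makes the reported KL an approximation.
    top = np.argsort(p)[-k:]
    return float(np.sum(p[top] * np.log(p[top] / q[top])))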
Methodology
Quantized with an mlx-lm fork, drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX quantization options differ from llama.cpp's, but the principles are the same (a rough sketch follows the list below):
- Sensitive layers like MoE routing, attention, and output embeddings get higher precision
- More tolerant layers like MoE experts get lower precision
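As an illustration of the idea, recent mlx-lm versions accept a quant_predicate that returns per-module bit settings. This is not the exact recipe used for this repo (which came from a fork), the availability and signature of the hook depend on your mlx-lm version, and the source-model path below is a placeholder.

# Illustrative mixed-precision quantization sketch using mlx_lm's
# quant_predicate hook (not the exact rules used for this repo).
from mlx_lm import convert

def mixed_precision(path, module, config):
    # Sensitive modules (embeddings, attention, routing) -> higher precision
    if any(name in path for name in ("embed_tokens", "lm_head", "self_attn", "gate")):
        return {"bits": 8, "group_size": 64}
    # Tolerant modules (e.g. MoE experts / MLP weights) -> 4-bit baseline
    return {"bits": 4, "group_size": 64}

convert(
    "path/to/source-model",           # placeholder for the full-precision model
    mlx_path="qwen-mixed-precision-mlx",
    quantize=True,
    quant_predicate=mixed_precision,
)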