See Qwen3-Coder-Next MLX in action: demonstration video

Tested on an M3 Ultra with 512 GB of RAM using Inferencer app v1.9.6; a minimal mlx-lm sketch for running the model yourself follows the figures below.

  • Single inference: ~68 tokens/s @ 1000 tokens
  • Batched inference: ~159 total tokens/s across six concurrent inferences
  • Memory usage: ~83.5 GiB
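
The figures above were measured with the Inferencer app. If you want to try the model outside that app, one option is the open-source mlx-lm package; the sketch below follows its standard load/generate pattern and uses the repo id from the model tree on this page. Throughput will differ from the numbers above.

```python
# Minimal sketch: running this model with the open-source mlx-lm library.
# This is not the Inferencer app's code; it is the standard mlx-lm workflow.
from mlx_lm import load, generate

# Download and load the quantized weights from the Hugging Face Hub.
model, tokenizer = load("inferencerlabs/Qwen3-Coder-Next-MLX-9bit")

prompt = "Write a Python function that reverses a linked list."

# Apply the model's chat template if one is provided.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

# Generate up to 1000 tokens; verbose=True prints tokens/s statistics.
text = generate(model, tokenizer, prompt=prompt, max_tokens=1000, verbose=True)
print(text)
```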

The q9 quant typically achieves near-lossless quality in our coding tests:

Quantization  Perplexity  Token Accuracy  Missed Divergence
q3.5          168.0       43.45%          72.57%
q4.5          1.33593     91.65%          27.61%
q5.5          1.23437     95.05%          17.28%
q6.5          1.21875     96.95%          12.03%
q8.5          1.21093     97.55%          10.50%
q9            1.21093     97.55%          10.50%
Base          1.20312     100.0%          0.000%
  • Perplexity: measures the model's confidence in predicting the base model's tokens (lower is better)
  • Token Accuracy: the percentage of base-model tokens the quantized model reproduces exactly
  • Missed Divergence: measures the severity of misses, i.e., how far off the model was when it missed a token; one possible implementation of these metrics is sketched below
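
The exact metric definitions used by Inferencer are not published here, so the following is a hypothetical sketch, not the official evaluation code: it teacher-forces the quantized model on token IDs produced by the base model and reports perplexity and greedy token accuracy. The function name and formulas are assumptions; Missed Divergence is omitted because its precise definition is not given.

```python
import mlx.core as mx


def score_against_base(model, base_tokens):
    """Teacher-force a quantized model on tokens the base model produced
    and report perplexity and greedy token accuracy (assumed formulas)."""
    inputs = mx.array(base_tokens[:-1])[None]  # (1, seq_len) context
    targets = mx.array(base_tokens[1:])        # base model's next-token choices

    logits = model(inputs)[0]                  # (seq_len, vocab_size)
    log_probs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)

    # Perplexity: exp of the mean negative log-likelihood the quantized
    # model assigns to the base model's token choices (lower is better).
    nll = -mx.take_along_axis(log_probs, targets[:, None], axis=-1).mean()
    perplexity = mx.exp(nll).item()

    # Token accuracy: how often greedy decoding matches the base token.
    greedy = mx.argmax(logits, axis=-1)
    accuracy = (greedy == targets).astype(mx.float32).mean().item()

    return perplexity, accuracy
```
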
Quantized with a modified version of MLX (the stock workflow is sketched below for comparison).
For more details, see the demonstration video or visit the Qwen3-Coder-Next page.
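
For reference, stock mlx-lm quantization looks like the sketch below. It only exposes uniform bit widths, which is why the mixed ~9-bit scheme published here required a modified MLX; the base repo id is a placeholder, not a confirmed path.

```python
from mlx_lm import convert

# Standard (unmodified) mlx-lm quantization path, shown for comparison only.
# The 9-bit mixed scheme used for this model is NOT reproducible with these
# stock options.
convert(
    "Qwen/Qwen3-Coder-Next",           # placeholder for the base model repo
    mlx_path="qwen3-coder-next-8bit",  # where the converted weights are written
    quantize=True,
    q_bits=8,                          # stock widths are uniform, e.g. 4 or 8 bits
    q_group_size=64,                   # group size for affine quantization
)
```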

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate; you are responsible for verifying any information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.

Downloads last month: 897
Model size: 80B params
Tensor types: BF16 · U32
Format: MLX (Safetensors)

Model tree for inferencerlabs/Qwen3-Coder-Next-MLX-9bit: this model is one of 42 quantized versions of the base model.