---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-Next/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-Coder-Next
base_model_relation: quantized
quantization: exl3
library_name: exllamav3
---
Quantization was performed using [exllamav3 v0.0.20](https://github.com/turboderp-org/exllamav3).
> **Note:** exllamav3 v0.0.21 included [fixes to the Qwen3-Next inference pipeline](https://github.com/turboderp-org/exllamav3/commit/d3e02500e0dac2d67ca7fc9babed5d40dcf33689). These quants still work on older versions, but exllamav3 v0.0.21 or later is recommended for best results.
> **Update:** The KL-divergence anomaly observed with exllamav3 v0.0.20 (where 8bpw had a higher KL-divergence than 5bpw) was resolved in v0.0.22. The measurements have been re-run, and the table below reflects the corrected values: all quantization levels now show the expected monotonic decrease in KL-divergence as bpw increases.
## Measurements table for exllamav3 v0.0.22
| Quant | Size (GB) | KL-div (quant, orig) | KL-div (orig, quant) | Perplexity | Top-K K=1 | Top-K K=2 | Top-K K=3 | Top-K K=4 | Top-K K=5 |
|---|---|---|---|---|---|---|---|---|---|
| [2.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/2.0bpw) | 20 | 0.41110482 | 0.45863510 | 8.85093561 | 0.7669 | 0.4276 | 0.1963 | 0.0789 | 0.0296 |
| [3.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/3.0bpw) | 29 | 0.16125607 | 0.16561898 | 8.06653676 | 0.8536 | 0.5947 | 0.3567 | 0.1923 | 0.0960 |
| [4.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/4.0bpw) | 38 | 0.05995151 | 0.06084711 | 7.72232643 | 0.9079 | 0.7220 | 0.5146 | 0.3383 | 0.2098 |
| [5.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/5.0bpw) | 47 | 0.02719813 | 0.02733112 | 7.68017339 | 0.9376 | 0.8013 | 0.6315 | 0.4682 | 0.3279 |
| [6.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/6.0bpw) | 57 | 0.01553572 | 0.01543948 | 7.72972846 | 0.9522 | 0.8440 | 0.7019 | 0.5538 | 0.4183 |
| [7.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/7.0bpw) | 66 | 0.01088568 | 0.01090296 | 7.71071654 | 0.9611 | 0.8696 | 0.7452 | 0.6116 | 0.4822 |
| [8.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/8.0bpw) | 75 | 0.00899026 | 0.00897780 | 7.70958606 | 0.9652 | 0.8816 | 0.7673 | 0.6398 | 0.5159 |
| original | 148 | - | - | 7.70773351 | - | - | - | - | - |
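For readers unfamiliar with the metrics above, here is an illustrative sketch (not the exllamav3 evaluation script) of what they measure: KL-divergence in both directions and top-token agreement between the quantized model's output distribution and the original model's. The random logits below are stand-ins for real model outputs; shapes and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    # Mean KL(p || q) over token positions, in nats.
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# Fake logits: (token positions, vocabulary size). The "quantized" logits
# are the originals plus small noise, mimicking quantization error.
orig_logits = rng.normal(size=(128, 1000))
quant_logits = orig_logits + rng.normal(scale=0.1, size=orig_logits.shape)

p, q = softmax(orig_logits), softmax(quant_logits)
print("KL-div (quant, orig):", kl_div(q, p))
print("KL-div (orig, quant):", kl_div(p, q))

# Top-1 agreement: fraction of positions where both models pick the same token.
top1 = float(np.mean(p.argmax(axis=-1) == q.argmax(axis=-1)))
print("Top-1 agreement:", top1)
```

Lower KL-divergence and higher top-K agreement mean the quant's next-token distribution tracks the original model more closely, which is why the higher-bpw rows in the table are preferable when VRAM allows.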
<details>
<summary>Measurements table for exllamav3 v0.0.20 (deprecated)</summary>

| Quant | Size (GB) | KL-div (quant, orig) | KL-div (orig, quant) | Perplexity | Top-K K=1 | Top-K K=2 | Top-K K=3 | Top-K K=4 | Top-K K=5 |
|---|---|---|---|---|---|---|---|---|---|
| [2.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/2.0bpw) | 20 | 0.52142615 | 0.52278535 | 23.73415073 | 0.6961 | 0.3484 | 0.1402 | 0.0498 | 0.0167 |
| [3.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/3.0bpw) | 29 | 0.24568403 | 0.24622221 | 20.58547252 | 0.7866 | 0.4894 | 0.2579 | 0.1190 | 0.0513 |
| [4.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/4.0bpw) | 38 | 0.15672405 | 0.15667850 | 19.63543922 | 0.8338 | 0.5783 | 0.3511 | 0.1923 | 0.0990 |
| [5.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/5.0bpw) | 47 | 0.12297954 | 0.12280908 | 19.81022066 | 0.8562 | 0.6287 | 0.4088 | 0.2463 | 0.1388 |
| [6.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/6.0bpw) | 57 | 0.10448053 | 0.10464503 | 19.88056610 | 0.8707 | 0.6590 | 0.4502 | 0.2848 | 0.1704 |
| [7.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/7.0bpw) | 66 | 0.10106506 | 0.10081614 | 19.61846442 | 0.8730 | 0.6666 | 0.4614 | 0.2983 | 0.1821 |
| [8.0bpw](https://huggingface.co/NeuroSenko/Qwen3-Coder-Next-exl3/tree/8.0bpw) | 75 | 0.13291914 | 0.13419860 | 19.85572412 | 0.8631 | 0.6503 | 0.4468 | 0.2885 | 0.1771 |
| original | 148 | - | - | 19.78538866 | - | - | - | - | - |
</details>
## Tool Calls Support for Qwen/GLM Models
The official tabbyAPI doesn't support tool calls for Qwen and GLM models yet.
If you're using Qwen-Code, OpenClaw, or similar software that needs tool-call support, you can use the `tools-support` branch of [my fork](https://github.com/NeuroSenko/tabbyAPI/tree/tools-support):
**Clone directly:**
```bash
git clone -b tools-support https://github.com/NeuroSenko/tabbyAPI
```
**Or add to existing tabbyAPI installation:**
```bash
git remote add neurosenko https://github.com/NeuroSenko/tabbyAPI
git fetch neurosenko
git checkout -b tools-support neurosenko/tools-support
```
This branch includes native tool calling support for Qwen and GLM model families. |
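To see what tool calling looks like from the client side, here is a hedged sketch of a minimal OpenAI-style chat-completion payload with a tool definition, as consumed by OpenAI-compatible servers such as tabbyAPI. The model name, the `get_weather` tool, and the endpoint URL in the comment are placeholders for illustration, not values taken from this repository.

```python
import json

# Placeholder request body; only the OpenAI-style "tools" schema is the point here.
payload = {
    "model": "Qwen3-Coder-Next-exl3",  # placeholder model name
    "messages": [
        {"role": "user", "content": "What is the weather in Berlin?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)
print(body)
# Send it to a running server (host/port are placeholders), e.g.:
# curl http://127.0.0.1:5000/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"
```

If the server and model support tool calling, the response's message will carry a `tool_calls` entry naming the function and its JSON arguments instead of (or alongside) plain text.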