How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="steampunque/Qwen2.5-Coder-32B-Instruct-MP-GGUF",
	filename="Qwen2.5-Coder-32B-Instruct.Q4_K_H.gguf",
)
llm.create_chat_completion(
	messages = "No input example has been defined for this model task."
)

Mixed Precision GGUF layer quantization of Qwen2.5-Coder-32B-Instruct by Qwen

Original model: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. The quants employed are all K quants, avoiding the slow CPU and older-GPU processing associated with IQ quants.

Q4_K_H layer quants are as follows:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : Q5_K_M + attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K

   LAYER_TYPES='[
   [0 ,"Q5_K_L"],[1 ,"Q5_K_M"],[2 ,"Q5_K_S"],[3 ,"Q4_K_L"],[4 ,"Q4_K_M"],[5 ,"Q4_K_S"],[6 ,"Q3_K_L"],[7 ,"Q3_K_M"],
   [8 ,"Q3_K_L"],[9 ,"Q3_K_M"],[10,"Q3_K_L"],[11,"Q3_K_M"],[12,"Q3_K_L"],[13,"Q3_K_M"],[14,"Q3_K_L"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_L"],[18,"Q3_K_L"],[19,"Q3_K_L"],[20,"Q3_K_L"],[21,"Q3_K_L"],[22,"Q3_K_L"],[23,"Q3_K_L"],
   [24,"Q4_K_S"],[25,"Q3_K_L"],[26,"Q4_K_S"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
   [40,"Q4_K_M"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_S"],[44,"Q4_K_M"],[45,"Q4_K_S"],[46,"Q4_K_M"],[47,"Q4_K_S"],
   [48,"Q4_K_M"],[49,"Q4_K_M"],[50,"Q4_K_M"],[51,"Q4_K_M"],[52,"Q4_K_M"],[53,"Q4_K_M"],[54,"Q4_K_M"],[55,"Q4_K_M"],
   [56,"Q4_K_M"],[57,"Q4_K_L"],[58,"Q4_K_M"],[59,"Q4_K_L"],[60,"Q5_K_S"],[61,"Q5_K_M"],[62,"Q5_K_L"],[63,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"

This quant was optimized over a small set of curated test prompts for code generation ability, then sanity-checked for good performance on HumanEval.

Comparison:

Quant    Size (bytes)   PPL   Comment
IQ4_XS   17.9e9         7.5   -
Q4_K_H   19.4e9         7.5   Hybrid quant with Q4_K embedding, Q6_K output

Usage:

The model can be used for speculative decoding with Qwen2.5-Coder-0.5B-Instruct as the draft model, with no vocab translation needed. It is trained at 32k context, which can be extended to 128k using YaRN:

--rope-scaling yarn --yarn-orig-ctx 32768 --rope-scale 4

For contexts other than 128k, set --rope-scale to the configured context size divided by 32768.0.
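
For example (a minimal worked computation; the 64k target below is just an illustrative value):

   # compute --rope-scale for a desired context length
   orig_ctx = 32768                     # trained context length
   target_ctx = 65536                   # desired context (example: 64k)
   rope_scale = target_ctx / orig_ctx   # -> 2.0, i.e. pass --rope-scale 2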

Approximate performance on 2x 12GB VRAM 4070s with RPC (1 Gb/s local LAN), all weights and context in VRAM (Q = quant, QKV = KV cache type, ND = draft length in tokens, NKV = context size held in the KV cache):

Q        QKV    ND   NKV   gen tps   Comment
Q4_K_H   F16     0   15k        21   No draft
Q4_K_H   F16    12   12k        79   Spec 12
Q4_K_H   Q8_0    0   27k        21   No draft
Q4_K_H   Q8_0   12   21k        78   Spec 12

For speculation, a fixed-length ND=12 token draft was used with a custom downstream speculator, giving roughly a 3.8x generation speedup on this prompt (79 vs. 21 tps). The test prompt is the first HumanEval problem:

generate python code for the described function header:

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """

Benchmarks:

A full set of code evals for the quant is given here: https://huggingface.co/spaces/steampunque/benchlm

Download the file below:

Link                                     Type     Size (bytes)   Notes
Qwen2.5-Coder-32B-Instruct.Q4_K_H.gguf   Q4_K_H   19.4e9         better code gen performance than IQ4_XS with reasonable size

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
