Mixed Precision GGUF layer quantization of Deepseek R1 Distill Qwen 7B by deepseek-ai
Original model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size. This particular quant achieves a ~6.24 GB GGUF with improved performance compared to a ~6.25 GB Q6_K GGUF. The quants employed are all K-quants, avoiding the slow CPU and older-GPU processing of IQ quants. For this file the layer quants are as follows:
```
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0  ffn_d = q8_0
Q6_K_L : attn_v = q8_0  attn_o = q8_0  ffn_d = q8_0
```
```shell
LAYER_TYPES='[
[0 ,"Q6_K_L"],[1 ,"Q6_K_M"],[2 ,"Q6_K_S"],[3 ,"Q5_K_L"],[4 ,"Q5_K_M"],[5 ,"Q5_K_M"],[6 ,"Q5_K_M"],
[7 ,"Q5_K_L"],[8 ,"Q5_K_L"],[9 ,"Q5_K_L"],[10,"Q5_K_L"],[11,"Q5_K_L"],[12,"Q5_K_L"],[13,"Q5_K_L"],
[14,"Q6_K_S"],[15,"Q6_K_S"],[16,"Q6_K_S"],[17,"Q6_K_S"],[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],
[21,"Q6_K_M"],[22,"Q6_K_M"],[23,"Q6_K_M"],[24,"Q6_K_L"],[25,"Q6_K_L"],[26,"Q6_K_L"],[27,"Q8_0"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
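As a sanity check, the LAYER_TYPES list above is valid JSON, so the per-layer assignment can be tallied programmatically. A minimal Python sketch (not part of the quantization pipeline, just an inspection aid):

```python
import json
from collections import Counter

# The per-layer assignment from LAYER_TYPES above, parsed as JSON.
layer_types = json.loads('''[
[0,"Q6_K_L"],[1,"Q6_K_M"],[2,"Q6_K_S"],[3,"Q5_K_L"],[4,"Q5_K_M"],[5,"Q5_K_M"],[6,"Q5_K_M"],
[7,"Q5_K_L"],[8,"Q5_K_L"],[9,"Q5_K_L"],[10,"Q5_K_L"],[11,"Q5_K_L"],[12,"Q5_K_L"],[13,"Q5_K_L"],
[14,"Q6_K_S"],[15,"Q6_K_S"],[16,"Q6_K_S"],[17,"Q6_K_S"],[18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],
[21,"Q6_K_M"],[22,"Q6_K_M"],[23,"Q6_K_M"],[24,"Q6_K_L"],[25,"Q6_K_L"],[26,"Q6_K_L"],[27,"Q8_0"]
]''')

# Tally how many of the 28 layers use each recipe.
counts = Counter(recipe for _, recipe in layer_types)
print(counts)
```

The heavier Q6_K_L/Q8_0 recipes are concentrated in the first and last layers, with the lighter Q5_K recipes in the early-middle of the stack.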
The layer quants were optimized for good performance across a small set of curated test prompts, and for stability under greedy sampling (minimizing infinite generations across the prompt test set).
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| Q6_K | 6.25e9 | 22.4 | Q6_K with default embedding and output |
| Q6_K_H | 6.24e9 | 22.5 | Hybrid quant with Q6_K embedding Q6_K output |
Usage:
This model is an RL reasoning model, created by distilling DeepSeek R1 onto the Qwen 2.5 Math 7B base. The usable context length is mostly undocumented: the model's config reports 128k, but it was almost certainly not natively trained at 128k. For quant layer optimization, context was configured at 32k, the base context of the Qwen 2.5 series. Although Qwen 2.5 Math was apparently fine-tuned on a smaller context window, this was most likely expanded to at least 32k during the R1 distillation with long reasoning traces. Performance was found to be good with context set to 32k, which is the recommended setting.
The model can be speculatively decoded using Qwen3 0.6B as the draft model. Approximate performance on a 4070, with context and weights in VRAM, using a custom downstream greedy speculator with fixed speculative block length ND and dynamic vocab translation:
| Prompt | ND | Gen TPS | Comment |
|---|---|---|---|
| goldcoin | 0 | 71 | non code |
| goldcoin | 4 | 91 | non code |
The speculative boost is not large due to the difficulty of speculating the target reasoning model, which goes off into reflections at extremely unpredictable times.
With straightforward greedy sampling the model fails in both cases to give the right answer on the goldcoin prompt, but it does reasonably well with greedy sampling on a series of other fairly tricky test prompts. With temperature sampling at 0.5 it should produce the right response from time to time.
goldcoin:
I have 10 apples. I find 3 gold coins in the bottom of a river. The river runs near a big city that has something to do with what I can spend the coins on. I then lose 4 apples but gain a gold coin. Three birds run into my path and drop 6 apples each. I play an online game and win 6 gold coins but I have to share them equally with my 2 teammates. I buy apples for all the coins I have. The price of an apple is 0.5 coins. How many apples do I have? And where is the river? Use step-by-step reasoning to solve this problem.
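For reference, the intended apple count under the straightforward reading of each event can be checked in a few lines of Python (the "where is the river" part is a riddle with no numeric answer and is left aside):

```python
# Step-by-step arithmetic for the goldcoin prompt, assuming the
# straightforward reading of each event.
apples, coins = 10, 0
coins += 3                   # find 3 gold coins in the river
apples -= 4                  # lose 4 apples
coins += 1                   # gain a gold coin
apples += 3 * 6              # three birds drop 6 apples each
coins += 6 // 3              # 6 won coins split equally among 3 players
apples += int(coins / 0.5)   # spend all coins at 0.5 coins per apple
coins = 0
print(apples)  # 36
```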
The model was tested with some code prompts and found to be essentially useless for coding: it cannot reliably generate working code on even simple code-gen tasks.
Benchmarks:
A full set of math benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the file from below:
| Link | Type | Size (e9 bytes) | Notes |
|---|---|---|---|
| Deepseek-R1-Distill-Qwen-7B.Q6_K_H.gguf | Q6_K_H | 6.2e9 B | ~Q6_K size, higher performance |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository.
Model tree for steampunque/Deepseek-R1-Distill-Qwen-7B-MP-GGUF:
- Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B