GGUF hybrid layer quantization of phi-4 by microsoft

Original model: https://huggingface.co/microsoft/phi-4

The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This quant is sized at approximately Q4_K_M bits per weight. The quants employed are all K quants to avoid slow processing of IQ quants on CPUs or older GPUs. For this file the Q4_K_H layer quants are as follows:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0, attn_o = q6_k, ffn_d = q6_k
Q6_K_S : Q6_K

   LAYER_TYPES='[
   [0 ,"Q5_K_M"],[1 ,"Q5_K_S"],[2 ,"Q4_K_L"],[3 ,"Q4_K_M"],[4 ,"Q4_K_S"],[5 ,"Q4_K_S"],[6 ,"Q4_K_S"],[7 ,"Q3_K_L"],
   [8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
   [16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
   [24,"Q4_K_S"],[25,"Q3_K_L"],[26,"Q4_K_S"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
   [32,"Q4_K_S"],[33,"Q4_K_M"],[34,"Q4_K_M"],[35,"Q4_K_L"],[36,"Q5_K_S"],[37,"Q5_K_M"],[38,"Q5_K_L"],[39,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"

Comparison:

Quant    size     PPL   Comment
Q4_K_M   9.05e9   6.7   -
Q4_K_H   8.65e9   6.8   Hybrid quant with Q6_K embedding, Q6_K output
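
The PPL numbers can be reproduced along these lines with the stock llama.cpp perplexity tool (a sketch; the test corpus file name is an assumption, since the card does not state which corpus was used):

    # hedged sketch: measure perplexity of the hybrid quant;
    # wiki.test.raw stands in for whatever corpus produced the numbers above
    ./llama-perplexity -m phi-4.Q4_K_H.gguf -f wiki.test.raw -ngl 99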

The quant was optimized for strong reasoning performance across a curated set of test prompts.

Usage:

The unique feature of phi-4 is strong reasoning achieved without RL methods in the instruct training. Instead it was pretrained on a highly curated set of (apparently) very high quality data (i.e. strong textbooks) to provide an inherently strong reasoning ability. In tests it performs extremely well on reasoning, exhibiting much higher than typical "common sense", with none of the nauseating and inefficient overthinking endemic to most RL-trained models.

The layer quants for this model were evaluated on a set of test/eval prompts using greedy sampling. The quant shows extremely good performance on reasoning problems. It is mostly useless for code prompts, however: even though it does not score terribly badly on code evals, it cannot reliably generate working code for even simple tasks.

The model can be speculatively decoded using Qwen2.5 0.5B Instruct as a draft model if the inference engine supports dynamic vocab translation between draft and target models. Approximate generation performance using a downstream speculator with llama.cpp on a 4070 (a launch sketch follows the table):

Quant    KV type   N draft   KV size   gen tps   Comment
Q4_K_H   F16       0         16k       49        No draft
Q4_K_H   F16       4         14k       72        Spec 4
Q4_K_H   Q8_0      4         16k       71        Spec 4
Q4_K_H   Q8_0      3         16k       72        Spec 3
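
As an illustration, the Spec 4 rows correspond to a launch along these lines, assuming a llama.cpp build whose speculative decoding path can translate tokens between the Qwen2.5 draft vocabulary and the phi-4 target vocabulary (the draft file name is an assumption):

    # hedged sketch: downstream speculator with a Qwen2.5 0.5B draft model;
    # requires an engine that can map draft tokens onto the phi-4 vocab
    ./llama-server -m phi-4.Q4_K_H.gguf \
        -md Qwen2.5-0.5B-Instruct.Q8_0.gguf \
        --draft-max 4 -c 16384 -ngl 99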

Benchmarks:

General model benchmarks for the model are given here: https://huggingface.co/spaces/steampunque/benchlm

Download the file below:

Link                Type     Size       Notes
phi-4.Q4_K_H.gguf   Q4_K_H   8.65e9 B   0.4e9 B smaller than Q4_K_M with much better performance
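
For example, with the standard huggingface_hub CLI:

    # fetch the hybrid quant from this repository
    huggingface-cli download steampunque/phi-4-Hybrid-GGUF \
        phi-4.Q4_K_H.gguf --local-dir .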

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
