license: Modified MIT

Mixed Precision GGUF layer quantization of JoyAI-LLM-Flash by jdopensource

Original model: https://huggingface.co/jdopensource/JoyAI-LLM-Flash

The hybrid quant employs different quantization levels on a per-layer basis to allow flexible trade-offs between performance and file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. For this file the layer quants are as follows:

Q4_K_L : Q4_K_M + attn_o = q6_k
Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K

   LAYER_TYPES='[
   [0 ,"Q5_K_S"], [1 ,"Q4_K_L"], [2 ,"Q4_K_M"], [3 ,"Q4_K_S"], [4 ,"Q4_K_S"], [5 ,"Q4_K_S"], [6 ,"Q4_K_S"], [7 ,"Q4_K_S"],
   [8 ,"Q4_K_S"], [9 ,"Q4_K_S"], [10,"Q4_K_S"], [11,"Q4_K_S"], [12,"Q4_K_S"], [13,"Q4_K_S"], [14,"Q4_K_S"], [15,"Q4_K_S"],
   [16,"Q4_K_M"], [17,"Q4_K_S"], [18,"Q4_K_M"], [19,"Q4_K_S"], [20,"Q4_K_M"], [21,"Q4_K_S"], [22,"Q4_K_M"], [23,"Q4_K_S"],
   [24,"Q4_K_M"], [25,"Q4_K_M"], [26,"Q4_K_M"], [27,"Q4_K_M"], [28,"Q4_K_M"], [29,"Q4_K_M"], [30,"Q4_K_M"], [31,"Q4_K_M"],
   [32,"Q4_K_M"], [33,"Q4_K_M"], [34,"Q4_K_M"], [35,"Q4_K_L"], [36,"Q5_K_S"], [37,"Q5_K_M"], [38,"Q5_K_L"], [39,"Q6_K_S"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"

The layer quant profile was taken from Qwen3 Coder Next, mapped onto 40 layers, and found to work very well across a small set of test prompts. The quant is sized for operation on machines with 32G of CPU RAM and one consumer-grade GPU for context.
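Since the LAYER_TYPES recipe is plain JSON, the layer mix is easy to tally. The sketch below reproduces the Q4_K_H assignment above and counts layers per quant type (an illustrative script, not part of any quantization tooling):

```python
from collections import Counter

# Per-layer quant types, layers 0-39, copied from the Q4_K_H recipe above
quants = (
    ["Q5_K_S", "Q4_K_L", "Q4_K_M"]                         # layers 0-2
    + ["Q4_K_S"] * 13                                      # layers 3-15
    + ["Q4_K_M", "Q4_K_S"] * 4                             # layers 16-23, alternating
    + ["Q4_K_M"] * 11                                      # layers 24-34
    + ["Q4_K_L", "Q5_K_S", "Q5_K_M", "Q5_K_L", "Q6_K_S"]   # layers 35-39
)
counts = Counter(quants)
print(dict(counts))  # bulk of the deep layers sit at Q4_K_S / Q4_K_M
```

The tally makes the profile shape visible: the middle layers carry the smallest quants while the first and last few layers get extra bits.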

A second larger Q6_K_H quant sized at ~Q6_K bpw is also available:

Q5_K_L : attn_v = q8_0 attn_o = q6_k ffn_d = q6_k
Q6_K_S : Q6_K
Q6_K_M : attn_v = q8_0 ffn_d = q8_0
Q6_K_L : attn_v = q8_0 attn_o = q8_0 ffn_d = q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_S"], [1 ,"Q5_K_L"], [2 ,"Q5_K_L"], [3 ,"Q5_K_M"], [4 ,"Q5_K_M"], [5 ,"Q5_K_M"], [6 ,"Q5_K_M"], [7 ,"Q5_K_M"],
   [8 ,"Q5_K_M"], [9 ,"Q5_K_M"], [10,"Q5_K_M"], [11,"Q5_K_M"], [12,"Q5_K_L"], [13,"Q5_K_M"], [14,"Q5_K_L"], [15,"Q5_K_M"],
   [16,"Q5_K_L"], [17,"Q5_K_L"], [18,"Q5_K_L"], [19,"Q5_K_L"], [20,"Q6_K_S"], [21,"Q5_K_L"], [22,"Q6_K_S"], [23,"Q5_K_L"],
   [24,"Q6_K_S"], [25,"Q6_K_S"], [26,"Q6_K_S"], [27,"Q6_K_S"], [28,"Q6_K_S"], [29,"Q6_K_S"], [30,"Q6_K_S"], [31,"Q6_K_S"],
   [32,"Q6_K_M"], [33,"Q6_K_M"], [34,"Q6_K_M"], [35,"Q6_K_M"], [36,"Q6_K_L"], [37,"Q6_K_L"], [38,"Q6_K_L"], [39,"Q6_K_L"]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"

This quant will run on ~40G RAM machines and should be close to lossless, with a minimum of Q5_K_M across layers. It was optimized for near-flawless performance across a small set of reasoning-based test prompts.

Both the Q4_K_H and Q6_K_H quants were tested across code prompts and found to be capable of generating working code from non-trivial test prompts.

Comparison:

| Quant  | Size (bytes) | PPL | Comment                  |
|--------|--------------|-----|--------------------------|
| Q4_K_M | 29.7e9       | 38  | default embed and output |
| Q4_K_H | 29.7e9       | 38  | Q6_K embed, Q6_K output  |
| Q6_K   | 40.2e9       | 33  | default embed and output |
| Q6_K_H | 38.1e9       | 34  | Q6_K embed, Q6_K output  |
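The effective bits per weight implied by these sizes follow directly from file size and the 49B parameter count. A rough sketch (it ignores the small metadata overhead in a GGUF file):

```python
def bpw(size_bytes: float, n_params: float = 49e9) -> float:
    """Effective bits per weight of a quantized file."""
    return size_bytes * 8 / n_params

# Sizes taken from the comparison table above
for name, size in [("Q4_K_H", 29.7e9), ("Q6_K", 40.2e9), ("Q6_K_H", 38.1e9)]:
    print(name, round(bpw(size), 2))
```

This confirms the sizing claims: Q4_K_H lands near Q4_K_M bpw and Q6_K_H sits just under Q6_K bpw.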

Usage:

This is a ~50B parameter RL-optimized MoE model with 3B activated parameters. It does not appear to use explicit reasoning blocks but can still reason quite well over a small set of test prompts. It does not overthink and gave solid reasoning and accuracy on a couple of tricky prompts, but as with any LLM it does not possess actual emergent scaled intelligence and falls flat on some fairly simple IQ-test prompts (as most other LLMs do, independent of size). In my experience working with LLMs, this model looks extremely good, however, and its architecture eliminates reliance on high-VRAM GPUs: with only 3B active parameters, CPU RAM can handle it well while the model remains very strong.

Prompt format:

There is no documentation on how to control the think block for RL. The following experimental prompts were found to work:

For no reasoning block, use this assistant prompt (as defined in the jinja for the model):

CHAT_ASSISTANT="<|Assistant|><|end_of_thought|>"

To generate a reasoning block, use this assistant prompt:

CHAT_ASSISTANT="<|Assistant|><|begin_of_thought|>"

The model will create a think block and terminate it with "\n<|end_of_thought|>\n", then follow with a distilled answer based on the think block.

When using the think block the model will overthink, like most other RL models do, but this may improve the accuracy of the generation. Further testing shows the model often gets hung up talking to itself ad infinitum when using the think block, which may explain why enabling it is not defined in the chat template jinja. The model appears to be quite capable without the think block, however.
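A minimal sketch of working with the two modes in Python, using only the special tokens quoted above (the helper names are illustrative, not part of any model API):

```python
# Assistant prefixes from the experiments above: appending
# <|end_of_thought|> suppresses the think block, while
# <|begin_of_thought|> triggers one.
def assistant_prefix(think: bool) -> str:
    return ("<|Assistant|><|begin_of_thought|>" if think
            else "<|Assistant|><|end_of_thought|>")

def strip_think(text: str) -> str:
    """Drop the think block from a generation, keeping the distilled answer.

    Per the notes above, the model terminates its think block with
    <|end_of_thought|> and then emits the final answer."""
    marker = "<|end_of_thought|>"
    return text.split(marker, 1)[-1].lstrip("\n") if marker in text else text
```

Such a post-processing step is only needed in think mode; in no-think mode the output contains no block to strip.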

Running:

The model can be efficiently run by offloading expert tensors to CPU via -ot exps=CPU to open up very large context space on even low-VRAM GPUs. The smaller size of the optimally quantized parameters gives an effective boost in CPU processing speed by reducing the memory bandwidth needed to repeatedly copy them from main memory into SIMD registers.
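Since token generation on CPU is memory-bandwidth bound, a back-of-envelope upper bound on the generation rate is bandwidth divided by expert bytes read per token. A rough sketch (the 40 GB/s effective-bandwidth figure is an assumption for a dual-channel desktop platform, not a measurement):

```python
# Expert bytes read per token: 3B active params at the Q4_K_H
# file's overall bytes-per-parameter ratio (29.7e9 B / 49e9 params).
total_bytes, total_params, active_params = 29.7e9, 49e9, 3e9
bytes_per_token = total_bytes * active_params / total_params

mem_bw = 40e9  # assumed effective CPU memory bandwidth, bytes/s
est_rate = mem_bw / bytes_per_token  # tokens/s, rough upper bound
print(round(est_rate, 1))
```

The estimate lands roughly in line with the measured full-offload Q4_K_H rate below; the observed rate can exceed it slightly because attention tensors stay on the GPU, so somewhat fewer bytes cross the CPU memory bus per token.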

The Q4_K_H model was sized at ~30G and should run on a 32G RAM machine with room left over for a browser. Similar to Qwen3 Coder Next, it appears necessary to offload all experts to CPU; if some expert layers are left on the GPU, the generation rate drops significantly. The reason for this behavior is not currently known.

Rough performance metrics on a 9900K (128G RAM) and 4070 (12G VRAM):

   CPU EXP OFFLOAD   Quant    QKV   Context   gen rate (t/s)   OT config
   all               Q4_K_H   F16   128K      24.1             OT="-ot exps=CPU -ngl 99"
   7-39              Q4_K_H   F16   128K      16.8             OT="-ot blk\.[7-9]|1[0-9]|2[0-9]|3[0-9].*exps=CPU -ngl 99"
   all               Q6_K_H   F16   128K      18.3             OT="-ot exps=CPU -ngl 99"
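The partial-offload pattern in the 7-39 row can be sanity-checked with Python's re module (a sketch over representative llama.cpp expert-tensor names; -ot matches tensor names by regex search):

```python
import re

# -ot pattern from the 7-39 offload row above
pattern = r"blk\.[7-9]|1[0-9]|2[0-9]|3[0-9].*exps"

# Representative expert tensor names for all 40 layers
names = [f"blk.{i}.ffn_gate_exps.weight" for i in range(40)]
on_cpu = [n for n in names if re.search(pattern, n)]

# Layers 0-6 stay on GPU; layers 7-39 match and go to CPU
print(len(on_cpu))
```

Note the alternation is unanchored, so a branch like 1[0-9] matches a digit pair anywhere in a tensor name; grouping it, e.g. blk\.([7-9]|[123][0-9])\..*exps, may be a safer equivalent.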

High-context performance appears to work, verified against a needle-in-a-haystack prompt. However, prompt processing on large contexts is quite slow.

Generation quirks:

Rarely, the model mixes languages in its output (this example was generated with Q4_K_H):

a leap of faith that逃避s the tension.

Most likely this effect is baked in from the pretraining on the two dominant languages used. The model itself can be used to unify the language of the output:

lm "translate the following mixed language phrase to english: a leap of faith that逃避s the tension."
A leap of faith that **evades** the tension.

Benchmarks:

Evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.

Download the files below:

| Link                        | Type   | Size (e9 B) | Notes        |
|-----------------------------|--------|-------------|--------------|
| JoyAI-LLM-Flash.Q4_K_H.gguf | Q4_K_H | 29.7        | ~Q4_K_M size |
| JoyAI-LLM-Flash.Q6_K_H.gguf | Q6_K_H | 38.1        | ~Q6_K size   |

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040

Model size: 49B params. Architecture: deepseek2.