ik_llama.cpp imatrix Quantizations of stepfun-ai/Step-3.5-Flash

NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
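
For example, a quick test of a GGUF you already have on disk looks something like this (the path is illustrative):

./build/bin/llama-cli -m /path/to/your/existing-model.gguf -p "Hello" -n 32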

Some of ik's new quant types are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which provides Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been built against CUDA 12.8.

These quants provide best in class perplexity for the given memory footprint.

Big Thanks

Shout out to Wendell and the Level1Techs crew, the community Forums, and YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!

Quant Collection

Perplexity computed against wiki.test.raw (lower is "better").

Perplexity Chart

These two are just test quants for baseline perplexity comparison and are not available for download here:

  • BF16 366.952 GiB (16.004 BPW)
    • PPL over 561 chunks for n_ctx=512 = 2.4169 +/- 0.01107
  • Q8_0 195.031 GiB (8.506 BPW)
    • PPL over 561 chunks for n_ctx=512 = 2.4188 +/- 0.01109
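
The PPL values here were measured with llama-perplexity, roughly like this (a sketch; set $model to the quant you want to test):

./build/bin/llama-perplexity \
    -m "$model" \
    -f wiki.test.raw \
    --ctx-size 512 \
    --threads 96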

NOTE: The first split file is much smaller on purpose since it only contains metadata; it's fine! Point --model at this first split and the remaining shards load automatically.

IQ5_K 136.891 GiB (5.970 BPW)

PPL over 561 chunks for n_ctx=512 = 2.4304 +/- 0.01117

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=q8_0
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"

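# Strip the comment lines above and join the remaining rules into the single
# comma-separated "regex=type" list that --custom-q expects, e.g.
# blk\..*\.attn_gate.*=q8_0,blk\..*\.attn_q.*=q8_0,...,output\.weight=q8_0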
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-IQ5_K.gguf \
    IQ5_K \
    128

IQ4_XS 100.53 GiB (4.38 BPW)

PPL over 561 chunks for n_ctx=512 = 2.5181 +/- 0.01178

NOTE: This mainline-compatible quant does not use an imatrix.

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=q8_0
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq4_xs
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_xs

# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q6_K
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-IQ4_XS.gguf \
    IQ4_XS \
    128

smol-IQ3_KS 75.934 GiB (3.312 BPW)

PPL over 561 chunks for n_ctx=512 = 2.7856 +/- 0.01365

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=iq6_k
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-smol-IQ3_KS.gguf \
    IQ3_KS \
    128

smol-IQ2_KS 53.786 GiB (2.346 BPW)

PPL over 561 chunks for n_ctx=512 = 4.2597 +/- 0.02425

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=iq6_k
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-smol-IQ2_KS.gguf \
    IQ2_KS \
    128

Quick Start

# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
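
# Optional: CPU-only build (a sketch; just leave CUDA off)
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=OFF
$ cmake --build build --config Release -j $(nproc)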

## https://github.com/ikawrakow/ik_llama.cpp/pull/1236
## https://github.com/ikawrakow/ik_llama.cpp/pull/1231
## https://github.com/ikawrakow/ik_llama.cpp/pull/1239
## https://github.com/ikawrakow/ik_llama.cpp/pull/1240
echo TODO

# CPU-only Mainline llama.cpp Example
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model"\
    --alias ubergarm/Step-3.5-Flash \
    --ctx-size 65536 \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
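
If you have a CUDA GPU, a hybrid launch keeps the attention, dense, and shared-expert tensors on the GPU while overriding the routed experts onto CPU, matching the recipe layout above. A rough sketch (adjust -ot, threads, and batch sizes for your rig, and see the PRs linked above for model support details):

# Hybrid CPU+CUDA ik_llama.cpp Example (sketch)
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Step-3.5-Flash \
    --ctx-size 65536 \
    -ctk q8_0 -ctv q8_0 \
    -fa -fmoe \
    -ngl 99 \
    -ot exps=CPU \
    -ub 4096 -b 4096 \
    --threads 24 \
    --host 127.0.0.1 \
    --port 8080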

For tool use you can always bring your own template with --chat-template-file myTemplate.jinja and might need --special etc. The chat template baked into these GGUFs comes from the original model.

Check Discussion 1 for a tested, working chat template for tool use, thanks to mindkrypted!

Another option for mainline tool calling users is to check out pwilkin's autoparser branch.
