Description
This repo contains specialized MoE quants for Step-3.5-Flash-Base-Midtrain. Because the FFN (expert) tensors dominate the model's size relative to every other tensor, quantizing them more aggressively should yield better quality at a smaller overall model size than a comparably sized naive quantization. To that end, the default quantization type is kept at high quality (Q8_0) while the FFN UP and FFN GATE tensors are quantized down, along with the FFN DOWN tensors.
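As a sketch of how such a per-tensor mixture can be produced: recent builds of llama.cpp's `llama-quantize` accept `--tensor-type` overrides on top of a default type. The patterns, filenames, and type choices below are illustrative assumptions (loosely matching the Q5_K_M row), not the exact commands used to make these files:

```shell
# Illustrative only: keep the default type at Q8_0 and override the
# FFN expert tensors individually. Assumes a llama.cpp build whose
# llama-quantize supports --tensor-type pattern overrides.
./llama-quantize \
    --tensor-type "ffn_up=q5_k" \
    --tensor-type "ffn_gate=q5_k" \
    --tensor-type "ffn_down=q6_k" \
    Step-3.5-Flash-Base-Midtrain-F16.gguf \
    Step-3.5-Flash-Base-Midtrain-Q5_K_M.gguf \
    Q8_0
```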
| Quant | Size | Mixture (default / FFN UP / FFN GATE / FFN DOWN) | PPL | Mean PPL(Q)/PPL(base) − 1 | KLD |
|---|---|---|---|---|---|
| Q5_K_M | 136.43 GiB (5.95 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 2.207801 ± 0.009234 | +0.6244% | 0.016217 ± 0.000097 |
| Q4_K_M | 113.82 GiB (4.96 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 2.251718 ± 0.009525 | +2.6260% | 0.043240 ± 0.000250 |
| IQ4_XS | 88.90 GiB (3.88 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 2.440324 ± 0.010689 | +11.2221% | 0.136298 ± 0.000718 |
| IQ3_S | 68.48 GiB (2.99 BPW) | Q8_0 / IQ2_S / IQ2_S / IQ3_S | 3.060379 ± 0.014918 | +39.4822% | 0.386923 ± 0.001795 |
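The GiB and BPW figures in the Size column are mutually consistent: bits-per-weight is total bits divided by parameter count, so every row should imply the same parameter count (roughly 197B). A quick sanity check, using only the numbers from the table above:

```python
# Each row's (size in GiB, bits per weight) implies a parameter count:
# params = size_in_bytes * 8 / bits_per_weight
rows = {
    "Q5_K_M": (136.43, 5.95),
    "Q4_K_M": (113.82, 4.96),
    "IQ4_XS": (88.90, 3.88),
    "IQ3_S":  (68.48, 2.99),
}

def implied_params(size_gib: float, bpw: float) -> float:
    return size_gib * 2**30 * 8 / bpw

estimates = {name: implied_params(s, b) for name, (s, b) in rows.items()}
for name, p in estimates.items():
    print(f"{name}: ~{p / 1e9:.1f}B params")  # all rows land near ~197B
```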
Model tree for AesSedai/Step-3.5-Flash-Base-Midtrain-GGUF
Base model: stepfun-ai/Step-3.5-Flash-Base-Midtrain
