https://huggingface.co/nightmedia/Qwen3.6-35B-A3B-Holo3-Qwopus-bf16
#2233
by nightmedia - opened
Dear Team Radermacher,
Could you please quant this model?
https://huggingface.co/nightmedia/Qwen3.6-35B-A3B-Holo3-Qwopus-bf16
This is a merge of the following models:
- Qwen/Qwen3.6-35B-A3B
- samuelcardillo/Qwopus-MoE-35B-A3B
- Hcompany/Holo3-35B-A3B
Brainwaves
| quant | arc | arc/e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| bf16 | 0.432 | 0.477 | 0.702 | 0.695 | 0.386 | 0.787 | 0.711 |
| qx64-hi | 0.425 | 0.481 | 0.766 | 0.696 | 0.390 | 0.782 | 0.706 |
| mxfp4 | 0.425 | 0.489 | 0.391 | 0.697 | 0.378 | 0.784 | 0.708 |
Instruct

| quant | arc | arc/e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| mxfp8 | 0.608 | 0.770 | 0.897 | 0.761 | 0.430 | 0.814 | 0.707 |
| qx64-hi | 0.607 | 0.776 | 0.898 | | | | |
| mxfp4 | 0.602 | 0.779 | 0.894 | | | | |
| Quant | Perplexity | Peak memory | Tokens/sec |
|---|---|---|---|
| bf16 | 4.217 ± 0.027 | 76.15 GB | 1642 |
| qx64-hi | 4.231 ± 0.028 | 36.83 GB | 1573 |
| mxfp4 | 4.522 ± 0.030 | 25.33 GB | 1609 |
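To make the tradeoff above easier to read, here is a small sketch that reduces the table to numbers relative to the bf16 baseline (the values are copied verbatim from the table; the script is just illustrative arithmetic, not part of the benchmark harness):

```python
# Instruct-mode results from the table above: perplexity, peak memory (GB), tokens/sec.
rows = {
    "bf16":    {"ppl": 4.217, "mem_gb": 76.15, "tps": 1642},
    "qx64-hi": {"ppl": 4.231, "mem_gb": 36.83, "tps": 1573},
    "mxfp4":   {"ppl": 4.522, "mem_gb": 25.33, "tps": 1609},
}

base = rows["bf16"]
for name, r in rows.items():
    # Perplexity regression relative to bf16, and memory as a fraction of bf16.
    ppl_delta = 100 * (r["ppl"] - base["ppl"]) / base["ppl"]
    mem_ratio = 100 * r["mem_gb"] / base["mem_gb"]
    print(f"{name:8s} perplexity {ppl_delta:+.2f}%  memory {mem_ratio:.0f}% of bf16")
```

Roughly: qx64-hi costs about a third of a percent in perplexity for less than half the memory, while mxfp4 trades a ~7% perplexity hit for a third of the footprint.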
Component metrics (all mxfp8)

| model | arc | arc/e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| Qwen3.6-35B-A3B-Holo3-Instruct | 0.606 | 0.771 | 0.897 | 0.762 | 0.426 | 0.811 | 0.709 |
| Qwen3.6-35B-A3B-Qwopus-Instruct | 0.601 | 0.754 | 0.894 | 0.761 | 0.430 | 0.810 | 0.704 |
| Qwen3.6-35B-A3B-Instruct | 0.581 | 0.757 | 0.892 | 0.751 | 0.428 | 0.803 | 0.688 |
Thinking

| quant | arc | arc/e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| qx86-hi | 0.427 | 0.465 | 0.759 | 0.689 | 0.392 | 0.778 | 0.691 |
| qx64-hi | 0.433 | 0.476 | 0.708 | 0.693 | 0.384 | 0.778 | 0.704 |
| qx64 | 0.425 | 0.474 | 0.590 | 0.690 | 0.390 | 0.781 | 0.700 |
| Quant | Perplexity | Peak memory | Tokens/sec |
|---|---|---|---|
| mxfp8 | 5.138 ± 0.037 | 42.65 GB | 1201 |
| mxfp4 | 5.158 ± 0.037 | 25.33 GB | 1355 |
| qx86-hi | 4.826 ± 0.033 | 45.50 GB | 1474 |
| qx64-hi | 4.710 ± 0.032 | 36.83 GB | 1414 |
| qx64 | 4.702 ± 0.032 | 30.69 GB | 1366 |
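Sorting the Thinking-mode table by perplexity (numbers copied verbatim from the table above; this is just a reading aid, not part of the evaluation) shows qx64 comes out best on perplexity while also having the second-smallest footprint:

```python
# Thinking-mode quants from the table above: (perplexity, peak memory GB, tokens/sec).
thinking = {
    "mxfp8":   (5.138, 42.65, 1201),
    "mxfp4":   (5.158, 25.33, 1355),
    "qx86-hi": (4.826, 45.50, 1474),
    "qx64-hi": (4.710, 36.83, 1414),
    "qx64":    (4.702, 30.69, 1366),
}

# Rank quants from lowest (best) to highest perplexity.
ranked = sorted(thinking.items(), key=lambda kv: kv[1][0])
for name, (ppl, mem, tps) in ranked:
    print(f"{name:8s} ppl {ppl:.3f}  {mem:.2f} GB  {tps} tok/s")
```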
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Qwen3.6-35B-A3B-Holo3-Qwopus-bf16-GGUF for quants to appear.
Thank you Richard, that was quick 😀
I was just queuing it myself, and then you appeared =)