https://huggingface.co/nightmedia/Qwen3-4B-Element8

#1688
by nightmedia - opened

Dear Team Radermacher,
I would appreciate it if you could do a quant of this model.

https://huggingface.co/nightmedia/Qwen3-4B-Element8

It's an experimental (like all the others) merge of:

  • janhq/Jan-v1-2509
  • Gen-Verse/Qwen3-4B-RA-SFT
  • TeichAI/Qwen3-4B-Instruct-2507-Polaris-Alpha-Distill
  • TeichAI/Qwen3-4B-Thinking-2507-Gemini-2.5-Flash-Distill
  • TeichAI/Qwen3-4B-Instruct-2507-Claude-Haiku-4.5-Distill
  • TeichAI/Qwen3-4B-Instruct-2507-Gemini-3-Pro-Preview-Distill
  • TeichAI/Qwen3-4B-Thinking-2507-Claude-Haiku-4.5-High-Reasoning-Distill
  • TeichAI/Qwen3-4B-Thinking-2507-MiniMax-M2.1-Distill
  • TeichAI/Qwen3-4B-Thinking-2507-MiMo-V2-Flash-Distill
  • DavidAU/Qwen3-4B-Agent-Claude-Gemini-heretic
  • DavidAU/Qwen3-4B-Apollo-V0.1-4B-Thinking-Heretic-Abliterated

Some models show through in the merge more than others. General metrics, as measured on MLX:

Brainwaves:

mxfp4    0.533,0.731,0.854,0.689,0.402,0.762,0.657
qx64-hi  0.531,0.728,0.857,0.702,0.410,0.764,0.671
qx86-hi  0.540,0.725,0.866,0.708,0.430,0.769,0.669
bf16     0.542,0.731,0.866,0.706,0.428,0.765,0.655
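To compare the rows above at a glance, here is a minimal Python sketch that averages each quant's seven scores. The seven column meanings are not named in the table, so they are treated as anonymous metrics; this only shows the overall ordering, not which metric drives it:

```python
# Per-quant metric rows copied from the table above; the seven
# column meanings are not specified in the post, so we only
# compare each quant's mean score.
scores = {
    "mxfp4":   [0.533, 0.731, 0.854, 0.689, 0.402, 0.762, 0.657],
    "qx64-hi": [0.531, 0.728, 0.857, 0.702, 0.410, 0.764, 0.671],
    "qx86-hi": [0.540, 0.725, 0.866, 0.708, 0.430, 0.769, 0.669],
    "bf16":    [0.542, 0.731, 0.866, 0.706, 0.428, 0.765, 0.655],
}

averages = {name: sum(vals) / len(vals) for name, vals in scores.items()}

# Print quants from highest to lowest average score.
for name, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:8s} {avg:.4f}")
```

On these numbers, qx86-hi averages slightly higher than bf16, with qx64-hi and mxfp4 a little behind.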

It is meant for conversational use but could do other stuff. I think :)

Thank you,
-G

It's queued!

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Qwen3-4B-Element8-GGUF for quants to appear.
