Qwen3-4B-Element8-Eva

This is a model merge between Element8 and FutureMa/Eva-4B.
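The exact merge recipe isn't published here. As a minimal sketch, a two-parent merge like this one is commonly expressed as a mergekit config; the merge method, weights, and the Element8 repo id below are assumptions, not the recipe actually used:

```yaml
# Hypothetical mergekit config -- method, weights, and Element8 path are assumed.
models:
  - model: nightmedia/Qwen3-4B-Element8   # assumed repo id for the Element8 parent
    parameters:
      weight: 0.5
  - model: FutureMa/Eva-4B
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
```

With mergekit installed, a config like this is run with `mergekit-yaml config.yml ./output-dir`.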

"Brainwave" benchmark scores of the qx86-hi quants of the parent models, and of Qwen3-4B-Element8-Eva:

Element8     0.540,0.725,0.866,0.708,0.430,0.769,0.669
Eva-4B       0.539,0.747,0.864,0.606,0.412,0.751,0.605

Qwen3-4B-Element8-Eva
bf16         0.561,0.769,0.873,0.692,0.420,0.766,0.651
qx86-hi      0.559,0.768,0.872,0.694,0.422,0.765,0.647

I also made a Qwen3-4B-Element4-Eva, just because it sounds cool; it will be uploaded separately :)

Element4-Eva 0.567,0.781,0.868,0.689,0.426,0.773,0.642
Element4     0.582,0.779,0.849,0.708,0.442,0.771,0.655

The qx86-hi quant performs at the same level as full precision in this model.
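That claim can be checked directly against the two rows in the table above; a minimal sketch comparing them per metric:

```python
# Scores copied from the bf16 and qx86-hi rows of the table above.
bf16 = [0.561, 0.769, 0.873, 0.692, 0.420, 0.766, 0.651]
qx86_hi = [0.559, 0.768, 0.872, 0.694, 0.422, 0.765, 0.647]

# Per-metric difference, rounded to the table's three-decimal precision.
deltas = [round(b - q, 3) for b, q in zip(bf16, qx86_hi)]
print(deltas)
print(max(abs(d) for d in deltas))  # largest gap across all seven metrics
```

The largest per-metric gap is 0.004, i.e. within rounding noise of full precision.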

The Element models are profiled to act as agents on the Star Trek DS9 station in a roleplay scenario.

The models can be used for regular tasks as well.

Each comes with different skills. I recently found FutureMa/Eva-4B, which has an interesting model card:

Eva-4B is a 4B-parameter model for detecting evasive answers in earnings call Q&A.
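As a hedged sketch of that use case, here is one way to frame an earnings-call Q&A pair as an evasiveness check; the prompt wording is an assumption, not the published Eva-4B prompt format:

```python
# Hypothetical sketch: framing an earnings-call Q&A pair for an
# evasiveness classification. The prompt wording is assumed, not
# the documented Eva-4B usage.
def build_eva_prompt(question: str, answer: str) -> str:
    """Frame a Q&A pair as an EVASIVE/DIRECT classification task."""
    return (
        "Below is a question from an earnings call and the executive's answer.\n"
        "Classify the answer as EVASIVE or DIRECT, and explain briefly.\n\n"
        f"Question: {question}\nAnswer: {answer}\n\nClassification:"
    )

prompt = build_eva_prompt(
    "What is your guidance for next quarter?",
    "We remain focused on long-term value creation for all stakeholders.",
)
print(prompt)

# The prompt would then be fed to the merged model, e.g. with mlx_lm:
#   from mlx_lm import load, generate
#   model, tokenizer = load("nightmedia/Qwen3-4B-Element8-Eva")
#   print(generate(model, tokenizer, prompt=prompt, max_tokens=100))
```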

Perfect. That would be Quark.

-G

P.S. I have no idea if it still does the FutureMa/Eva thing. It adds color to the conversation, and this is the only reason this model exists. Because it's fun. Like Quark.
