Qwen3-4B-Element4-Eva

This is a model merge between Qwen3-4B-Element4 and FutureMa/Eva-4B.

Brainwaves of the qx86-hi quants of the parent models:

Element4     0.582,0.779,0.849,0.708,0.442,0.771,0.655
Eva-4B       0.539,0.747,0.864,0.606,0.412,0.751,0.605

Eva merged models

Agent-Eva    0.568,0.775,0.872,0.699,0.418,0.777,0.654
Element8-Eva 0.559,0.768,0.872,0.694,0.422,0.765,0.647

Element4-Eva
qx86-hi      0.567,0.781,0.868,0.689,0.426,0.773,0.642
bf16         0.570,0.781,0.869,0.689,0.422,0.769,0.645

Element4 is a merge of Qwen3-4B-Engineer3x and Qwen3-4B-Agent, and serves as the base for the higher-numbered Elements. The Agent is Heretic-abliterated, which introduces some interesting friction in the model's chains of thought that only enhances inference with some original AI humour.
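As a sketch only: a merge like this could be expressed as a mergekit-style configuration. The merge method, interpolation factor, and dtype below are illustrative assumptions, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit recipe -- method and parameters are assumptions
models:
  - model: nightmedia/Qwen3-4B-Element4
  - model: FutureMa/Eva-4B
merge_method: slerp
base_model: nightmedia/Qwen3-4B-Element4
parameters:
  t: 0.5            # interpolation factor (assumed, not the real value)
dtype: bfloat16
```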

The qx86-hi quant performs at the same level as full precision (bf16) in this model.
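To make the parity concrete, here is a minimal check over the Element4-Eva metric rows listed above, comparing qx86-hi against bf16 score by score:

```python
# Element4-Eva rows from the table above: qx86-hi vs bf16
qx86_hi = [0.567, 0.781, 0.868, 0.689, 0.426, 0.773, 0.642]
bf16    = [0.570, 0.781, 0.869, 0.689, 0.422, 0.769, 0.645]

# Per-metric gaps and the largest absolute gap between the two quants
deltas = [q - b for q, b in zip(qx86_hi, bf16)]
max_abs_delta = max(abs(d) for d in deltas)

print(f"max per-metric gap: {max_abs_delta:.3f}")  # prints 0.004
```

The largest gap on any single metric is 0.004, which supports treating the qx86-hi quant as equivalent to full precision for this model.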

The Element models are profiled to act as agents on the Star Trek DS9 station, in a roleplay scenario.

The models can be used for regular tasks as well.

Each comes with different skills. I recently found FutureMa/Eva-4B, which has an interesting model card:

Eva-4B is a 4B-parameter model for detecting evasive answers in earnings call Q&A.

In Element8-Eva, that would be Quark. Element8 is a very rich merge, with lower metrics than Agent.

As I mentioned on the Element8-Eva model card, FutureMa/Eva-4B was included simply for its conversational skills.

-G

Model size: 4B params (Safetensors, tensor type F32)