Qwen3-4B-Agent-Eva-qx86-hi-mlx

This is a model merge between Qwen3-4B-Agent and FutureMa/Eva-4B.

Brainwaves of the qx86-hi quants of the parent models, and of Qwen3-4B-Agent-Eva:

Agent     0.603,0.817,0.838,0.743,0.426,0.780,0.708
Eva-4B    0.539,0.747,0.864,0.606,0.412,0.751,0.605

Qwen3-4B-Agent-Eva
bf16      0.565,0.779,0.872,0.700,0.418,0.776,0.653
qx86-hi   0.568,0.775,0.872,0.699,0.418,0.777,0.654

The Agent base is abliterated and contains only the essential component models needed to top 0.6/0.8 on ARC; this level of performance is usually found only in much larger models, if at all. Agent has an unforgiving nature, and here, merged with Eva, it will look for accountability :)

In this model the qx86-hi quant performs at the same level as full precision (bf16), and below the Agent base, because Eva came with lower metrics to begin with.

The Element models are profiled to act as agents on the Star Trek DS9 station, in a roleplay scenario.

The models can be used for regular tasks as well.
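A minimal usage sketch with mlx-lm, assuming Apple Silicon and that the qx86-hi quant is published as nightmedia/Qwen3-4B-Agent-Eva-qx86-hi-mlx; the prompt is an illustrative example, not part of the model card:

```python
# Minimal sketch; requires `pip install mlx-lm` on Apple Silicon.
from mlx_lm import load, generate

# Load the qx86-hi quant of the merge (repo id assumed from the model name).
model, tokenizer = load("nightmedia/Qwen3-4B-Agent-Eva-qx86-hi-mlx")

# Illustrative prompt playing to the Eva side of the merge.
prompt = "Review this earnings call answer and flag anything evasive: ..."

# Apply the chat template when the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```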

Each comes with different skills. I recently found FutureMa/Eva-4B, which has an interesting model card:

Eva-4B is a 4B-parameter model for detecting evasive answers in earnings call Q&A.

In Element8-Eva, that would be Quark. Element8 is a very rich merge, with lower metrics than Agent.

As I mentioned on the Element8-Eva model card, FutureMa/Eva-4B was included simply for its conversational skills.

Without them, who do you get? Worf?

-G
