super-gemopus-4-e4b-trimera

SLERP merge of two Gemma 4 E4B models, combining strong general reasoning and abliteration with agentic/tool-use capabilities.

Merge recipe

Method: SLERP (Spherical Linear Interpolation) via mergekit
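As a sketch, a mergekit SLERP config matching the 71/29 split could look like the following. The repository names and layer range below are placeholders, not the actual config used for this merge:

```yaml
# Hypothetical mergekit config; model names and layer_range are assumptions.
slices:
  - sources:
      - model: emanubiz/SuperGemopus   # assumed repo name (71% side)
        layer_range: [0, 35]           # placeholder layer count
      - model: emanubiz/deadbydawn     # assumed repo name (29% side)
        layer_range: [0, 35]
merge_method: slerp
base_model: emanubiz/SuperGemopus
parameters:
  t: 0.29          # interpolation weight toward deadbydawn
dtype: bfloat16
```

A config like this would be run with mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yaml ./merged --low-cpu-memory` (the flag noted under Hardware below).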

What each model brings

SuperGemopus (71%) is itself a merge of:

  • 60% Jackrong/Gemopus-4-E4B-it – Gemma 4 E4B fine-tuned with human preference alignment (natural tone, deep contextual awareness)
  • 40% SuperGemma abliterated – Gemma 4 E4B with refusal removal

deadbydawn (29%) is Gemma 4 E4B with:

  • Opus 4.6 reasoning LoRA fused into weights
  • Claude Code tool-use patterns via SFT
  • <think> tag reasoning baked in
  • Agentic/function-calling behavior

Result

A Gemma 4 E4B model with strong reasoning, abliterated refusals, and enhanced agentic/tool-calling behavior. No adapter needed.
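For intuition, SLERP interpolates each pair of weight tensors along the arc between their directions rather than along a straight line, which tends to preserve the magnitude structure of both parents. A minimal NumPy sketch of the per-tensor operation (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors,
    treated as flat vectors. t=0 returns a, t=1 returns b."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two tensors' directions
    a_n = a_flat / (np.linalg.norm(a_flat) + eps)
    b_n = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    out = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# t = 0.29 pulls each tensor 29% of the way toward the second parent;
# parallel tensors fall back to linear interpolation, so every entry is 1.29 here.
merged = slerp(0.29, np.ones((2, 2)), np.full((2, 2), 2.0))
```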

Hardware

Merged on an Apple Mac Mini M4 (16 GB) using mergekit with --low-cpu-memory. Output: BF16 safetensors, 7 shards, ~29 GB.
