This is a merge of pre-trained language models created using mergekit.
This model was merged using the Linear DARE merge method, with aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored as the base.
Goetia 8B v1 is fully uncensored; no ablation or jailbreaks are needed. The model is very creative for its size and has quite an attitude. Here you can see the merge audit showing the exact weight distribution.
This model is recommended for those who can't run the larger Goetias, or who want a change of pace from Mistral's writing style. Raising Temperature and TopNSigma to 1.0 also seems to help.
dare_linear seems particularly effective for Llama models; it outperformed della_linear and dare_ties in my testing.
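For readers unfamiliar with the method, a DARE linear merge computes each model's task vector (finetuned weights minus base weights), randomly drops a fraction of each vector's entries, rescales the survivors, and then linearly combines the results onto the base. The sketch below illustrates the idea on plain NumPy arrays; the function name, `drop_rate`, and `weights` parameters are illustrative and not mergekit's actual API.

```python
import numpy as np

def dare_linear_merge(base, finetuned, weights, drop_rate=0.5, seed=0):
    """Illustrative DARE-linear merge for a single weight tensor.

    base      : base model tensor
    finetuned : list of finetuned tensors (same shape as base)
    weights   : per-model linear mixing weights
    drop_rate : fraction of each task vector randomly zeroed out
    """
    rng = np.random.default_rng(seed)
    merged = base.copy()
    for ft, w in zip(finetuned, weights):
        delta = ft - base                             # task vector
        mask = rng.random(delta.shape) >= drop_rate   # drop entries at random
        delta = delta * mask / (1.0 - drop_rate)      # rescale the survivors
        merged += w * delta                           # linear combination
    return merged
```

In an actual mergekit run, this per-tensor operation is applied across every tensor of every model listed in the YAML config, with the drop rate controlled by the `density` parameter.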
The following models were included in the merge:
aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
Babsie/ThetaBlackGorgon-8B
Bacon666/Athlon-8B-0.1
Naphula/Llamatron-8B-v1
DarkArtsForge/Raven-8B-v1
EldritchLabs/Cthulhu-8B-v1.4
HumanLLMs/Human-Like-LLama3-8B-Instruct
NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
OccultAI/Morpheus-8B-v3
Sao10K/L3-8B-Stheno-v3.2
SicariusSicariiStuff/Assistant_Pepe_8B
SicariusSicariiStuff/Impish_Mind_8B
TheDrummer/Anubis-Mini-8B-v1
TroyDoesAI/BlackSheep-X-Dolphin