⚠️ Warning: This model can produce narratives and RP containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Llama 3 chat template.

Goetia 8B v1

Goetia Grimoire

🐙 The Lesser Key

This is a merge of pre-trained language models created using mergekit.

This model was merged using the Linear DARE merge method using aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored as a base.

Goetia 8B v1 is fully uncensored; no ablation or jailbreaks are needed. The model is very creative for its size, and has quite an attitude. Here you can see the merge audit showing the exact weight distribution.

This model is recommended for those who can't run the larger Goetias, or who want a change of pace from Mistral's writing style. Raising Temp and TopNSigma to 1.0 also seems to help.
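TopNSigma (top-nσ) sampling keeps only tokens whose logit falls within n standard deviations of the maximum logit, which filters out noise tokens even at high temperatures. A minimal numpy sketch of the rule (an illustration of the technique, not any particular backend's implementation):

```python
import numpy as np

def top_n_sigma_filter(logits, n=1.0, temperature=1.0):
    """Keep only tokens whose logit is within n standard deviations
    of the max logit, then renormalize into a probability distribution."""
    logits = np.asarray(logits, dtype=float) / temperature
    threshold = logits.max() - n * logits.std()
    # Tokens below the threshold get probability zero.
    masked = np.where(logits >= threshold, logits, -np.inf)
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()
```

With n=1.0 and a sharply peaked distribution, only tokens near the top logit survive, so higher temperatures redistribute mass among plausible candidates instead of the whole vocabulary.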

dare_linear seems quite effective for Llama models in particular. It outperformed della_linear and dare_ties in my testing.
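For readers curious what dare_linear does, the DARE step sparsifies each model's delta from the base (dropping entries with probability 1 − density, rescaling survivors by 1/density) before taking a weighted linear sum. A minimal numpy sketch of that idea, assuming mergekit's dare_linear follows the DARE paper (this is not mergekit's actual code):

```python
import numpy as np

def dare_delta(theta_ft, theta_base, density, rng):
    """Drop-And-REscale: keep each delta entry with probability `density`
    and rescale survivors by 1/density, so the expected delta is unchanged."""
    delta = theta_ft - theta_base
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_linear(theta_base, finetunes, weights, density, lam=1.0, seed=0):
    """Add a weighted linear sum of DARE-sparsified task vectors to the base,
    scaled by lambda (cf. the `lambda` parameter in the config below)."""
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(theta_base)
    for theta_ft, w in zip(finetunes, weights):
        merged_delta += w * dare_delta(theta_ft, theta_base, density, rng)
    return theta_base + lam * merged_delta
```

At density 0.8 roughly 20% of each task vector is dropped per merge, which tends to reduce interference between the many donor models.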

The following models were included in the merge:

  • aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored

  • Babsie/ThetaBlackGorgon-8B

  • Bacon666/Athlon-8B-0.1

  • Naphula/Llamatron-8B-v1

  • DarkArtsForge/Raven-8B-v1

  • EldritchLabs/Cthulhu-8B-v1.4

  • HumanLLMs/Human-Like-LLama3-8B-Instruct

  • NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS

  • OccultAI/Morpheus-8B-v3

  • Sao10K/L3-8B-Stheno-v3.2

  • SicariusSicariiStuff/Assistant_Pepe_8B

  • SicariusSicariiStuff/Impish_Mind_8B

  • TheDrummer/Anubis-Mini-8B-v1

  • TroyDoesAI/BlackSheep-X-Dolphin

🧙 OccultAI Sigil Magic

Configuration

The following YAML configuration was used to produce this model:

architecture: LlamaForCausalLM
models:
  - model: A:\LLM\.cache\8B\!models--aifeifei798--DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
  - model: A:\LLM\.cache\8B\!models--SicariusSicariiStuff--Assistant_Pepe_8B
    parameters:
      weight: 0.4
      density: 0.8
  - model: B:\8B\Morpheus_v3_prototype_526
    parameters:
      weight: 0.4
      density: 0.8
  - model: A:\LLM\.cache\8B\Naphula_Llamatron-8B-v1
    parameters:
      weight: 0.25
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--SicariusSicariiStuff--Impish_Mind_8B
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Sao10K--L3-8B-Stheno-v3.2
    parameters:
      weight: 0.1
      density: 0.8
  - model: B:\8B\Cthulhu_v1.4
    parameters:
      weight: 0.4
      density: 0.8
  - model: B:\8B\models--TheDrummer--Anubis-Mini-8B-v1
    parameters:
      weight: 0.2
      density: 0.8
  - model: B:\8B\Raven_v1
    parameters:
      weight: 0.4
      density: 0.8    
  - model: B:\8B\models--HumanLLMs--Human-Like-LLama3-8B-Instruct
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--NeverSleep--Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--TroyDoesAI--BlackSheep-X-Dolphin
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Bacon666--Athlon-8B-0.1
    parameters:
      weight: 0.1
      density: 0.8
  - model: A:\LLM\.cache\8B\!models--Babsie--ThetaBlackGorgon-8B
    parameters:
      weight: 0.1
      density: 0.8
merge_method: dare_linear
base_model: A:\LLM\.cache\8B\!models--aifeifei798--DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
  rescale: true
tokenizer:
  source: union
chat_template: auto
dtype: float32
out_dtype: bfloat16
name: 📜 Goetia-8B-v1

🕯️ Summon the Infernal — Invocation Ritual
