⚠️ Warning: This model can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly, and use the Mistral NonTekken chat template.

Cthulhu 7B v1.4

A fully uncensored finetune of Mistral 7B v0.1, trained on a small dataset of Cthulhu/Goetia lore. Cooked for 3 epochs using PMPF. Final training log:

{'loss': 0.1916, 'grad_norm': 5.721400737762451, 'learning_rate': 3.803421678562213e-05, 'entropy': 0.44280096888542175, 'num_tokens': 51966.0, 'mean_token_accuracy': 0.942307710647583, 'epoch': 3.0}

Uses Mistral NonTekken chat template.
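The card doesn't spell the template out, so here is a minimal sketch of the classic Mistral v0.1 `[INST]` format (i.e. the non-Tekken template). The exact whitespace handling and the system-prompt folding are assumptions — the `tokenizer_config.json` in the repo is the authoritative Jinja template.

```python
def format_mistral_chat(messages, system=""):
    """Render a chat into the classic Mistral [INST] format.

    messages: list of {"role": "user"|"assistant", "content": str}.
    The system prompt is folded into the first user turn, which is how
    Mistral-style templates commonly handle it (assumption).
    """
    out = "<s>"
    first_user = True
    for msg in messages:
        if msg["role"] == "user":
            content = msg["content"]
            if first_user and system:
                content = system + "\n\n" + content
            out += f"[INST] {content} [/INST]"
            first_user = False
        elif msg["role"] == "assistant":
            out += f" {msg['content']}</s>"
    return out
```

Example: `format_mistral_chat([{"role": "user", "content": "Who dreams in R'lyeh?"}], system="You are a keeper of forbidden lore.")` yields a single `<s>[INST] ... [/INST]` prompt ready for completion.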

| Model           | Q0 Score | Quant | Q0G  | Refusals |
| Cthulhu 7B v1.4 | 8501     | Q6_K  | Pass | 0/100    |
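The Refusals column reports 0 refusals across 100 prompts. A naive way to count refusals over a batch of generations is simple phrase matching — note the phrase list and the "check the opening of the reply" heuristic below are illustrative assumptions, not the actual Q0G methodology:

```python
# Hypothetical refusal counter: flags a generation as a refusal if it
# opens with common boilerplate apology/denial phrases. The marker list
# is an illustrative assumption, not the Q0G benchmark's criteria.
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "i am sorry",
    "as an ai", "i won't", "i will not",
)

def count_refusals(generations):
    """Return how many generations look like refusals."""
    refused = 0
    for text in generations:
        head = text.strip().lower()[:80]  # refusals usually open the reply
        if any(marker in head for marker in REFUSAL_MARKERS):
            refused += 1
    return refused
```

Running this over 100 sampled completions and reporting `count_refusals(outputs)/100` gives a score in the same shape as the table's 0/100.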

CthulhuShip

Secret Sauce Settings (RTX 3060 Ti)
MAX_SEQ_LENGTH = 768
LORA_R = 16
LORA_ALPHA = 32
NUM_EPOCHS = 3
LEARNING_RATE = 1e-4
optim="paged_adamw_8bit",
max_grad_norm=0.3,
warmup_ratio=0.03,
lr_scheduler_type="cosine",
lora_dropout=0.05,
target_modules=[
    "q_proj", "k_proj", "v_proj", "o_proj",
    "gate_proj", "up_proj", "down_proj",
],
# --- SAVE STRATEGY PATCH ---
# save_strategy="steps",  # Use steps for large datasets
# save_steps=100,         # Save every 100 steps
save_strategy="epoch",    # HOTSWAP: swap with the two lines above for large datasets
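Pieced together, these settings map onto a QLoRA-style PEFT run roughly as follows. The output path, batch size, and gradient accumulation are assumptions filled in around the card's listed hyperparameters; only the values shown above are from the card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

MAX_SEQ_LENGTH = 768
LORA_R = 16
LORA_ALPHA = 32
NUM_EPOCHS = 3
LEARNING_RATE = 1e-4

# LoRA adapters over all attention + MLP projections, per the card.
peft_config = LoraConfig(
    r=LORA_R,
    lora_alpha=LORA_ALPHA,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

# Trainer arguments from the card; batch settings are assumptions
# sized for the 8 GB VRAM of a 3060 Ti.
training_args = TrainingArguments(
    output_dir="cthulhu-7b-lora",   # hypothetical path
    num_train_epochs=NUM_EPOCHS,
    learning_rate=LEARNING_RATE,
    optim="paged_adamw_8bit",
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=1,  # assumption
    gradient_accumulation_steps=8,  # assumption
    save_strategy="epoch",
    bf16=True,
)
```

The paged 8-bit AdamW and the tight `max_grad_norm` are what make a 7B LoRA run fit on a consumer card; the cosine schedule with a 3% warmup is a common pairing for short finetunes.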
Safetensors · 7B params · BF16
Model tree for EldritchLabs/Cthulhu-7B-v1.4
Finetuned from Mistral 7B v0.1 · Quantizations: 3 models