⚠️ Warning: This model can produce narratives and RP containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Alpaca template.


Avnas 7B v1

This is my first model finetune, so it may be a bit rough around the edges.

mistralai/Mistral-7B-v0.1 was lightly trained on a custom dataset to uncensor the model and teach it the Cthulhu Mythos.

v1.0 Training Steps: 100

The model appears to produce no refusals and is fully uncensored, with no ablation needed.

A version 2 may be released with a larger dataset and a longer cook time. This was mainly a proof of concept, and it seems to work well.

Avnas was created using a custom finetuning kit I made called PMPF (Poor Man's Portable Finetuner), which allows finetuning with only 4–12 GB of VRAM. I could not get Axolotl, Unsloth, or any other tools working locally on Windows, so I made my own.
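PMPF itself isn't included in this card, so here is a minimal sketch of the general technique that lets a 7B finetune fit in 4–12 GB: 4-bit quantized base weights plus LoRA adapters (QLoRA) via transformers, peft, and bitsandbytes. This is an assumption about the approach, not PMPF's actual code, and all hyperparameters below are illustrative.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_PATH = "mistralai/Mistral-7B-v0.1"

# Quantize the frozen base weights to 4-bit so the 7B model fits in a few GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on the attention projections; gradients and
# optimizer state exist only for these, which is where the VRAM savings come from.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a tiny fraction of the 7B weights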

Note: Use the Alpaca template to prevent errors. The dataset was specifically built around the Alpaca format due to issues with the ChatML tokenizer.

\n### Instruction:\n \n### Response:\n
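The separators above are literal, with the instruction text placed between them. Here is a minimal sketch of building a prompt in this format; build_alpaca_prompt and the sample instruction are illustrations, not part of the model card:

def build_alpaca_prompt(instruction: str) -> str:
    # The model's reply is generated after the final "### Response:" marker.
    return f"\n### Instruction:\n{instruction}\n### Response:\n"

prompt = build_alpaca_prompt("Describe the sunken city of R'lyeh in two sentences.")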

Update: The finetuner's EOS padding bug was patched and the safetensors were updated.

<<<<<<< before
    # --- 4. Load Tokenizer ---
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "right"
=======
    # --- 4. Load Tokenizer ---
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
    
    # FIX: Use <unk> (ID 0) for padding instead of EOS (ID 2)
    # This prevents the model from learning to stop generating prematurely
    tokenizer.pad_token_id = 0  # unk_token_id for Mistral/Llama
    tokenizer.padding_side = "right"
>>>>>>> after
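For context on why this fix matters: when the pad token shares the EOS ID, the standard Hugging Face language-modeling collator masks every EOS-valued position out of the loss, so the model gets no training signal for when to stop generating. A minimal sketch demonstrating this (the collator choice and toy token IDs are assumptions for illustration; 2 is Mistral's EOS ID):

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token  # the buggy configuration

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
# Toy batch: token ID 2 is Mistral's EOS and legitimately ends the first sample.
batch = collator([{"input_ids": [1, 22, 33, 2]},
                  {"input_ids": [1, 44]}])
print(batch["labels"])
# Padding positions are labeled -100 (ignored by the loss), but so is the
# genuine EOS, because it shares the pad token's ID. With pad_token_id = 0
# (unk), the EOS label survives and the model learns when to emit it.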