Painted Fantasy v4.1

Magistral Small 2509 24B

Overview

This is an uncensored model intended to excel at creative, character-driven RP / ERP.

Right after releasing v4 I noticed a bunch of repetition. Go figure. v4.1 is my first stab at actively tailoring the dataset to weed this out. Compared to v4, the only difference is heavy filtering and rewriting of assistant messages identified as repetitive.

Repetition isn't fixed, but it's improved. The model still likes patterns, but at least seems capable of occasionally breaking them itself.

SillyTavern Settings

Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*

Recommended Samplers

> Temp: 0.8
> MinP: 0.05 - 0.075
> TopP: 0.95 - 1.00
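Outside SillyTavern, the recommended samplers map directly onto an OpenAI-compatible request body. A minimal sketch, assuming a llama.cpp-style backend that accepts `min_p` (the model name matches this card; the endpoint is whatever your backend exposes):

```python
# Sketch: the recommended sampler settings as an OpenAI-compatible
# chat-completion payload. "min_p" follows the llama.cpp server naming;
# other backends may call it differently.
payload = {
    "model": "zerofata/MS3.2-PaintedFantasy-v4.1-24B",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.8,  # recommended temp
    "min_p": 0.05,       # recommended range 0.05 - 0.075
    "top_p": 0.95,       # recommended range 0.95 - 1.00
}
```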

Instruct

Mistral v7 Tekken

Quantizations

Creation Process

Creation Process: SFT > DPO

SFT on approx 25 million tokens (17.5 million trainable). Datasets included SFW / NSFW RP, stories, NSFW reddit writing prompts, creative instruct & chat data.

90% of the dataset is without thinking; the remaining 10% includes thinking, using the [THINK][/THINK] tags.
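For frontends that don't parse the [THINK][/THINK] tags natively, the reasoning block can be separated from the visible reply with a small regex helper. A sketch (the function name is mine, not part of the model's tooling):

```python
import re

# Matches a [THINK]...[/THINK] block, including across newlines.
THINK_RE = re.compile(r"\[THINK\](.*?)\[/THINK\]", re.DOTALL)

def split_thinking(text: str) -> tuple[str, str]:
    """Return (reasoning, visible_reply) from a model response."""
    thoughts = "\n".join(m.strip() for m in THINK_RE.findall(text))
    visible = THINK_RE.sub("", text).strip()
    return thoughts, visible

thoughts, reply = split_thinking("[THINK]plan the scene[/THINK]She smiles.")
# thoughts == "plan the scene", reply == "She smiles."
```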

All RP data and synthetic stories went through rewriting with GLM 4.7, using hand-edited examples as guidelines to improve the responses. Rewritten responses were discarded if they failed to reduce the slop score for the message. This reduced the slop by about 25% for each RP / story dataset and made the model noticeably more creative with some of its descriptions.
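The rewrite-then-verify step above boils down to a simple accept/reject gate. A minimal sketch, where `slop_score` is a stand-in (here just counting known "slop" phrases; the real metric and the GLM rewriting step are not specified in detail on this card):

```python
# Stand-in phrase list; the actual slop metric is unspecified here.
SLOP_PHRASES = ["shivers down", "a testament to", "barely above a whisper"]

def slop_score(text: str) -> int:
    """Toy slop metric: occurrences of known overused phrases."""
    t = text.lower()
    return sum(t.count(p) for p in SLOP_PHRASES)

def accept_rewrite(original: str, rewritten: str) -> str:
    """Keep the rewrite only if it actually lowers the slop score."""
    if slop_score(rewritten) < slop_score(original):
        return rewritten
    return original  # discard the rewrite otherwise
```

The key design point is that the rewriter's output is never trusted blindly: a rewrite that fails to reduce the metric is thrown away and the original message is kept.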

Assistant messages in RP conversations were checked for repetition via embeddings and word-frequency checks across multi-turn conversations. Specific messages were rewritten, and conversations that still showed high repetition were filtered out.
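The word-frequency side of that check could look like the sketch below: count word n-grams that recur across assistant turns, flagging conversations where the model reuses the same phrasing. This is an assumed implementation (the embedding half and the exact thresholds are not described on the card):

```python
from collections import Counter

def repeated_ngrams(messages: list[str], n: int = 3) -> Counter:
    """Count word n-grams appearing in more than one assistant turn."""
    counts: Counter = Counter()
    for msg in messages:
        words = msg.lower().split()
        # Count each n-gram once per message, so a count > 1 means
        # the phrase recurs ACROSS turns, not just within one.
        counts.update(set(zip(*(words[i:] for i in range(n)))))
    return Counter({g: c for g, c in counts.items() if c > 1})

def is_repetitive(messages: list[str], threshold: int = 2) -> bool:
    """Flag a conversation once enough cross-turn n-grams repeat."""
    return len(repeated_ngrams(messages)) >= threshold
```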

DPO was expanded to include non-creative datasets. My usual RP DPO dataset (also rewritten) was included, along with cybersecurity data and two partial subsets of general assistant / chat preference datasets to help stabilize the model. This worked pretty well. While creativity did take a small hit, enough remained that the improved logic resulted in a notably improved model (IMO).

Using embeddings, DPO pairs where the chosen response was more similar to the preceding conversation than the rejected response were removed, to ensure DPO doesn't encourage repetition.
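That filter reduces to a cosine-similarity comparison per preference pair. A minimal sketch, assuming the conversation and both responses have already been embedded into vectors by some embedding model (the vectors here are plain Python lists for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def keep_pair(conv_vec, chosen_vec, rejected_vec) -> bool:
    """Drop the DPO pair when the chosen response echoes the
    conversation more closely than the rejected one does."""
    return cosine(conv_vec, chosen_vec) <= cosine(conv_vec, rejected_vec)
```

Filtering this way means DPO never rewards a "chosen" response simply for parroting the context back, which would push the model toward repetition.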

