# HereticFT-Antislop

HereticFT-Antislop is a refined version of DrRiceIO7/HereticFT, a Gemma-3 4B based model. This version has been specifically fine-tuned to eliminate common "AI slop" (over-represented words, phrases, and repetitive n-grams) using the Auto-Antislop pipeline.
## Overview
The goal of this model is to preserve the creative, uncensored, and distinctive personality of the base model while stripping away the predictable linguistic patterns common in modern LLMs (e.g., "tapestry," "testament," "delve," "it's important to remember").
## How it was made
This model was created using the Auto-Antislop pipeline developed by Sam Paech.
The Process:
- **Slop identification:** The base model was analyzed on a large set of creative writing prompts to identify its unique "slop profile": the words and phrases it over-uses compared to human writing.
- **Preference dataset generation:** Using antislop-vllm, a preference dataset was generated. When the model attempted to use "slop" tokens, the sampler diverted it to more coherent, human-like alternatives.
- **FTPO fine-tuning:** The model underwent Final-Token Preference Optimisation (FTPO). Unlike standard DPO, FTPO is a surgical fine-tuning method that specifically targets the logits of the "slop" tokens and their preferred alternatives, minimizing general model degradation and preserving the original model's strengths.
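The sampler-diversion idea in the second step can be sketched as follows. This is a simplified illustration and not the actual antislop-vllm implementation: when the highest-scoring candidate token is on the banned "slop" list, it is masked out and the next-best alternative is chosen instead. The token strings and scores below are hypothetical stand-ins for a model's real logits.

```python
# Simplified sketch of slop-aware sampling (illustrative, not antislop-vllm's code).
# If the greedy pick would be a banned "slop" token, mask it out and take the
# next-best alternative instead.

def antislop_pick(logits, banned_tokens):
    """Greedy token choice that skips any token in `banned_tokens`.

    logits: dict mapping token string -> score (stand-in for model logits).
    banned_tokens: set of token strings from the model's slop profile.
    """
    allowed = {tok: score for tok, score in logits.items() if tok not in banned_tokens}
    if not allowed:
        # Everything is banned: fall back to the unconstrained argmax.
        return max(logits, key=logits.get)
    return max(allowed, key=allowed.get)
```

In the real pipeline the accepted and diverted tokens form preference pairs, which are then used as the training signal for FTPO.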
## Improvements
- Reduced repetition: Lowered frequency of over-represented n-grams and common AI clichés.
- Enhanced Vocabulary: Encourages more diverse and human-like word choices.
- Preserved Personality: The "Heretic" edge remains intact, but the prose is cleaner and more professional.
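The "reduced repetition" claim can be checked empirically by comparing n-gram rates in model output against a human reference corpus: n-grams the model emits far more often than humans are candidates for its slop profile. The sketch below is an assumed, simplified metric, not the Auto-Antislop pipeline's exact scoring.

```python
from collections import Counter


def ngrams(text, n=2):
    """Count whitespace-tokenized n-grams in `text` (case-insensitive)."""
    toks = text.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))


def over_represented(model_text, human_text, n=2, ratio=2.0):
    """Return n-grams the model uses at least `ratio` times more often
    (per n-gram emitted) than the human reference does."""
    m, h = ngrams(model_text, n), ngrams(human_text, n)
    m_total = max(sum(m.values()), 1)
    h_total = max(sum(h.values()), 1)
    slop = {}
    for gram, count in m.items():
        m_rate = count / m_total
        h_rate = h.get(gram, 0) / h_total
        # An n-gram unseen in human text counts as over-represented here;
        # a real profile would use a larger corpus and smoothing.
        if h_rate == 0 or m_rate / h_rate >= ratio:
            slop[gram] = m_rate
    return slop
```

Running this over large samples of base-model and fine-tuned output gives a rough before/after measure of how much of the slop profile survives.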
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrRiceIO7/HereticFT-Antislop"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Write a short story about a heretic in a high-tech dystopia."
# Use model.device rather than hard-coding "cuda", since device_map="auto"
# decides placement (and may fall back to CPU).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Acknowledgments
- Base model: DrRiceIO7/HereticFT
- Pipeline: Auto-Antislop by Sam Paech
- Training method: FTPO (Final-Token Preference Optimisation)
Disclaimer: This model description was generated by Gemini 3 Flash Preview.