Slop-Detector-v2

This is a fine-tuned version of answerdotai/ModernBERT-base designed to classify text as "Likely Slop" or "Likely Not Slop".

It was trained for 3 epochs on the DrRiceIO7/SlopReview v2 dataset. Collection process is documented on the dataset page.

Additionally, the model was trained with a larger 1,024-token context window, allowing it to learn from longer passages.
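Inputs longer than the 1,024-token window get truncated, so only the prefix is scored. One workaround (my own sketch, not part of this model card) is to split the token IDs into overlapping windows and score each chunk separately:

```python
def chunk_tokens(token_ids, window=1024, stride=896):
    """Split a token-id list into overlapping windows of at most `window` tokens.

    `stride` < `window` gives each chunk some context from the previous one.
    These defaults are illustrative, not values used in training.
    """
    if len(token_ids) <= window:
        return [token_ids]
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
    return chunks
```

Each chunk can then be passed through the model on its own, and the per-chunk scores averaged or max-pooled.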

Be warned: this is an experimental model. I really do not recommend using it for anything actually important. It seems to look at the flow of the text more than specific keywords; for example, it'll gloss over "Elara" and "ozone" if the vibes are right. I'll be releasing iterative updates though, so stay tuned.

Classification Labels

The model has been moved to a binary classification scheme to provide clearer, more confident determinations:

Likely Not Slop: High-quality, coherent writing. Includes authentic human prose and high-tier AI responses that avoid clichés and use varied sentence structures.
Likely Slop: Low-effort "AI Slop." Heavy reliance on clichéd metaphors (e.g., tapestry, resonance, shimmering), repetitive pacing, and shallow "purple prose."

How to Use

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "DrRiceIO7/Slop-Detector-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()  # inference mode

text = "The rain in Havenwood always smelled of damp wool and impending doom..."

# Truncate to the model's 1,024-token context window
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
print(f"Result: {model.config.id2label[predicted_class_id]}")
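If you want a confidence score rather than just the winning label, apply a softmax to the logits. The helper below is a plain-Python equivalent of `torch.softmax(logits, dim=-1)`, shown without torch so it stands alone:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)  # subtract the max before exponentiating to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In the snippet above you would instead call `torch.softmax(logits, dim=-1)` on the model output directly; the math is the same.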

Training Details

  • Base Model: ModernBERT-base (22 layers, 768 hidden size)
  • Dataset: DrRiceIO7/SlopReview v2
  • Epochs: 3
  • Context Length: 1024 tokens
  • Precision: bfloat16
  • Hardware: Intel Arc B580 (Battlemage) using PyTorch XPU + IPEX
  • Final Eval Accuracy: >95%
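The training script itself is not published, so the configuration sketch below is my own reconstruction from the details listed above; the batch size and learning rate are assumptions, not documented values.

```python
from transformers import TrainingArguments

# Hypothetical configuration mirroring the listed training details.
args = TrainingArguments(
    output_dir="slop-detector-v2",
    num_train_epochs=3,              # Epochs: 3
    bf16=True,                       # Precision: bfloat16
    per_device_train_batch_size=8,   # assumption: batch size is not documented
    learning_rate=2e-5,              # assumption: LR is not documented
)
```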

Limitations

The model scores well on its evaluation set, but there will still be cases where it becomes confused and labels something as slop when it is not. Its predictions may also not align with your own definition of slop.
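One way to soften false positives is to act on a label only when the model is sufficiently confident, and treat everything else as uncertain. A small sketch (the 0.9 cutoff is my own choice, not something recommended by this model card):

```python
def classify_with_threshold(probs, id2label, threshold=0.9):
    """Return the winning label, or 'Uncertain' if confidence is below threshold.

    `probs` is a list of class probabilities (e.g. from a softmax over the
    model's logits); `id2label` maps class index to label string.
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "Uncertain"
    return id2label[best]
```

Tune the threshold on your own data; a stricter cutoff trades recall on slop for fewer false alarms on clean text.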
