reddit-rpg-rules-questions-classifier (gguf)

A binary text classifier fine-tuned with LoRA on a custom dataset. The model distinguishes rules questions from other kinds of posts in RPG-related subreddits.

Model Details

Dataset

Training Configuration

  • Epochs: 1.0
  • Batch size (per device): 32
  • Gradient accumulation steps: 2
  • Learning rate: 0.00055
  • Max sequence length: 1536
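
The hyperparameters above can be expressed as a fine-tuning setup; a minimal sketch assuming Hugging Face `transformers` + `peft` were used. Only the values listed above come from this card; the LoRA rank/alpha and the output directory name are illustrative assumptions.

```python
from transformers import TrainingArguments
from peft import LoraConfig

# LoRA adapter config for binary sequence classification.
# r and lora_alpha are NOT stated in the card; typical values shown.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,           # assumption
    lora_alpha=32,  # assumption
)

training_args = TrainingArguments(
    output_dir="rpg-rules-classifier",  # illustrative name
    num_train_epochs=1.0,               # Epochs: 1.0
    per_device_train_batch_size=32,     # Batch size (per device): 32
    gradient_accumulation_steps=2,      # effective batch size: 32 * 2 = 64
    learning_rate=5.5e-4,               # Learning rate: 0.00055
)

# The max sequence length (1536) is applied at tokenization time, e.g.:
# tokenizer(texts, truncation=True, max_length=1536)
```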

Pre-Training Metrics

Metric       Value
accuracy     0.794392523364486
precision    0.5
recall       0.8409090909090909
f1           0.6271186440677967
total_seen   214

Post-Training Metrics

Metric       Value
accuracy     0.9065420560747663
precision    0.9285714285714286
recall       0.5909090909090909
f1           0.7222222222222223
total_seen   214
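
As a sanity check, the reported F1 scores in both tables are consistent with their precision and recall values (F1 is the harmonic mean of the two):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values copied from the pre- and post-training tables above.
pre  = {"precision": 0.5,                "recall": 0.8409090909090909, "f1": 0.6271186440677967}
post = {"precision": 0.9285714285714286, "recall": 0.5909090909090909, "f1": 0.7222222222222223}

for metrics in (pre, post):
    assert abs(f1(metrics["precision"], metrics["recall"]) - metrics["f1"]) < 1e-9
```

Note that fine-tuning traded some recall for a large gain in precision, which is what lifts accuracy from 0.79 to 0.91 on the same 214-example evaluation set.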

Usage

GGUF with llama.cpp / Ollama

Repository: eriksalt/reddit-rpg-rules-questions-classifier-gguf

Available quantization files:

  • reddit-rpg-rules-questions-classifier-gguf-q8_0.gguf
  • reddit-rpg-rules-questions-classifier-gguf-q4_k_m.gguf

Load with llama.cpp

llama-cli -m reddit-rpg-rules-questions-classifier-gguf-q8_0.gguf
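
For Ollama, the GGUF can be imported via a Modelfile; a minimal sketch, assuming the q4_k_m file has been downloaded to the current directory (the local tag used below is illustrative):

```
# Modelfile
FROM ./reddit-rpg-rules-questions-classifier-gguf-q4_k_m.gguf
```

Then register and run the model with `ollama create rpg-rules-classifier -f Modelfile` followed by `ollama run rpg-rules-classifier "<post text>"`.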
GGUF

  • Model size: 8B params
  • Architecture: qwen2