# reddit-rpg-rules-questions-classifier (gguf)
Binary text classifier fine-tuned with LoRA on a custom dataset. The model was trained to distinguish rules questions from other kinds of posts in RPG-related subreddits.
## Model Details
- Base model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
- LoRA rank: 16
- Quantizations: `q8_0`, `q4_k_m`
## Dataset
- Dataset: eriksalt/reddit-rpg-rules-question-classification
- Training split: `train`
- Test split: `test`
- Validation split: `validation`
- Positive class: `Question`
## Training Configuration
- Epochs: 1.0
- Batch size (per device): 32
- Gradient accumulation steps: 2
- Learning rate: 0.00055
- Max sequence length: 1536
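With gradient accumulation, the effective batch size is the per-device batch size times the accumulation steps (times the device count; single-GPU training is assumed here, as the card does not state the number of devices):

```python
# Effective batch size per optimizer step.
# n_devices = 1 is an assumption; the card does not document the device count.
per_device_batch = 32
grad_accum_steps = 2
n_devices = 1

effective_batch = per_device_batch * grad_accum_steps * n_devices
print(effective_batch)  # 64 examples per optimizer step
```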
## Pre-Training Metrics

| Metric | Value |
|---|---|
| accuracy | 0.7944 |
| precision | 0.5000 |
| recall | 0.8409 |
| f1 | 0.6271 |
| total_seen | 214 |
## Post-Training Metrics

| Metric | Value |
|---|---|
| accuracy | 0.9065 |
| precision | 0.9286 |
| recall | 0.5909 |
| f1 | 0.7222 |
| total_seen | 214 |
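As a sanity check, the reported f1 is consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
# Verify the post-training f1 against precision and recall from the table.
precision = 0.9286  # post-training precision
recall = 0.5909     # post-training recall

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ≈ 0.7222, matching the reported value
```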
## Usage

### GGUF with llama.cpp / Ollama
Repository: eriksalt/reddit-rpg-rules-questions-classifier-gguf
Available quantization files:
- `reddit-rpg-rules-questions-classifier-gguf-q8_0.gguf`
- `reddit-rpg-rules-questions-classifier-gguf-q4_k_m.gguf`
Load with llama.cpp:

```shell
llama-cli -m reddit-rpg-rules-questions-classifier-gguf-q8_0.gguf
```
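For programmatic use, a minimal classification loop might look like the sketch below. The completion-to-label mapping and the commented `llama-cpp-python` call are assumptions: the card does not publish the training prompt format or the exact completion text, so adjust both to match how the model was fine-tuned.

```python
def parse_label(completion: str) -> str:
    """Map the model's raw completion to a binary label.

    Assumes the fine-tuned model answers with the positive class name
    "Question" for rules questions (hypothetical output format).
    """
    return "Question" if "question" in completion.lower() else "Other"


# With llama-cpp-python installed, generation could look like:
#   from llama_cpp import Llama
#   llm = Llama(model_path="reddit-rpg-rules-questions-classifier-gguf-q8_0.gguf")
#   out = llm("Does a natural 20 auto-succeed on skill checks?", max_tokens=8)
#   label = parse_label(out["choices"][0]["text"])

print(parse_label("Question"))    # → Question
print(parse_label("Discussion"))  # → Other
```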