# reddit-rpg-rules-questions-classifier (lora)
A binary text classifier fine-tuned with LoRA on a custom dataset. This model was trained to distinguish rules questions from other kinds of posts in RPG-related subreddits.
## Model Details
- Base model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
- LoRA rank: 16
## Dataset
- Dataset: eriksalt/reddit-rpg-rules-question-classification
- Training split: train
- Test split: test
- Validation split: validation
- Positive class: Question
## Training Configuration
- Epochs: 1.0
- Batch size (per device): 32
- Gradient accumulation steps: 2
- Learning rate: 0.00055
- Max sequence length: 1536
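The hyperparameters above can be expressed as a PEFT/Transformers configuration. This is a minimal sketch: the LoRA rank and the training hyperparameters come from the card, while `lora_alpha`, `lora_dropout`, `target_modules`, and `output_dir` are my assumptions and are not recorded in the card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Rank r=16 is from the card; alpha, dropout, and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Hyperparameters from the Training Configuration section above.
training_args = TrainingArguments(
    num_train_epochs=1.0,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,  # effective batch size of 64 per device
    learning_rate=5.5e-4,
    output_dir="outputs",  # placeholder path
)
# The max sequence length (1536) would typically be passed to the tokenizer
# or SFT trainer rather than TrainingArguments.
```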
## Pre-Training Metrics
| Metric | Value |
|---|---|
| accuracy | 0.7944 |
| precision | 0.5000 |
| recall | 0.8409 |
| f1 | 0.6271 |
| total_seen | 214 |
## Post-Training Metrics
| Metric | Value |
|---|---|
| accuracy | 0.9065 |
| precision | 0.9286 |
| recall | 0.5909 |
| f1 | 0.7222 |
| total_seen | 214 |
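The post-training metrics are mutually consistent. Working backwards from precision (26/28), recall (26/44), and the 214 evaluation examples, one can infer a confusion matrix; the counts below are my reconstruction, not values reported in the card.

```python
# Confusion-matrix counts inferred from the post-training metrics
# (precision = 26/28, recall = 26/44, 214 examples total) -- an assumption.
tp, fp, fn, tn = 26, 2, 18, 168

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```

With these counts, accuracy, precision, recall, and F1 reproduce the reported values of 0.9065, 0.9286, 0.5909, and 0.7222 to four decimal places.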
## Usage
### LoRA with Python (PEFT)

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then apply the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-7B-Instruct-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "eriksalt/reddit-rpg-rules-questions-classifier-lora")
tokenizer = AutoTokenizer.from_pretrained("eriksalt/reddit-rpg-rules-questions-classifier-lora")
```
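Since this is a causal LM fine-tuned as a classifier, inference amounts to prompting the model and mapping its completion to a binary label. The prompt wording and label strings below are assumptions for illustration, not the prompts used in training; only the helper functions are shown so the sketch stays runnable without the model.

```python
# Hypothetical helpers; the instruction text and label words are assumptions,
# not taken from the model card.
def build_prompt(post: str) -> str:
    """Wrap a subreddit post in a binary-classification instruction."""
    return (
        "Classify the following RPG subreddit post. "
        "Answer with exactly one word: 'Question' if it asks about game rules, "
        "otherwise 'Other'.\n\nPost: " + post
    )

def parse_label(completion: str) -> bool:
    """Map the model's free-text completion to the binary label (True = rules question)."""
    return completion.strip().lower().startswith("question")
```

To classify a post, tokenize `build_prompt(post)` with the tokenizer loaded above, call `model.generate(...)`, decode the output, and pass it through `parse_label`.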