# Jailbreak Prediction Model: zephyr:7b-beta
Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.
## Evaluation Results (best fold: 3)
| Metric | Value |
|---|---|
| F1 | 0.8592 |
| PR-AUC | 0.9080 |
| ROC-AUC | 0.9757 |
| Precision | 0.8841 |
| Recall | 0.8356 |
| Best Threshold | 0.10 |
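As a sanity check on the table, the F1 value is the harmonic mean of the reported precision and recall, measured after applying the 0.10 decision threshold to the model's scores. A minimal sketch of that relationship:

```python
# Sketch relating the reported metrics: F1 is the harmonic mean of the
# precision and recall measured at the chosen decision threshold (0.10).
# The numbers below are the card's self-reported values, not recomputed.

THRESHOLD = 0.10  # "Best Threshold" from the table above

def classify(prob: float, threshold: float = THRESHOLD) -> bool:
    """Flag a turn as unsafe when its score meets the tuned threshold."""
    return prob >= threshold

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported precision 0.8841 and recall 0.8356 give F1 ~ 0.8592, matching the table.
```

Note that the unusually low threshold (0.10 rather than 0.50) trades precision for recall, which is a common choice for safety classifiers where missed unsafe prompts are costlier than false alarms.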
## Training Details
- Base model: microsoft/deberta-v3-base
- Target model: zephyr:7b-beta
- Datasets: HarmBench
- K-Folds: 5
- Epochs: 5
- Learning Rate: 2e-05
- Max Length: 512
- Input format: turns only
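"Turns only" means each conversation is flattened so the classifier scores individual turns rather than full dialogues. The card does not spell out the exact preprocessing, so the sketch below is illustrative, with assumed field names:

```python
# Illustrative "turns only" expansion: one training example per turn.
# The role/content field names, and the decision to keep all roles,
# are assumptions; the card does not specify them.
from typing import List, Tuple

def expand_turns(conversation: List[dict], label: int) -> List[Tuple[str, int]]:
    """Expand a multi-turn conversation into per-turn (text, label) examples.

    Each turn inherits the conversation-level label; the tokenizer later
    truncates turns to the 512-token max length.
    """
    return [(turn["content"], label) for turn in conversation]

examples = expand_turns(
    [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}],
    label=0,
)
# -> [("Hi", 0), ("Hello!", 0)]
```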
## Dataset Size (before turn expansion)
Original rows (after cleaning and balancing): 1730 (unsafe: 336, safe: 1394)
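With 5-fold cross-validation over these counts, each fold holds roughly one fifth of each class. A small sketch of the per-fold arithmetic (stratified splitting is an assumption, since the card does not state it):

```python
# Per-fold class counts under an assumed stratified 5-fold split of the
# cleaned dataset (336 unsafe / 1394 safe rows reported above).

def fold_sizes(n: int, k: int = 5) -> list:
    """Spread n examples across k folds as evenly as possible."""
    base, extra = divmod(n, k)
    return [base + (1 if i < extra else 0) for i in range(k)]

unsafe_per_fold = fold_sizes(336)   # [68, 67, 67, 67, 67]
safe_per_fold = fold_sizes(1394)    # [279, 279, 279, 279, 278]
```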