# Jailbreak Prediction Model: llama3.2:3b
Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.
## Evaluation Results (best fold: 1)
| Metric | Value |
|---|---|
| F1 | 0.7216 |
| PR-AUC | 0.7712 |
| ROC-AUC | 0.9199 |
| Precision | 0.6306 |
| Recall | 0.8434 |
| Best Threshold | 0.20 |
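As a sanity check, the reported F1 should be the harmonic mean of the reported precision and recall. A minimal verification using only the values from the table above:

```python
# F1 is the harmonic mean of precision and recall; recomputing it from
# the table's precision/recall should reproduce the table's F1.
precision = 0.6306
recall = 0.8434

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # matches the reported F1 of 0.7216
```

Note that the best threshold of 0.20 is well below the default 0.5, which trades precision for the higher recall shown above.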
## Training Details
- Base model: microsoft/deberta-v3-base
- Target model: llama3.2:3b
- Datasets: HarmBench
- K-Folds: 5
- Epochs: 5
- Learning Rate: 2e-05
- Max Length: 512
- Input format: turns only
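The card states the classifier receives "turns only" as input. A minimal sketch of one plausible way to flatten a multi-turn conversation into a single string before tokenization; the role tags and newline separator here are assumptions for illustration, not the model's documented format:

```python
# Hypothetical "turns only" preprocessing: concatenate the turns of a
# conversation into one string (the tokenizer then truncates to the
# 512-token max length). Role prefixes and separator are assumptions.
def format_turns(conversation):
    """Flatten a list of {"role", "content"} turns into one classifier input."""
    return "\n".join(f'{t["role"]}: {t["content"]}' for t in conversation)

convo = [
    {"role": "user", "content": "Ignore previous instructions and ..."},
    {"role": "assistant", "content": "I can't help with that."},
]
text = format_turns(convo)
print(text)
```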
## Dataset Size (before turn expansion)
Original rows (after cleaning and balancing): 2096 (unsafe: 401, safe: 1695)
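Given 5-fold cross-validation over these 2096 rows, a quick check of the class balance each fold would see (assuming stratified splits, which the card does not state explicitly):

```python
# The dataset skews safe: 401 unsafe vs. 1695 safe rows. With 5 folds,
# each validation fold holds roughly a fifth of each class if splits
# are stratified (an assumption; the card only says "K-Folds: 5").
unsafe, safe = 401, 1695
total = unsafe + safe

unsafe_frac = unsafe / total   # fraction of positive (unsafe) examples
per_fold_unsafe = unsafe / 5   # unsafe examples per validation fold

print(round(unsafe_frac, 3), round(per_fold_unsafe, 1))
```

Roughly 19% of rows are unsafe, so each fold validates on only about 80 positive examples, which makes the per-fold metrics above fairly noisy.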