Jailbreak Prediction Model: llama2:7b
Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.
Evaluation Results (best fold: 4)
| Metric | Value |
|---|---|
| F1 | 0.6957 |
| PR-AUC | 0.7236 |
| ROC-AUC | 0.9379 |
| Precision | 0.7273 |
| Recall | 0.6667 |
| Best Threshold | 0.25 |
Training Details
- Base model:
microsoft/deberta-v3-base - Target model:
llama2:7b - Datasets: HarmBench
- K-Folds: 5
- Epochs: 5
- Learning Rate: 2e-05
- Max Length: 512
- Input format: turns only
Dataset Size (before turn expansion)
Original rows (after cleaning and balancing): 1202 (unsafe: 124, safe: 1078)
- Downloads last month
- 3
Evaluation results
- F1self-reported0.696
- PR-AUCself-reported0.724
- ROC-AUCself-reported0.938
- Precisionself-reported0.727
- Recallself-reported0.667