File size: 1,382 Bytes
5dff905
3b2a391
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
5dff905
3b2a391
5dff905
3b2a391
5dff905
3b2a391
5dff905
3b2a391
 
 
 
 
 
 
 
5dff905
 
 
3b2a391
 
 
 
 
 
 
 
5dff905
3b2a391
5dff905
3b2a391
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
---
language: en
tags:
  - jailbreak-detection
  - deberta-v3
  - text-classification
model-index:
  - name: predict_llama2_7b
    results:
      - task:
          type: text-classification
          name: Jailbreak Detection
        metrics:
          - name: F1
            type: f1
            value: 0.9014
          - name: PR-AUC
            type: pr_auc
            value: 0.9574
          - name: ROC-AUC
            type: roc_auc
            value: 0.9897
          - name: Precision
            type: precision
            value: 0.8533
          - name: Recall
            type: recall
            value: 0.9552
---
# Jailbreak Prediction Model: llama2:7b

Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.

## Evaluation Results (best fold: 4)

| Metric         | Value  |
|----------------|--------|
| F1             | 0.9014 |
| PR-AUC         | 0.9574 |
| ROC-AUC        | 0.9897 |
| Precision      | 0.8533 |
| Recall         | 0.9552 |
| Best Threshold | 0.35 |

## Training Details

- **Base model**: `microsoft/deberta-v3-base`
- **Target model**: `llama2:7b`
- **Datasets**: HarmBench
- **K-Folds**: 5
- **Epochs**: 5
- **Learning Rate**: 2e-05
- **Max Length**: 512
- **Input format**: turns only

## Dataset Size (before turn expansion)

Original rows (after cleaning and balancing): 1630 (unsafe: 340, safe: 1290)