Jailbreak Prediction Model: llama2:7b

Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.

Evaluation Results (best fold: 2)

Metric Value
F1 0.9120
PR-AUC 0.9434
ROC-AUC 0.9705
Precision 0.9048
Recall 0.9194
Best Threshold 0.35

Training Details

  • Base model: microsoft/deberta-v3-base
  • Target model: llama2:7b
  • Datasets: HarmBench
  • K-Folds: 5
  • Epochs: 5
  • Learning Rate: 2e-05
  • Max Length: 512
  • Input format: turns only
Downloads last month
41
Safetensors
Model size
0.2B params
Tensor type
F32
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Evaluation results