Jailbreak Prediction Model: llama3.2:3b

Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.

Evaluation Results (best fold: 1)

Metric Value
F1 0.7216
PR-AUC 0.7712
ROC-AUC 0.9199
Precision 0.6306
Recall 0.8434
Best Threshold 0.20

Training Details

  • Base model: microsoft/deberta-v3-base
  • Target model: llama3.2:3b
  • Datasets: HarmBench
  • K-Folds: 5
  • Epochs: 5
  • Learning Rate: 2e-05
  • Max Length: 512
  • Input format: turns only

Dataset Size (before turn expansion)

Original rows (after cleaning and balancing): 2096 (unsafe: 401, safe: 1695)

Downloads last month
46
Safetensors
Model size
0.2B params
Tensor type
F32
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Evaluation results