Jailbreak Prediction Model: zephyr:7b-beta

Fine-tuned DeBERTa-v3-base for detecting unsafe/jailbreak prompts in multi-turn conversations.

Evaluation Results (best fold: 3)

Metric Value
F1 0.8592
PR-AUC 0.9080
ROC-AUC 0.9757
Precision 0.8841
Recall 0.8356
Best Threshold 0.10

Training Details

  • Base model: microsoft/deberta-v3-base
  • Target model: zephyr:7b-beta
  • Datasets: HarmBench
  • K-Folds: 5
  • Epochs: 5
  • Learning Rate: 2e-05
  • Max Length: 512
  • Input format: turns only

Dataset Size (before turn expansion)

Original rows (after cleaning and balancing): 1730 (unsafe: 336, safe: 1394)

Downloads last month
26
Safetensors
Model size
0.2B params
Tensor type
F32
·
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Evaluation results