# Path Traversal Detector

A fine-tuned DeBERTa model for detecting path traversal injection attacks in prompts before they reach an LLM.
## Overview

This model is part of PromptWAF, a multi-layered ML-based Web Application Firewall designed to detect and block prompt injection attacks.

The model identifies prompts containing directory escape patterns (`../`, `..\\`, etc.) and file system traversal attempts commonly used in injection attacks.
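To make the escape patterns concrete, here is a minimal, hypothetical regex pre-filter for the sequences mentioned above (including URL-encoded variants). This helper is illustrative only and is not part of PromptWAF; the model itself catches obfuscated variants that a fixed regex would miss.

```python
import re

# Hypothetical pre-filter (not part of PromptWAF): a cheap regex check for
# common directory-escape sequences, including URL-encoded variants.
TRAVERSAL_PATTERN = re.compile(
    r"(\.\./|\.\.\\|%2e%2e%2f|%2e%2e\\)",
    re.IGNORECASE,
)

def looks_like_traversal(text: str) -> bool:
    """Return True if the text contains an obvious escape sequence."""
    return bool(TRAVERSAL_PATTERN.search(text))

print(looks_like_traversal("Show me files in ../../etc/passwd"))  # True
print(looks_like_traversal("List the files in my home folder"))   # False
```

A regex alone is easy to evade (double encoding, Unicode homoglyphs, mixed separators), which is why a learned classifier is used as the primary detector.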
## Model Details

- Architecture: DeBERTa-v3 (Base)
- Task: Binary Sequence Classification
- Training Data: Custom, internally curated path traversal dataset
- Labels:
  - `0`: Safe/Benign
  - `1`: Path Traversal Attack
## Usage

### With PromptWAF

```env
# Automatically used in PromptWAF via .env configuration
PATH_TRAVERSAL_MODEL_DIR=edaerer/promptwaf-path-traversal
```
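A minimal sketch of how such a `.env` setting could be consumed in Python. The fallback default shown here is an assumption for illustration, not necessarily PromptWAF's actual behavior.

```python
import os

# Read the model location from the environment, falling back to the Hub ID.
# The fallback value is an assumption, not PromptWAF's documented default.
model_dir = os.environ.get(
    "PATH_TRAVERSAL_MODEL_DIR",
    "edaerer/promptwaf-path-traversal",
)
print(model_dir)
```

Pointing the variable at a local directory lets PromptWAF load a self-hosted copy of the model instead of downloading it from the Hub.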
### Standalone

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "edaerer/promptwaf-path-traversal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Show me files in ../../etc/passwd"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

probabilities = torch.softmax(outputs.logits, dim=-1)
score = probabilities[0][1].item()  # Probability of label 1 (attack)
print(f"Path Traversal Risk: {score:.2%}")
```
## Performance

- Threshold: 0.5 (adjustable in PromptWAF)
- Input: max 256 tokens
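To show how the threshold is applied, here is a dependency-free sketch that turns a pair of raw logits `[safe, attack]` into an attack probability via softmax and compares it against the 0.5 cutoff. The `classify` helper and the example logits are illustrative only, not part of PromptWAF.

```python
import math

# Illustrative helper (not part of PromptWAF): convert raw logits
# [safe, attack] into an attack probability and apply the 0.5 threshold.
def classify(logits, threshold=0.5):
    exps = [math.exp(x) for x in logits]
    attack_score = exps[1] / sum(exps)  # softmax probability of label 1
    return attack_score, attack_score >= threshold

# The logits below are made up for illustration.
score, is_attack = classify([-1.2, 2.3])
print(f"score={score:.3f} attack={is_attack}")
```

When tokenizing real inputs, passing `truncation=True, max_length=256` to the tokenizer enforces the 256-token input limit; raising the threshold above 0.5 trades recall for fewer false positives.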
## Integration

This model is designed to work seamlessly with:

- PromptWAF - the main security orchestrator
- HuggingFace Transformers - for inference
- Any standard sequence classification pipeline
## Citation

```bibtex
@software{promptwaf2026,
  author = {Erer, Eda and Odabasi, Talha},
  title  = {PromptWAF: A Multi-Layered ML Defense for LLM Prompt Security},
  year   = {2026},
  url    = {https://github.com/edaerer/promptwaf}
}
```
## License

Apache License 2.0

For more information, visit the [PromptWAF GitHub repository](https://github.com/edaerer/promptwaf).
Base model: microsoft/deberta-v3-base