This model is a fine-tuned version of ModernBERT-base
trained to detect prompt injection attempts in user inputs.
Prompt injection attacks are attempts to manipulate large language models (LLMs) by inserting malicious instructions into user prompts.
This model classifies whether a given text contains signs of such attacks.
- Downloads last month
- 7
Inference Providers
NEW
This model isn't deployed by any Inference Provider.
๐
Ask for provider support
Model tree for tihilya/modernbert-base-prompt-injection-detection
Base model
answerdotai/ModernBERT-base