---
title: README
emoji: 💻
colorFrom: green
colorTo: pink
sdk: static
pinned: false
---
# Antijection
Real-time prompt injection and AI safety detection.
We build datasets, tools, and APIs to help developers secure their LLM applications against prompt injection attacks, jailbreaks, and unsafe content.
## What we publish
- Datasets for training and benchmarking prompt injection detectors
- Research on emerging attack patterns and defense techniques
- Tools for the AI security community
## Products
Our detection API at antijection.com provides production-ready prompt injection detection with support for 40+ categories.
## Links
- Website: antijection.com
- Antijection Challenges: challenge.antijection.com
- Documentation: antijection.com/docs
- GitHub: github.com/aiteera
*Building the future of AI security, one prompt at a time.*