---
title: README
emoji: 💻
colorFrom: green
colorTo: pink
sdk: static
pinned: false
---
# Antijection

Real-time prompt injection and AI safety detection.

We build datasets, tools, and APIs that help developers secure their LLM applications against prompt injection attacks, jailbreaks, and unsafe content.
## What we publish

- **Datasets** for training and benchmarking prompt injection detectors
- **Research** on emerging attack patterns and defense techniques
- **Tools** for the AI security community
## Products

Our detection API at [antijection.com](https://antijection.com) provides production-grade prompt injection detection with support for 40+ categories.
## Links

- Website: [antijection.com](https://antijection.com)
- Antijection Challenges: [challenge.antijection.com](https://challenge.antijection.com)
- Documentation: [antijection.com/docs](https://antijection.com/docs)
- GitHub: [github.com/aiteera](https://github.com/aiteera)
---

*Building the future of AI security, one prompt at a time.*