Curated papers, models, datasets, and demos for AI-agent runtime safety, prompt injection, MCP security, and tool-call guardrails.
- armorer-labs/armorer-guard-semantic-classifier (model, text classification)
- Armorer Guard Demo (demo): 🛡️ fast local scanner for agent safety
- MCP Safety Audit: LLMs with the Model Context Protocol Allow Major Security Exploits (paper, arXiv:2504.03767)
- Prompt Injection Attacks on Agentic Coding Assistants: A Systematic Analysis of Vulnerabilities in Skills, Tools, and Protocol Ecosystems (paper, arXiv:2601.17548)
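To make the "tool-call guardrail" idea above concrete, here is a minimal, hypothetical sketch of a pre-execution scan over a tool call's arguments. The function name, the `Verdict` type, and the regex patterns are all illustrative assumptions for this sketch; real scanners such as the classifier listed above use learned models rather than keyword regexes.

```python
import re
from dataclasses import dataclass

# Illustrative (hypothetical) injection patterns; a production guard would
# use a trained classifier, not a fixed regex list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disregard (the )?system prompt", re.I),
    re.compile(r"send .* to https?://", re.I),
]


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def scan_tool_call(tool_name: str, arguments: dict) -> Verdict:
    """Scan a tool call's string arguments before the agent executes it."""
    for key, value in arguments.items():
        if not isinstance(value, str):
            continue
        for pattern in INJECTION_PATTERNS:
            if pattern.search(value):
                return Verdict(False, f"{tool_name}.{key} matched {pattern.pattern!r}")
    return Verdict(True)
```

The point of the pattern is placement: the check runs on every tool call between the model's output and the tool's execution, so injected text that reaches the model through a file, web page, or MCP resource is still caught before it can drive an action.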