Spaces:
Running
Running
Upload README.md with huggingface_hub
Browse files
README.md
CHANGED
|
@@ -1,10 +1,33 @@
|
|
| 1 |
---
|
| 2 |
title: README
|
| 3 |
-
emoji:
|
| 4 |
-
colorFrom:
|
| 5 |
-
colorTo:
|
| 6 |
sdk: static
|
| 7 |
pinned: false
|
|
|
|
|
|
|
| 8 |
---
|
| 9 |
|
| 10 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
---
|
| 2 |
title: README
|
| 3 |
+
emoji: 🛡️
|
| 4 |
+
colorFrom: blue
|
| 5 |
+
colorTo: indigo
|
| 6 |
sdk: static
|
| 7 |
pinned: false
|
| 8 |
+
license: cc-by-nc-4.0
|
| 9 |
+
short_description: Enterprise AI Detection & Response platform
|
| 10 |
---
|
| 11 |
|
| 12 |
+
AI security company building LLM guardrails, prompt injection detection, and real-time AI safety monitoring.
|
| 13 |
+
|
| 14 |
+
## What We Do
|
| 15 |
+
|
| 16 |
+
Vigil Guard is an Enterprise AI Detection & Response (AIDR) platform. We provide multi-layer security for LLM-based applications: prompt injection defense, PII protection, content moderation, and AI agent runtime controls.
|
| 17 |
+
|
| 18 |
+
## Open Source Models
|
| 19 |
+
|
| 20 |
+
We publish fine-tuned classifiers for prompt injection detection, optimized for production use with PyTorch and ONNX Runtime.
|
| 21 |
+
|
| 22 |
+
## Key Numbers
|
| 23 |
+
|
| 24 |
+
- 99% Detection Accuracy
|
| 25 |
+
- 97% Precision
|
| 26 |
+
- 96.7% F1 Score
|
| 27 |
+
- Content moderation in 8 languages
|
| 28 |
+
|
| 29 |
+
## Contact
|
| 30 |
+
|
| 31 |
+
- Web: [vigilguard.ai](https://www.vigilguard.ai)
|
| 32 |
+
- Email: contact@vigilguard.ai
|
| 33 |
+
- Location: Warsaw, Poland
|