---
title: Neuralchemy
colorFrom: blue
colorTo: purple
sdk: static
pinned: false
---
# Neuralchemy
**AI Security & LLM Safety Solutions**
We build secure, reliable AI systems for the long term, focused on safety, reasoning, and developer tooling.
## Our Projects
### PromptShield
State-of-the-art prompt injection detection, achieving 100% accuracy on our evaluation set.
- **Dataset**: [neuralchemy/prompt-injection-benign-dataset](https://huggingface.co/datasets/neuralchemy/prompt-injection-benign-dataset) - 10,674 real-world attack samples
- **Models**: [neuralchemy/prompt-injection-detector-ml-models](https://huggingface.co/neuralchemy/prompt-injection-detector-ml-models) - Production-ready ML models
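As a rough illustration of the kind of screening a prompt injection detector performs, here is a minimal keyword-based baseline. This is a sketch only, not the PromptShield models (which are trained ML classifiers); the phrase list and the helper name `looks_like_injection` are illustrative assumptions.

```python
# Hypothetical baseline: flag prompts containing known injection phrases.
# The real PromptShield models are trained classifiers; this phrase list
# is an assumption for illustration, not the project's detection logic.

INJECTION_MARKERS = (
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
    "reveal your system prompt",
)

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt contains a known injection phrase."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

A keyword baseline like this catches only verbatim attack phrasings; the published models and the 10,674-sample dataset exist precisely because real attacks are paraphrased and obfuscated far beyond what a static list can match.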
## Mission
Making AI systems safer and more reliable through advanced security research and open-source tools.
## Links
- **Website**: https://www.neuralchemy.in
- **GitHub**: [github.com/neuralchemy](https://github.com/neuralchemy)
- **Documentation**: Coming soon
## Stats
- ✅ 100% accuracy on prompt injection detection
- ✅ 10,674+ training samples
- ✅ Open source & free for commercial use
## Contact
For partnerships, questions, or support, reach out via GitHub or our website.
---
*Building the future of AI security, one model at a time.* 🚀