Tuathe committed on
Commit ac7c7f3 · verified · 1 Parent(s): 970d7e5

Update README.md

Files changed (1):
  1. README.md (+4, -21)
README.md CHANGED
@@ -1,31 +1,14 @@
 ---
-title: "LLMGuard – Prompt Injection + Moderation Toolkit"
+title: "LLMGuard – Prompt Moderation"
 emoji: 🛡️
 colorFrom: indigo
 colorTo: blue
 sdk: streamlit
 sdk_version: 1.32.0
-app_file: app.py
+app_file: app/dashboard/streamlit_app.py
 pinned: false
-license: mit
 ---
 
-# 🛡️ LLMGuard – Prompt Injection + Moderation Toolkit
+# 🛡️ LLMGuard – Prompt Moderation
 
-LLMGuard is a real-time LLM security and moderation toolkit designed to:
-- Detect and block prompt injection attacks
-- Moderate harmful, unsafe, or policy-violating content
-- Log moderation events for auditing
-
-**Features:**
-- 🔍 Prompt Injection Detection
-- ⚠️ Harmful Content Moderation
-- 🗂 Moderation History Panel
-- 🌐 Deployable via Docker / Hugging Face Spaces / Streamlit Cloud
-
-## 🚀 How to run locally
-
-```bash
-pip install -r requirements.txt
-streamlit run app.py
-
+A real-time prompt moderation tool for LLM safety, powered by a custom classifier.