sumitranjan commited on
Commit
b8cb231
Β·
verified Β·
1 Parent(s): 7321cbd

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +0 -11
README.md CHANGED
@@ -18,7 +18,6 @@ tags:
18
 
19
  **PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β€” achieving **99.33% accuracy** during training.
20
 
21
- ---
22
  ---
23
 
24
  πŸ‘¨β€πŸ’» Creators
@@ -109,16 +108,6 @@ print("🟒 Safe" if predicted_class == 0 else "πŸ”΄ Unsafe")
109
 
110
  ---
111
 
112
- πŸ‘¨β€πŸ’» Creators
113
-
114
- - Sumit Ranjan
115
-
116
- - Raj Bapodra
117
-
118
- - Dr. Tojo Mathew
119
-
120
- ---
121
-
122
  ⚠️ Limitations
123
 
124
  - PromptShield is trained only for binary classification (safe vs. unsafe).
 
18
 
19
  **PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β€” achieving **99.33% accuracy** during training.
20
 
 
21
  ---
22
 
23
  πŸ‘¨β€πŸ’» Creators
 
108
 
109
  ---
110
 
 
 
 
 
 
 
 
 
 
 
111
  ⚠️ Limitations
112
 
113
  - PromptShield is trained only for binary classification (safe vs. unsafe).