Update README.md
Browse files
README.md
CHANGED
|
@@ -18,7 +18,6 @@ tags:
|
|
| 18 |
|
| 19 |
**PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β achieving **99.33% accuracy** during training.
|
| 20 |
|
| 21 |
-
---
|
| 22 |
---
|
| 23 |
|
| 24 |
π¨βπ» Creators
|
|
@@ -109,16 +108,6 @@ print("π’ Safe" if predicted_class == 0 else "π΄ Unsafe")
|
|
| 109 |
|
| 110 |
---
|
| 111 |
|
| 112 |
-
π¨βπ» Creators
|
| 113 |
-
|
| 114 |
-
- Sumit Ranjan
|
| 115 |
-
|
| 116 |
-
- Raj Bapodra
|
| 117 |
-
|
| 118 |
-
- Dr. Tojo Mathew
|
| 119 |
-
|
| 120 |
-
---
|
| 121 |
-
|
| 122 |
β οΈ Limitations
|
| 123 |
|
| 124 |
- PromptShield is trained only for binary classification (safe vs. unsafe).
|
|
|
|
| 18 |
|
| 19 |
**PromptShield** is a prompt classification model designed to detect **unsafe**, **adversarial**, or **prompt injection** inputs. Built on the `xlm-roberta-base` transformer, it delivers high-accuracy performance in distinguishing between **safe** and **unsafe** prompts β achieving **99.33% accuracy** during training.
|
| 20 |
|
|
|
|
| 21 |
---
|
| 22 |
|
| 23 |
π¨βπ» Creators
|
|
|
|
| 108 |
|
| 109 |
---
|
| 110 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 111 |
β οΈ Limitations
|
| 112 |
|
| 113 |
- PromptShield is trained only for binary classification (safe vs. unsafe).
|