Update README.md
README.md CHANGED
@@ -1,5 +1,16 @@
 ---
 license: apache-2.0
+datasets:
+- Lilbullet/prompt-injection-artificial-GPTOSS120b
+base_model:
+- google/gemma-3-1b-pt
+tags:
+- promptinjection
+- security
+- redteaming
+- blueteaming
+- injection
+- detection
 ---
 # Gemma-3-1B Prompt Injection Classifier (Reasoning-Augmented)
 
@@ -92,4 +103,4 @@ print(tokenizer.decode(output[0]))
 
 ### Limitation
 
-Context Sensitivity: While the model can handle inputs up to its architectural limit, its reasoning accuracy is optimized for the 2,048-token window used during training.
+Context Sensitivity: While the model can handle inputs up to its architectural limit, its reasoning accuracy is optimized for the 2,048-token window used during training.
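The Context Sensitivity note can be made concrete with a small sketch: before classification, clamp the tokenized prompt to the 2,048-token window the card says the model was optimized for. The `clamp_to_window` helper and the fake token-id list below are hypothetical illustrations, not part of the model card; with a real Hugging Face tokenizer the same effect comes from `truncation=True, max_length=2048`.

```python
# Hedged sketch of the 2,048-token limit described in the Limitation section.
# `clamp_to_window` is a hypothetical helper, not an API of this model.
MAX_TOKENS = 2048  # training-time context window from the model card

def clamp_to_window(token_ids, max_tokens=MAX_TOKENS):
    """Truncate a token-id sequence to the window the model reasons best in."""
    return token_ids[:max_tokens]

# Pretend tokenized prompt that exceeds the window.
ids = list(range(5000))
clamped = clamp_to_window(ids)
print(len(clamped))  # 2048
```

Shorter inputs pass through unchanged, so the clamp is safe to apply unconditionally before inference.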