Update MODEL_CARD.md
MODEL_CARD.md (+1 -1)

@@ -25,7 +25,7 @@ This model is fine-tuned to detect jailbreak attempts in LLM prompts. It classif
 ```python
 from transformers import pipeline

-classifier = pipeline("text-classification", model="
+classifier = pipeline("text-classification", model="traromal/AIccel_Jailbreak")
 result = classifier("Your prompt here")
 print(result)
 ```
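For readers of the card, the fixed snippet's `result` is the standard `transformers` text-classification output: a list with one `{"label": ..., "score": ...}` dict per input string. A minimal sketch of acting on that output is below; note the label name `"JAILBREAK"` and the 0.5 threshold are assumptions for illustration, since the card does not document the model's label set.

```python
def is_jailbreak(result, positive_label="JAILBREAK", threshold=0.5):
    """Return True if the top prediction flags the prompt as a jailbreak.

    `result` is the list-of-dicts returned by a text-classification
    pipeline call on a single string, e.g. [{"label": ..., "score": ...}].
    The positive label name and threshold are hypothetical defaults.
    """
    top = result[0]  # one dict per input string
    return top["label"] == positive_label and top["score"] >= threshold

# Example with a mocked pipeline output (no model download needed):
print(is_jailbreak([{"label": "JAILBREAK", "score": 0.98}]))  # True
```

In practice you would pass `classifier("Your prompt here")` directly to `is_jailbreak`, after checking the actual label names in the model's `config.json`.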