traromal committed on
Commit 28a684b · verified · 1 Parent(s): 2842916

Update MODEL_CARD.md

Files changed (1)
  1. MODEL_CARD.md +1 -1
MODEL_CARD.md CHANGED
@@ -25,7 +25,7 @@ This model is fine-tuned to detect jailbreak attempts in LLM prompts. It classif
 ```python
 from transformers import pipeline
 
-classifier = pipeline("text-classification", model="./jailbreak_detector_production/final_model")
+classifier = pipeline("text-classification", model="traromal/AIccel_Jailbreak")
 result = classifier("Your prompt here")
 print(result)
 ```
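The change points the pipeline at the published Hub repo (`traromal/AIccel_Jailbreak`) instead of a local checkpoint path, so the snippet works for anyone after the model is pushed. A `text-classification` pipeline returns a list of dicts of the form `[{"label": ..., "score": ...}]`; a minimal sketch of thresholding that output, using a mocked result so no model download is needed (the label name `"jailbreak"` and the helper are assumptions for illustration, not the model's documented labels):

```python
def is_jailbreak(result, positive_label="jailbreak", threshold=0.5):
    """Return True if the top prediction matches the positive label
    with at least the given confidence.

    `result` is the list-of-dicts shape a text-classification
    pipeline returns, e.g. [{"label": "jailbreak", "score": 0.97}].
    """
    top = result[0]
    return top["label"] == positive_label and top["score"] >= threshold


# Mocked pipeline output (hypothetical label names):
mock = [{"label": "jailbreak", "score": 0.97}]
print(is_jailbreak(mock))  # True
```

The actual label strings depend on the model's `id2label` config, so check `classifier.model.config.id2label` before hard-coding a positive label.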