## Training Procedure

The `Ara-Prompt-Guard-V1` model was fine-tuned from the `meta-llama/Llama-Prompt-Guard-2-86M` base model using a custom training loop in which the classifier head was adapted to produce binary labels (safe/unsafe). Training used a concatenated dataset, exposing the model to a diverse mix of benign and malicious Arabic prompts, with the goal of improving detection accuracy on Arabic adversarial inputs.
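A minimal sketch of one step of such a custom binary-classification training loop. A plain linear layer stands in for the adapted classifier head over pooled encoder outputs; the hidden size, batch size, optimizer, and learning rate below are illustrative assumptions, not details from the actual training run:

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: the real hidden size and hyperparameters
# of Ara-Prompt-Guard-V1's training are not specified here.
hidden_size, batch_size = 64, 8
classifier_head = nn.Linear(hidden_size, 2)  # binary: 0 = safe, 1 = unsafe
optimizer = torch.optim.AdamW(classifier_head.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for pooled embeddings of tokenized Arabic prompts.
embeddings = torch.randn(batch_size, hidden_size)
labels = torch.randint(0, 2, (batch_size,))

# One step of the custom loop: forward pass, loss, backward pass, update.
logits = classifier_head(embeddings)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

predictions = logits.argmax(dim=-1)  # per-prompt safe/unsafe decision
```

In practice the base model would be loaded with `AutoModelForSequenceClassification.from_pretrained(..., num_labels=2)`, which re-initializes the classification head for the binary task while reusing the pretrained encoder weights.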
## Evaluation Results