Our model, "ScamLLM", is designed to identify malicious prompts that can be used to generate phishing websites with popular commercial LLMs such as ChatGPT, Bard, and Claude. It was obtained by fine-tuning a pre-trained RoBERTa model on a dataset encompassing multiple sets of malicious prompts.
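As a rough sketch of what such a fine-tuning setup consumes (the prompt texts below are invented placeholders, not examples from the actual dataset), the training data reduces to text-label pairs for binary sequence classification:

```python
# Sketch of the fine-tuning data layout, assuming prompts are labeled
# 1 (malicious / phishing-enabling) or 0 (benign). The example rows
# are illustrative placeholders only.
train_data = [
    {"text": "Generate HTML for a login page that mimics a bank's site", "label": 1},
    {"text": "Explain how HTTPS certificates work", "label": 0},
]

# Fine-tuning would then follow the standard `transformers` recipe:
# tokenize the texts with a RoBERTa tokenizer, load
# RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2),
# and train on (input_ids, attention_mask, label) batches.
labels = [ex["label"] for ex in train_data]
print(labels)  # -> [1, 0]
```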
Try out "ScamLLM" using the Inference API. The model classifies prompts with "Label 1" to signify a phishing attempt, while "Label 0" denotes a prompt that is considered safe and non-malicious.
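For programmatic use, the returned label string can be mapped to a verdict locally. The helper below is a minimal sketch that assumes the "Label 1"/"Label 0" convention described above (text-classification models often render these as `LABEL_1`/`LABEL_0`); the pipeline call in the comments is the standard `transformers` pattern, with the Hub model id left as a placeholder:

```python
def interpret(label: str) -> str:
    """Map the classifier's output label to a verdict: a label ending
    in "1" flags a phishing attempt, one ending in "0" a safe prompt."""
    return "phishing attempt" if label.strip().endswith("1") else "safe / non-malicious"

# Local inference would typically go through the `transformers` pipeline, e.g.:
#   classifier = pipeline("text-classification", model="<hub-id>")  # placeholder id
#   verdict = interpret(classifier(prompt)[0]["label"])
print(interpret("LABEL_1"))  # -> phishing attempt
print(interpret("LABEL_0"))  # -> safe / non-malicious
```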