---
datasets:
- davanstrien/aart-ai-safety-dataset
- obalcells/advbench
- databricks/databricks-dolly-15k
---
# Malicious & Jailbreaking Prompt Classifier

# Datasets Used

- [MaliciousInstruct](https://github.com/Princeton-SysML/Jailbreak_LLM/blob/main/data/MaliciousInstruct.txt)
- [AART](https://github.com/google-research-datasets/aart-ai-safety-dataset/blob/main/aart-v1-20231117.csv)
- [StrongREJECT](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv)
- [DAN](https://github.com/verazuo/jailbreak_llms/tree/main/data)
- [AdvBench](https://github.com/llm-attacks/llm-attacks/tree/main/data/advbench)
- [Databricks-Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k)