Update README.md

README.md (CHANGED)
@@ -35,7 +35,7 @@ SecBench Technical Paper: *[link](https://arxiv.org/abs/2412.20787)*.
-## SecBench Design
+## 1. SecBench Design
The following figure shows an overview of the SecBench design: it is a comprehensive benchmarking dataset that evaluates LLMs' capability in cybersecurity along four dimensions: *Multi-Level*, *Multi-Language*, *Multi-Form*, and *Multi-Domain*.

@@ -55,7 +55,7 @@ The following figure shows the overview of the SecBench design: it is a comprehe
- **Multi-Domain**: The questions in SecBench span 9 different domains: **D1. Security Management**, **D2. Data Security**, **D3. Network and Infrastructure Security**, **D4. Security Standards and Regulations**, **D5. Application Security**, **D6. Identity and Access Control**, **D7. Fundamental Software and Hardware and Technology**, **D8. Endpoint and Host Security**, and **D9. Cloud Security**. These domains were devised through several rounds of brainstorming and revision and are expected to cover most (if not all) sub-domains of cybersecurity. Note that we do not expect these domains to be *orthogonal*: one question could reasonably be labeled with several domains, but in our dataset each question is assigned only the single most relevant domain label from D1 to D9.
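The single-label scheme above can be sketched as a simple lookup. The record layout and the `domain` field name here are illustrative assumptions for the sketch, not the released data schema:

```python
# Sketch of SecBench's single-label domain scheme (D1-D9).
# The question record layout and the `domain` field name are assumptions,
# not the released data schema.
DOMAINS = {
    "D1": "Security Management",
    "D2": "Data Security",
    "D3": "Network and Infrastructure Security",
    "D4": "Security Standards and Regulations",
    "D5": "Application Security",
    "D6": "Identity and Access Control",
    "D7": "Fundamental Software and Hardware and Technology",
    "D8": "Endpoint and Host Security",
    "D9": "Cloud Security",
}

def domain_name(question: dict) -> str:
    """Resolve a question's single domain code to its human-readable name."""
    return DOMAINS[question["domain"]]

question = {"text": "Which control limits lateral movement?", "domain": "D3"}
print(domain_name(question))  # Network and Infrastructure Security
```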
-## Data Example
+## 2. Data Example
### MCQ Example

@@ -76,7 +76,7 @@ Following is one SAQ example, labeled in the domain of *Data Security* and the l
-## Benchmarking
+## 3. Benchmarking
Based on SecBench, we conducted extensive benchmarking of 16 state-of-the-art (SOTA) LLMs, including the GPT series and competitive open-source models.
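A minimal sketch of the kind of MCQ scoring such benchmarking implies, assuming a hypothetical `ask_model` callable (prompt in, predicted option letter out) and simple exact-match grading; SecBench's actual harness may differ:

```python
# Minimal MCQ accuracy sketch. `ask_model` is a hypothetical callable
# (prompt -> predicted option letter); the real evaluation harness may differ.
def mcq_accuracy(questions, ask_model):
    correct = 0
    for q in questions:
        prediction = ask_model(q["prompt"])  # e.g. "A", "B", "C", or "D"
        if prediction.strip().upper() == q["answer"]:
            correct += 1
    return correct / len(questions) if questions else 0.0

# Toy run with a stub model that always answers "A".
toy = [{"prompt": "Q1?", "answer": "A"}, {"prompt": "Q2?", "answer": "B"}]
print(mcq_accuracy(toy, lambda prompt: "A"))  # 0.5
```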

@@ -91,7 +91,7 @@ Based on SecBench, we conducted extensive benchmarking on 16 SOTA LLMs, includin
-## Released Data
+## 4. Released Data
We release a total of 3,000 questions from SecBench (under the [data] folder), including:
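Loading and inspecting the released questions might look like the following sketch; the JSON layout and the `domain` field are assumptions for illustration, so check the actual files under the data folder before relying on them:

```python
import json

# Hedged sketch: the JSON layout and `domain` field below are assumptions,
# not a documented SecBench schema. Adjust to the actual released files.
def load_questions(path: str) -> list[dict]:
    """Load a list of question records from a JSON file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def count_by_domain(questions: list[dict]) -> dict[str, int]:
    """Count questions per (assumed) `domain` field, e.g. 'D1'..'D9'."""
    counts: dict[str, int] = {}
    for q in questions:
        label = q.get("domain", "unknown")
        counts[label] = counts.get(label, 0) + 1
    return counts
```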