The evaluation was conducted by running several experiments using open-source LLMs ([Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), [Falcon](https://huggingface.co/tiiuae/Falcon3-7B-Instruct)) and a closed-source LLM (GPT-4o mini) on the [ground truth consisting of 76 tables](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/main/secutable_v2/ground_truth), considering the three main tasks of semantic table interpretation:

- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA)

### Prompts

For our experiments, we designed a set of prompts to solve the STI tasks presented above.

- CPA prompts
```python
# GPT-4o mini, Mistral and Falcon prompt
# NB: `prompt` holds the task-specific question built beforehand
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in the Cybersecurity domain. "
        "Your task is to provide the uri of the data property or object property in the sepses knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "I don't want any other text in your response, just the value (uri or I don't know). "
        "Here are a few examples of your tasks: "
        "Question: Please, which SEPSes URI property has Name as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#name "
        "Question: Please, which SEPSes URI property has abstraction as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#abstraction "
        "Question: Please, which SEPSes URI property has Related Weaknesses as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#hasRelatedWeakness",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```
- CTA prompts

```python
# Mistral and Falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in the Cybersecurity domain. "
        "Your task is to provide the uri of the entity in the sepses knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know'; if you are unable to find the entity in the knowledge graph, please return NIL as the value. "
        "I don't want any other text in your response, just the value (uri or NIL or I don't know). "
        "Don't include explanation or any other text; note that the answer should be only one of the three cases above. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', '787', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['Alternate Terms'] "
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]

# GPT-4o mini prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in the Cybersecurity domain. "
        "Respond with a JSON object containing the key 'response'. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "Provide your answer without justification, notes, etc. Only the answer is required. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', '787', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['Alternate Terms'] "
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```
- CEA prompts
- CEA Wikidata prompt

```python
# GPT-4o mini, Mistral and Falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in the Cybersecurity domain. "
        "Your task is to provide the uri of the entity in the wikidata knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "I don't want any other text in your response, just the value (uri or I don't know). "
        "Don't include explanation or any other text; note that the answer should be only one of the cases above. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the wikidata uri of the Improper Input Validation entity? "
        "http://www.wikidata.org/entity/Q6007765 "
        "Question: Please what is the wikidata uri of the Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') entity? "
        "http://www.wikidata.org/entity/Q442856",
    },
    {"role": "user", "content": f"{prompt}"},
]
```
- CEA SEPSES prompts

In the first set of experiments, the LLMs must always reply to the question, without selective prediction, as presented in this picture: 

In the second set of experiments, the LLMs are also allowed to answer "I don't know" (selective prediction), as seen in this picture: .
## Results

This section presents the performance of the three LLMs (Mistral, Falcon3 and GPT-4o mini) on the three STI tasks (CEA, CPA, CTA) within the cybersecurity domain, using both Wikidata and SEPSES as knowledge graphs. Note that for Wikidata, only the CEA task was performed.

### CPA task results

This task consists of linking the relationship between two entities in the table to its corresponding property in the SEPSES knowledge graph.
The following table summarizes the baseline results obtained for this task:
| Model               | Precision | Recall | F1-score |
|---------------------|-----------|--------|----------|
| Mistral             | 0.403     | 0.400  | 0.402    |
| GPT-4o mini         | 0.505     | 0.502  | 0.504    |
| Falcon3-7b-instruct | 0.436     | 0.433  | 0.435    |
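For reference, precision, recall and F1 for such annotation tasks can be computed by exact-match comparison of predicted URIs against the ground truth. The sketch below is illustrative, not our evaluation code; under selective prediction, abstentions are simply omitted from the predictions.

```python
def evaluate(predictions: dict, gold: dict) -> dict:
    """Exact-match precision/recall/F1 over annotation targets.

    predictions: target -> predicted URI (abstentions omitted)
    gold:        target -> ground-truth URI
    """
    correct = sum(1 for k, uri in predictions.items() if gold.get(k) == uri)
    precision = correct / len(predictions) if predictions else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

When a model answers every target, the denominators coincide and precision equals recall, which is why the three columns agree in the baseline tables.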
### CTA task results

This task consists of linking the entity type of a column to its corresponding type in the SEPSES knowledge graph.
The baseline results obtained for this task are presented in the following table:
| Model               | Precision | Recall | F1-score |
|---------------------|-----------|--------|----------|
| Mistral             | 0.119     | 0.119  | 0.119    |
| GPT-4o mini         | 0.143     | 0.143  | 0.143    |
| Falcon3-7b-instruct | 0.133     | 0.133  | 0.133    |
### CEA task results

This task consists of linking each cell entity in the table to its corresponding entity in the knowledge graph. For this task, we used both the Wikidata and SEPSES KGs.

#### Results with Wikidata KG

The following table shows the performance of the LLMs on the CEA task using Wikidata as the KG.
| Model               | Precision | Recall | F1-score |
|---------------------|-----------|--------|----------|
| Mistral             | 0.011     | 0.011  | 0.011    |
| GPT-4o mini         | 0.014     | 0.014  | 0.014    |
| Falcon3-7b-instruct | 0.013     | 0.013  | 0.013    |
#### Results with SEPSES KG

The results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.

- **Results without Selective Prediction**

The results without selective prediction are presented in the following table. It shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.
| Model               | Precision | Recall | F1 Score |
|---------------------|-----------|--------|----------|
| Mistral             | 0.109     | 0.109  | 0.109    |
| GPT-4o mini         | 0.219     | 0.219  | 0.219    |
| Falcon3-7b-instruct | 0.319     | 0.319  | 0.319    |

<!--  -->
- **Results with Selective Prediction**

The results with selective prediction are presented in the following tables. They show the performance of the LLMs on the CEA task with the SEPSES knowledge graph.

| Model               | Precision | Recall | F1 Score |
|---------------------|-----------|--------|----------|
| Mistral             | 0.0019    | 0.0019 | 0.0019   |
| GPT-4o mini         | 0.0154    | 0.0154 | 0.0154   |
| Falcon3-7b-instruct | 0.0087    | 0.0087 | 0.0087   |

The following table shows the coverage of the LLMs under selective prediction, i.e., when the LLMs can say "I don't know" instead of answering.
| Model               | Coverage |
|---------------------|----------|
| Mistral             | 0.252    |
| GPT-4o mini         | 0.456    |
| Falcon3-7b-instruct | 0.270    |
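Coverage here is the fraction of targets the model actually attempts to answer rather than abstains on. A minimal sketch under that assumption (illustrative naming, not our evaluation code):

```python
def coverage(replies: list[str]) -> float:
    """Fraction of replies that are real answers, i.e. not 'I don't know'."""
    answered = [r for r in replies if "i don't know" not in r.strip().lower()]
    return len(answered) / len(replies) if replies else 0.0
```

Under this reading, a coverage of 0.456 means the model attempted an answer for 45.6% of the cells and abstained on the rest.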
## Artifacts

The code for reproducibility is available in the [SecuTable repository](https://gitlab.com/fidel.jiomekong/secutable).

## Citations