---
license: cc-by-4.0
language:
- en
tags:
- security
---
# SecuTable: A Dataset for Semantic Table Interpretation in Security Domain
## Dataset Overview
Security datasets are scattered across the Internet (CVE, CAPEC, CWE, etc.) and provided in CSV, JSON or XML formats. This makes it difficult to get a holistic view of the interconnectedness of information across different data sources. Moreover, many datasets focus on specific attack vectors or limited environments, limiting generalisability, and most lack detailed annotations, making it difficult to train supervised learning models.
To address these limitations, security data can be extracted from diverse data sources, organised in a tabular format and linked to existing knowledge graphs (KGs). This process is called Semantic Table Interpretation (STI). The KG schema helps align different terminologies and makes the relationships between concepts explicit.
Although humans can manually annotate tabular data, understanding the semantics of tables and annotating large volumes of data remains complex, resource-heavy and time-consuming. This has led to scientific challenges such as the Tabular Data to Knowledge Graph Matching challenge (SemTab): [https://www.cs.ox.ac.uk/isg/challenges/sem-tab/](https://www.cs.ox.ac.uk/isg/challenges/sem-tab/).
This repository provides the secu-table dataset. It aims to give a holistic view of security data extracted from security data sources and organized in tables. It is constructed using the pipeline presented in the following figure:
## Dataset
The current version of the dataset consists of two releases:
- The first release, available [here](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/v1.0), is composed of 1135 tables.
- The second release, available [here](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/main), consists of 1554 tables. This release is being used to evaluate the capabilities of open-source LLMs on semantic table interpretation tasks during the SemTab challenge [https://sem-tab-challenge.github.io/2025/](https://sem-tab-challenge.github.io/2025/) hosted by the 24th International Semantic Web Conference (ISWC) 2025. It is composed of two folders. The first folder contains the ground truth, composed of 76 tables corresponding to 8922 entities. This subset shows people working with the secu-table dataset how the annotation should be done.
## Dataset evaluation
The evaluation was conducted by running several experiments using open source LLMs ([Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), [Falcon](https://huggingface.co/tiiuae/Falcon3-7B-Instruct)) and closed source LLM (GPT-4o mini) on the [ground truth consisting of 76 tables](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/main/secutable_v2/ground_truth) by considering the three main tasks of semantic table interpretation:
- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA).
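To make the three tasks concrete, the sketch below illustrates, on a toy CWE table, what each kind of annotation produces. The table values and the CWE entity URIs are hypothetical illustrations (only the `#CWE` type and `#name` property URIs are taken from the prompts quoted in this document).

```python
# A toy CWE table: a header row plus two data rows (illustrative values only).
table = [
    ["CWE-ID", "Name"],
    ["79", "Improper Neutralization of Input During Web Page Generation"],
    ["89", "Improper Neutralization of Special Elements used in an SQL Command"],
]

# CEA: link individual cells (row, column) to KG entities (hypothetical URIs).
cea = {
    (1, 0): "http://w3id.org/sepses/vocab/ref/cwe#CWE-79",
    (2, 0): "http://w3id.org/sepses/vocab/ref/cwe#CWE-89",
}

# CTA: link a column to its entity type in the KG.
cta = {0: "http://w3id.org/sepses/vocab/ref/cwe#CWE"}

# CPA: link the relation between two columns to a KG property.
cpa = {(0, 1): "http://w3id.org/sepses/vocab/ref/cwe#name"}
```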
### Prompts
For our experiments, we designed a set of prompts to solve the STI tasks presented above.
- CPA prompts
```python
## gpt-4o mini, mistral and falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the data property or object property in the sepses knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "I don't want any other text in your response, just the value (uri or I don't know). "
        "Here are a few examples of your tasks: "
        "Question: Please, which SEPSes URI property has Name as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#name "
        "Question: Please, which SEPSes URI property has abstraction as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#abstraction "
        "Question: Please, which SEPSes URI property has Related Weaknesses as value? "
        "http://w3id.org/sepses/vocab/ref/cwe#hasRelatedWeakness",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```
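The `prompt` variable interpolated into the user message is not defined in the snippet above. A plausible sketch, following the question pattern of the few-shot examples, would be (the helper name is our own, not part of the released code):

```python
def build_cpa_prompt(header: str) -> str:
    """Build a CPA question for a table header, mirroring the few-shot examples."""
    return f"Please, which SEPSes URI property has {header} as value?"

prompt = build_cpa_prompt("Name")
# → "Please, which SEPSes URI property has Name as value?"
```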
- CTA prompt
```python
## mistral and falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the entity in the sepses knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know'; if you are unable to find the entity in the knowledge graph, please return NIL as the value. "
        "I don't want any other text in your response, just the value (uri or NIL or I don't know). "
        "Don't include explanation or any other text. Note that the answer should be only one of the three cases above. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', '787', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['Alternate Terms'] "
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]

# GPT prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Respond with a JSON object containing the key 'response'. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "Provide your answer without justification, notes, etc. Only the answer is required. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', 'CWE-ID', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['94', '59', '787', '200'] "
        "http://w3id.org/sepses/vocab/ref/cwe#CWE "
        "Question: Please what is the sepses uri of the entity type of these entities: ['Alternate Terms'] "
        "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction",
    },
    {
        "role": "user",
        "content": f"\n{prompt}",
    },
]
```
- CEA prompt
- CEA wikidata prompt
```python
## gpt-4o-mini, mistral and falcon prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
        "Your domain is to provide the uri of the entity in the wikidata knowledge graph for CWE entities. "
        "Please do not include any other text in your response, just give the uri of the entity if you know it. "
        "If you don't know the uri of the entity, please return 'I don't know' as the value. "
        "I don't want any other text in your response, just the value (uri or I don't know). "
        "Don't include explanation or any other text. Note that the answer should be only one of the cases above. "
        "Here are a few examples of your tasks: "
        "Question: Please what is the wikidata uri of the Improper Input Validation entity? "
        "http://www.wikidata.org/entity/Q6007765 "
        "Question: Please what is the wikidata uri of the Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') entity? "
        "http://www.wikidata.org/entity/Q442856",
    },
    {"role": "user", "content": f"{prompt}"},
]
```
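The prompts ask for a bare URI, "I don't know", or NIL, but raw LLM outputs often carry extra whitespace or quotes. A minimal post-processing sketch (our own addition, not part of the released code) could be:

```python
def normalize_answer(raw: str) -> str:
    """Map a raw LLM reply to a clean URI, or NIL for abstentions and garbage."""
    answer = raw.strip().strip('"').strip("'")
    if not answer or answer.lower() == "i don't know":
        return "NIL"  # treat abstention and empty replies as NIL
    if answer.startswith("http://") or answer.startswith("https://"):
        return answer  # looks like a URI; keep as-is
    return "NIL"  # anything else is not a usable annotation
```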
- CEA Sepses prompts
In the first set of experiments, we only consider whether the LLMs can answer the question, without selective prediction.
In the second set of experiments, the LLMs are allowed to answer "I don't know" (selective prediction).
## Results
This section presents the performance of three LLMs: Mistral, Falcon3 and gpt-4o-mini, on the three STI tasks (CEA, CPA, CTA) within the cybersecurity domain, using both Wikidata
and SEPSES as knowledge graphs. It should be noted that for Wikidata, only the CEA task was performed.
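The precision, recall and F1 scores below are assumed to follow the usual SemTab-style micro-averaged definitions: correct annotations over submitted annotations, and correct annotations over ground-truth targets, respectively. A sketch under that assumption:

```python
def precision_recall_f1(predictions: dict, ground_truth: dict):
    """Micro-averaged scores; both dicts map annotation targets to URIs."""
    submitted = len(predictions)
    correct = sum(1 for k, v in predictions.items() if ground_truth.get(k) == v)
    precision = correct / submitted if submitted else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```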
### CPA task results
This task consists of linking the relationship between two entities in the table to its corresponding property in the SEPSES knowledge graph.
The following table summarizes the baseline results obtained for this task:
| Model | Precision | Recall | F1-score |
|-----------------------|-----------|--------|----------|
| Mistral | 0.403 | 0.400 | 0.402 |
| GPT-4o mini | 0.505 | 0.502 | 0.504 |
| Falcon3-7b-instruct | 0.436 | 0.433 | 0.435 |
### CTA task results
This task consists of linking the entity type of a column to its corresponding type in the SEPSES knowledge graph.
The baseline results obtained for this task are presented in the following table:
| Model | Precision | Recall | F1-score |
|-----------------------|-----------|--------|----------|
| Mistral | 0.119 | 0.119 | 0.119 |
| GPT-4o mini | 0.143 | 0.143 | 0.143 |
| Falcon3-7b-instruct | 0.133 | 0.133 | 0.133 |
### CEA task results
This task consists of linking each cell entity in the table to its corresponding entity in the knowledge graph. For this task, we used both the Wikidata and SEPSES KGs.
#### Results with wikidata KG
The following table shows the performance of the LLMs on the CEA task using Wikidata as the KG.
| Model | Precision | Recall | F1-score |
|-----------------------|-----------|--------|----------|
| Mistral | 0.011 | 0.011 | 0.011 |
| GPT-4o mini | 0.014 | 0.014 | 0.014 |
| Falcon3-7b-instruct | 0.013 | 0.013 | 0.013 |
#### Results with Sepses KG
The results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.
- <strong> Results without Selective Prediction</strong>
The results without selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.
| | Precision | Recall | F1 Score |
|----------|-----------|-----------|-----------|
|Mistral| 0.109 | 0.109 | 0.109 |
|gpt-4o-mini | 0.219 | 0.219 | 0.219 |
|falcon3-7b-instruct | 0.319 | 0.319 | 0.319 |
- <strong> Results with Selective Prediction </strong>
The results with selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.
| | Precision | Recall | F1 Score |
|----------|-----------|-----------|-----------|
|Mistral| 0.0019 | 0.0019 | 0.0019 |
|gpt-4o-mini | 0.0154 | 0.0154 | 0.0154 |
|falcon3-7b-instruct | 0.0087| 0.0087 | 0.0087 |
The following table shows the coverage of the LLMs (the selective prediction score), given that the LLMs can say "I don't know" when they do not know the answer.
| | Coverage |
|----------|-----------|
|Mistral| 0.252 |
|gpt-4o-mini | 0.456 |
|falcon3-7b-instruct | 0.270 |
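Coverage, as used above, is the fraction of questions the model actually attempts, i.e., does not answer with an abstention such as "I don't know" or NIL. A minimal sketch under that reading:

```python
def coverage(answers: list) -> float:
    """Fraction of answers that are attempts rather than abstentions."""
    abstentions = {"i don't know", "nil", ""}
    attempted = [a for a in answers if a.strip().lower() not in abstentions]
    return len(attempted) / len(answers) if answers else 0.0
```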
## Artifacts
The code for reproducibility is available at: [Secutable repository](https://gitlab.com/fidel.jiomekong/secutable)
## Citations