---
license: cc-by-4.0
language:
  - en
tags:
  - security
---

# SecuTable: A Dataset for Semantic Table Interpretation in the Security Domain

## Dataset Overview

Security datasets are scattered across the Internet (CVE, CAPEC, CWE, etc.) and provided in CSV, JSON or XML formats. This makes it difficult to get a holistic view of how information is interconnected across the different data sources. Moreover, many datasets focus on specific attack vectors or limited environments, which limits generalisability, and the lack of detailed annotations makes it difficult to train supervised learning models.

To address these limitations, security data can be extracted from diverse data sources, organised in a tabular format and linked to existing knowledge graphs (KGs). This process is called Semantic Table Interpretation (STI). The KG schema helps align different terminologies and makes the relationships between concepts explicit.

Although humans can annotate tabular data manually, understanding the semantics of tables and annotating large volumes of data remains complex, resource-intensive and time-consuming. This has motivated scientific challenges such as the Tabular Data to Knowledge Graph Matching challenge (SemTab): https://www.cs.ox.ac.uk/isg/challenges/sem-tab/.

This repository provides the secu-table dataset. It aims to give a holistic view of security data extracted from security data sources and organised in tables. It is constructed using the pipeline shown in the *SecuTable Example* figure.

## Dataset

The current version of the dataset consists of three releases:

- The first release, available here, contains the first version of the dataset and is composed of 1135 tables.
- The second release, available here, consists of 1554 tables. It is used to evaluate the capabilities of open-source LLMs on semantic table interpretation tasks during the SemTab challenge (https://sem-tab-challenge.github.io/2025/) hosted by the 24th International Semantic Web Conference (ISWC 2025). It is composed of two folders. The first folder contains the ground truth, composed of 76 tables corresponding to 8922 entities; this subset shows people working with the secu-table dataset how the annotation should be done (a hypothetical sketch of a SemTab-style ground-truth row is shown after this list).
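
For readers unfamiliar with SemTab-style ground truth, the snippet below is a purely hypothetical sketch of what a cell-level (CEA) ground-truth row could look like. The file name `cea_gt.csv`, the column order and the example values are assumptions, not the actual secu-table layout; please check the release itself for the exact format.

```python
# Hypothetical sketch of reading a SemTab-style CEA ground-truth file.
# File name, column order and example values are assumptions, not the actual
# secu-table layout; check the release for the exact format.
import csv

with open("cea_gt.csv", newline="") as f:
    for table_id, row_idx, col_idx, uri in csv.reader(f):
        # e.g. "cwe_table_01", "3", "0", "http://www.wikidata.org/entity/Q6007765"
        print(table_id, row_idx, col_idx, uri)
```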

## Dataset evaluation

The evaluation was conducted by running several experiments with open-source LLMs (Mistral, Falcon) and a closed-source LLM (GPT-4o mini) on the ground truth of 76 tables, considering the three main tasks of semantic table interpretation (illustrated with a toy example after the list):

- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA)
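
To make the three tasks concrete, here is a toy example (invented for illustration; the URIs are taken from the few-shot examples in the prompts section below) of what CEA, CTA and CPA annotations look like for a small CWE table.

```python
# Toy illustration (invented) of the three STI tasks on a small CWE table.
# The URIs below are the ones used as few-shot examples in the prompts section.
table = {
    "CWE-ID": ["20", "22"],
    "Name": [
        "Improper Input Validation",
        "Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')",
    ],
}

# CTA: the type of the 'CWE-ID' column in the SEPSES knowledge graph
cta = {"CWE-ID": "http://w3id.org/sepses/vocab/ref/cwe#CWE"}

# CPA: the property linking the 'CWE-ID' column to the 'Name' column
cpa = {("CWE-ID", "Name"): "http://w3id.org/sepses/vocab/ref/cwe#name"}

# CEA: Wikidata entities for the cells of the 'Name' column
cea = {
    ("Name", 0): "http://www.wikidata.org/entity/Q6007765",
    ("Name", 1): "http://www.wikidata.org/entity/Q442856",
}
```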

## Prompts

For our experiments, we designed a set of prompts to solve the STI tasks presented above. A minimal sketch showing how these message lists can be sent to a model is given at the end of this section.

- CPA prompts

  ```python
  ## gpt4, mistral and Falcon prompt
  messages = [
      {
          "role": "system",
          "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
          "Your domain is to provide the uri of the data property or object property in the sepses knowledge graph for CWE entities"
          "please do not include any other text in your response, just give uri of the entity if you know it."
          "If you don't know the uri of the entity, please return 'I don't know' as the value."
          "I don't want any other text in your response, just the value(uri or I don't know)."
          "Here are few examples of your tasks: "
          "Question: Please, which SEPSes URI property has Name as value?"
          "http://w3id.org/sepses/vocab/ref/cwe#name"
          "Question: Please, which SEPSes URI property has abstraction as value?"
          "http://w3id.org/sepses/vocab/ref/cwe#abstraction"
          "Question: Please, which SEPSes URI property has Related Weaknesses as value?"
          "http://w3id.org/sepses/vocab/ref/cwe#hasRelatedWeakness"
      },
      {
          "role": "user",
          "content": f"\n{prompt}",
      },
  ]
  ```
    
- CTA prompts

  ```python
  ## mistral and falcon prompt
  messages = [
      {
          "role": "system",
          "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
          "Your domain is to provide the uri of the entity in the sepses knowledge graph for CWE entities."
          "please do not include any other text in your response, just give uri of the entity if you know it."
          "If you don't know the uri of the entity, please return 'I don't know' or unable to find entity to wikidata knowledge graph, please return NIL as the value."
          "I don't want any other text in your response, just the value(uri or NIL or I don't know)."
          "don't include explanation or any other text, Note that the answer should be only in the three case above."
          "Here are few examples of your tasks: "
          "Question: Please what is sepses uri of the entity type of  these entities: ['94', '59', 'CWE-ID', '200']"
          "http://w3id.org/sepses/vocab/ref/cwe#CWE"
          "Question: Please what is sepses uri of the entity type of  these entities: ['94', '59', '787', '200']"
          "http://w3id.org/sepses/vocab/ref/cwe#CWE"
          "Question: Please what is sepses uri of the entity type of  these entities: ['Alternate Terms']"
          "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction"
      },
      {
          "role": "user",
          "content": f"\n{prompt}",
      },
  ]
  ```
    
  ```python
  ## GPT prompt
  messages = [
      {
          "role": "system",
          "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
          "Respond with a JSON object containing the key 'response'."
          "please do not include any other text in your response, just give uri of the entity if you know it."
          "If you don't know the uri of the entity, please return 'I don't know' as the value."
          "Provide your answer without Justification, notes, etc. Only the answer is required."
          "Here are few examples of your tasks: "
          "Question: Please what is sepses uri of the entity type of  these entities: ['94', '59', 'CWE-ID', '200']"
          "http://w3id.org/sepses/vocab/ref/cwe#CWE"
          "Question: Please what is sepses uri of the entity type of  these entities: ['94', '59', '787', '200']"
          "http://w3id.org/sepses/vocab/ref/cwe#CWE"
          "Question: Please what is sepses uri of the entity type of  these entities: ['Alternate Terms']"
          "http://w3id.org/sepses/vocab/ref/cwe#ModeOfIntroduction"
      },
      {
          "role": "user",
          "content": f"\n{prompt}",
      },
  ]
  ```
    
- CEA prompts

  - CEA wikidata prompt

    ```python
    ## gpt-4o-mini, mistral and falcon prompt
    messages = [
        {
            "role": "system",
            "content": "You are a helpful assistant on semantic table interpretation in Cybersecurity domain."
            "Your domain is to provide the uri of the entity in the wikidata knowledge graph for CWE entities."
            "please do not include any other text in your response, just give uri of the entity if you know it."
            "If you don't know the uri of the entity, please return 'I don't know' as the value."
            "I don't want any other text in your response, just the value(uri or I don't know)."
            "don't include explanation or any other text, Note that the answer should be only in the three case above."
            "Here are few examples of your tasks: "
            "Question: Please what is wikidata uri of Improper Input Validation entity?"
            "http://www.wikidata.org/entity/Q6007765"
            "Question: Please what is wikidata uri of Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') entity?"
            "http://www.wikidata.org/entity/Q442856"
        },
        {"role": "user", "content": f"{prompt}"},
    ]
    ```
      
  - CEA SEPSES prompts

    In the first set of experiments, the LLMs are simply asked to answer each question, without selective prediction (see the *Without Selective Prediction* figure).

    In the second set of experiments, the LLMs are explicitly allowed to answer "I don't know" (selective prediction; see the *Selective Prediction* figure).
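
As mentioned above, here is a minimal sketch of how one of these message lists can be sent to a chat model. It assumes the OpenAI Python client and the gpt-4o-mini model; it is only an illustration (with a shortened system prompt), not the exact evaluation harness used for the experiments.

```python
# Minimal sketch (not the exact evaluation harness): send one of the message
# lists above to a chat model. Assumes the OpenAI Python client and that the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

prompt = "Please, which SEPSes URI property has Name as value?"
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant on semantic table interpretation "
                   "in Cybersecurity domain. Reply with the URI only, or 'I don't know'.",
    },
    {"role": "user", "content": f"\n{prompt}"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = response.choices[0].message.content.strip()
print(answer)  # expected: a URI such as http://w3id.org/sepses/vocab/ref/cwe#name
```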

## Results

This section presents the performance of three LLMs (Mistral, Falcon3 and GPT-4o mini) on the three STI tasks (CEA, CPA and CTA) in the cybersecurity domain, using both Wikidata and SEPSES as knowledge graphs. Note that only the CEA task was performed with Wikidata.
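
For reference, the precision, recall and F1 scores reported below follow the usual SemTab-style definitions: precision is computed over the annotations a system actually produced, recall over all target annotations. The helper below is only a generic sketch of that computation, not the official challenge scorer.

```python
# Generic sketch of SemTab-style scoring (not the official scorer).
# `predictions` and `ground_truth` map an annotation target (e.g. a cell or
# column identifier) to a URI.
def score(predictions: dict, ground_truth: dict) -> tuple[float, float, float]:
    correct = sum(
        1 for target, uri in predictions.items()
        if ground_truth.get(target, "").lower() == uri.lower()
    )
    precision = correct / len(predictions) if predictions else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```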

### CPA task results

This task consists of linking the relationship between two columns of the table to its corresponding property in the SEPSES knowledge graph. The following table summarizes the baseline results obtained for this task.

| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.403 | 0.400 | 0.402 |
| GPT-4o mini | 0.505 | 0.502 | 0.504 |
| Falcon3-7b-instruct | 0.436 | 0.433 | 0.435 |

### CTA task results

This task consists of linking the entity type of a column to its corresponding type in the SEPSES knowledge graph. The baseline results obtained for this task are presented in the following table.

| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.119 | 0.119 | 0.119 |
| GPT-4o mini | 0.143 | 0.143 | 0.143 |
| Falcon3-7b-instruct | 0.133 | 0.133 | 0.133 |

### CEA task results

This task consists of linking each cell of the table to its corresponding entity in the knowledge graph. For this task, we used both the Wikidata and SEPSES KGs.

#### Results with the Wikidata KG

The following table shows the performance of the LLMs on the CEA task using Wikidata as the KG.

| Model | Precision | Recall | F1-score |
|---|---|---|---|
| Mistral | 0.011 | 0.011 | 0.011 |
| GPT-4o mini | 0.014 | 0.014 | 0.014 |
| Falcon3-7b-instruct | 0.013 | 0.013 | 0.013 |

#### Results with the SEPSES KG

The results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.

- Results without selective prediction

  The following table shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph, without selective prediction.

  | Model | Precision | Recall | F1 Score |
  |---|---|---|---|
  | Mistral | 0.109 | 0.109 | 0.109 |
  | gpt-4o-mini | 0.219 | 0.219 | 0.219 |
  | falcon3-7b-instruct | 0.319 | 0.319 | 0.319 |
- Results with selective prediction

  The following table shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph, with selective prediction.

  | Model | Precision | Recall | F1 Score |
  |---|---|---|---|
  | Mistral | 0.0019 | 0.0019 | 0.0019 |
  | gpt-4o-mini | 0.0154 | 0.0154 | 0.0154 |
  | falcon3-7b-instruct | 0.0087 | 0.0087 | 0.0087 |

  The next table reports the coverage of each LLM, i.e. how often it actually gives an answer rather than saying "I don't know" when it does not know the answer (a sketch of this computation is given after the table).

  | Model | Coverage |
  |---|---|
  | Mistral | 0.252 |
  | gpt-4o-mini | 0.456 |
  | falcon3-7b-instruct | 0.270 |
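
As referenced above, coverage can be computed as the fraction of questions for which the model gives an actual answer instead of abstaining. The helper below is a minimal sketch, assuming the literal string "I don't know" is the abstention signal used in the prompts.

```python
# Minimal sketch of the coverage computation under selective prediction,
# assuming "I don't know" is the abstention signal used in the prompts.
def coverage(answers: list[str], abstain: str = "I don't know") -> float:
    answered = [a for a in answers if abstain.lower() not in a.lower()]
    return len(answered) / len(answers) if answers else 0.0

# Example: two answered questions out of four gives a coverage of 0.5.
print(coverage([
    "http://w3id.org/sepses/vocab/ref/cwe#name",
    "I don't know",
    "http://www.wikidata.org/entity/Q6007765",
    "I don't know",
]))
```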

## Artifacts

The code for reproducibility is available in the SecuTable repository.

## Citations