
Dataset Card for LineVul Dataset (2008-2026)

Note

  • File to use: my_dataset_clean.csv

Dataset Details

Dataset Description

This dataset consists of C++ code functions curated for software vulnerability detection. It is designed for fine-tuning RoBERTa-based models into LineVul models, which identify security vulnerabilities at the function or line level. The data spans 2008 to 2026, capturing diverse C++ coding patterns and historical security flaws.

  • Curated by: Myself
  • Funded by: Self-funded for Academic Research
  • Language(s) (NLP): C++ (Source Code)
  • License: MIT (Methodology based on Fan et al., 2020)


Uses

Direct Use

The primary use case is fine-tuning pretrained transformer language models such as RoBERTa to classify code snippets as "vulnerable" or "benign." It is also useful for differential analysis between patched and unpatched code.
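For differential analysis, a vulnerable function (processed_func) and its patched counterpart (vul_func_with_fix) can be diffed directly. A minimal sketch using Python's standard difflib; the function name and its line-removal heuristic are my own illustration, not part of the dataset:

```python
import difflib

def flaw_lines(func_before: str, func_after: str) -> list[str]:
    """Heuristically recover candidate flaw lines: the lines that
    the security fix removed from the vulnerable function."""
    diff = difflib.unified_diff(
        func_before.splitlines(),
        func_after.splitlines(),
        lineterm="",
    )
    # Keep removed lines ("-...") but skip the "---" file header.
    return [line[1:] for line in diff
            if line.startswith("-") and not line.startswith("---")]
```

Under these assumptions, comparing the two code columns this way is one route to line-level labels of the kind LineVul trains on.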

Out-of-Scope Use

This dataset is not intended for the automated generation of malicious exploits or for use in unauthorized security testing.

Dataset Structure

The dataset is provided in a single CSV file containing approximately 104,000 examples.

Field Descriptions

  • processed_func: The raw C++ function containing the vulnerability.
  • vul_func_with_fix: The patched C++ function with the security fix applied.
  • target: Binary label: 1 (vulnerable, before fix) or 0 (benign, after fix).

Recommended Split

  • Training: 80% (~83,200 samples)
  • Testing/Validation: 20% (~20,800 samples)
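The recommended 80/20 split can be produced reproducibly with the standard library alone. A sketch (the helper names are my own, and the loader assumes the my_dataset_clean.csv layout described above):

```python
import csv
import random

def load_rows(path: str) -> list[dict]:
    """Read the dataset CSV into a list of row dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def split_rows(rows: list, train_frac: float = 0.8, seed: int = 42):
    """Shuffle deterministically, then cut into train/test partitions."""
    rng = random.Random(seed)
    indices = list(range(len(rows)))
    rng.shuffle(indices)
    cut = int(len(rows) * train_frac)
    return ([rows[i] for i in indices[:cut]],
            [rows[i] for i in indices[cut:]])
```

Fixing the seed keeps the partition stable across runs, which matters when comparing fine-tuned checkpoints.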

Dataset Creation

Curation Rationale

Developed as part of a Master's degree project to provide a high-quality, long-term dataset (2008-2026) for training state-of-the-art vulnerability detection models.

Source Data

Data Collection and Processing

The data was crawled following the methodology of the BigVul paper. It targets GitHub commits linked to Common Vulnerabilities and Exposures (CVE) entries.

  1. Extraction: Functions were extracted from security-relevant commits.
  2. Labeling: Snippets before the fix are labeled 1, and snippets after the fix are labeled 0.
  3. Cleaning: Removal of non-C++ code and duplicate entries.
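Steps 2 and 3 above can be sketched as follows. The helper names and the whitespace-insensitive dedup key are illustrative assumptions, not the exact pipeline used to build the CSV:

```python
import hashlib

def label_pair(func_before: str, func_after: str) -> list[dict]:
    """Turn one fixing commit's before/after pair into labeled examples:
    the pre-fix snippet is labeled 1 (vulnerable), the post-fix 0 (benign)."""
    return [
        {"processed_func": func_before, "target": 1},
        {"processed_func": func_after, "target": 0},
    ]

def dedupe(examples: list[dict]) -> list[dict]:
    """Drop exact duplicates, ignoring whitespace-only differences."""
    seen, unique = set(), []
    for ex in examples:
        key = hashlib.sha256(
            "".join(ex["processed_func"].split()).encode()
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(ex)
    return unique
```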

Who are the source data producers?

The data originates from open-source developers contributing to public C++ repositories and security researchers documenting CVEs.

Bias, Risks, and Limitations

  • CVE-Centric: The dataset only includes vulnerabilities that were officially caught and patched; it may not represent undiscovered logic flaws.
  • Label Leakage: Because a target 0 example is often just the patched (after-fix) version of a target 1 example, models may learn to recognize "fix patterns" (like an added null check) rather than the vulnerability itself.
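One common mitigation for the leakage risk above, and an assumption on my part rather than something this dataset enforces, is to split at the commit level instead of the row level, so a vulnerable function and its own fix never straddle the train/test boundary. Assuming the CSV retains a commit_id column, a sketch:

```python
import random

def split_by_commit(rows: list[dict], train_frac: float = 0.8,
                    seed: int = 0) -> tuple[list[dict], list[dict]]:
    """Group-wise split: every row sharing a commit_id lands on one
    side, so a before-fix function and its patched twin are never
    separated across train and test."""
    commits = sorted({row["commit_id"] for row in rows})
    rng = random.Random(seed)
    rng.shuffle(commits)
    cut = int(len(commits) * train_frac)
    train_commits = set(commits[:cut])
    train = [r for r in rows if r["commit_id"] in train_commits]
    test = [r for r in rows if r["commit_id"] not in train_commits]
    return train, test
```

The resulting row-level split will only approximate 80/20, since commits contribute differing numbers of functions.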