---
dataset_info:
  features:
    - name: conversations
      list:
        - name: from
          dtype: string
        - name: value
          dtype: string
    - name: source
      dtype: string
    - name: score
      dtype: float64
  splits:
    - name: train
      num_bytes: 93911454
      num_examples: 26296
    - name: test
      num_bytes: 10913928
      num_examples: 2922
  download_size: 34387789
  dataset_size: 104825382
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - text-generation
  - question-answering
language:
  - en
tags:
  - code
  - code vulnerability
  - C
  - C++
  - cpp
  - en
pretty_name: Code Vulnerability Balanced CWE-Enriched Conversation Dataset
size_categories:
  - 10K<n<100K
---

# Code Vulnerability Balanced — CWE-Enriched Conversation Dataset

## 📌 Overview

This dataset is a balanced and shuffled version of ChamaraVishwajithRajapaksha/Code-Vulnerability-FineTune, which itself was derived from the original ChamaraVishwajithRajapaksha/Code_Vulnerability_Dataset (330k rows, sourced from DiverseVul + MITRE CWE enrichment).

The original fine-tuning dataset was imbalanced — the counts of Vulnerable and Safe samples were unequal — and its rows were not shuffled. This dataset addresses both issues:

- **Balanced** — equal numbers of Vulnerable and Safe (patched) samples
- **Shuffled** — rows are randomly shuffled to prevent ordering bias during training
- **ShareGPT / FineTome format** — ready for fine-tuning with Unsloth, TRL, and similar frameworks

## 🎯 Use Cases

- Fine-tuning LLMs for security code review
- Training vulnerability detection models
- Building code-aware security assistants
- Research in automated static analysis and secure coding

## 📊 Dataset Statistics

| Property | Value |
| --- | --- |
| Source dataset | `ChamaraVishwajithRajapaksha/Code-Vulnerability-FineTune` |
| Balancing strategy | Undersample majority class to match minority class |
| Shuffled | Yes (random seed 42) |
| Format | ShareGPT (conversations) |
| Languages | C, C++ |
| Splits | train (90%) · test (10%) |
| License | MIT |

## 🔄 What Changed From the Source Dataset

| Change | Description |
| --- | --- |
| Balancing | The source dataset had an unequal number of Vulnerable vs Safe samples. This dataset undersamples the majority class so both are equal in count. |
| Shuffling | All rows are randomly shuffled (seed 42) before splitting, preventing the model from learning ordering patterns. |
| Same format | The ShareGPT conversation structure is preserved exactly as in the source dataset. |

## 🗂️ Data Format

Each row follows the ShareGPT conversation format with two turns:

```json
{
  "conversations": [
    {
      "from": "human",
      "value": "Analyze the following code snippet and identify any security vulnerabilities...\n\n```c\n<source code>\n```"
    },
    {
      "from": "gpt",
      "value": "## Security Vulnerability Analysis\n\n⚠️ This code sample is marked as **Vulnerable**.\n\n### 🔍 Vulnerability Classification\n- **CWE ID**: CWE-787\n- **Type**: Out-of-bounds Write\n- **Severity**: High\n..."
    }
  ],
  "source": "code_vulnerability_cwe",
  "score": 4.8
}
```

### Fields

| Field | Type | Description |
| --- | --- | --- |
| `conversations` | list | List of 2 conversation turns |
| `conversations[0].from` | str | Always `"human"` |
| `conversations[0].value` | str | Instruction + C/C++ code block |
| `conversations[1].from` | str | Always `"gpt"` |
| `conversations[1].value` | str | Structured vulnerability analysis |
| `source` | str | Always `"code_vulnerability_cwe"` |
| `score` | float | Quality score (4.8) |
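These field guarantees can be checked mechanically. A minimal sketch, using a hypothetical stand-in row rather than one loaded from the Hub:

```python
# Hypothetical stand-in for one dataset row, following the schema above.
sample = {
    "conversations": [
        {"from": "human", "value": "Analyze the following code snippet..."},
        {"from": "gpt", "value": "## Security Vulnerability Analysis\n..."},
    ],
    "source": "code_vulnerability_cwe",
    "score": 4.8,
}

def check_row(row):
    """Raise AssertionError if the row deviates from the documented schema."""
    assert len(row["conversations"]) == 2
    assert row["conversations"][0]["from"] == "human"
    assert row["conversations"][1]["from"] == "gpt"
    assert all(isinstance(turn["value"], str) for turn in row["conversations"])
    assert row["source"] == "code_vulnerability_cwe"
    assert isinstance(row["score"], float)

check_row(sample)  # passes silently for a conforming row
```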

## 🔄 Preprocessing Pipeline

### Step 1 — Load

Load the source dataset (`ChamaraVishwajithRajapaksha/Code-Vulnerability-FineTune`) from the Hugging Face Hub.

### Step 2 — Separate by Label

Split all rows into two groups:

- Vulnerable samples
- Safe (patched) samples

### Step 3 — Balance

Undersample the larger group so that both groups have an equal number of rows, resulting in a 50/50 class distribution.

### Step 4 — Shuffle

Combine the two balanced groups and shuffle the entire dataset with a fixed random seed (42) for reproducibility.

### Step 5 — Split & Push

- 90% / 10% train-test split (random seed 42)
- Pushed to the Hugging Face Hub in Parquet format
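Steps 2–5 can be sketched in plain Python. This is a simplified illustration, not the released pipeline (which presumably used the 🤗 `datasets` API), and the `is_vulnerable` predicate is a placeholder for however labels are identified:

```python
import random

def balance_shuffle_split(rows, is_vulnerable, seed=42, test_frac=0.1):
    """Undersample the majority class, shuffle, then split (steps 2-5)."""
    vuln = [r for r in rows if is_vulnerable(r)]
    safe = [r for r in rows if not is_vulnerable(r)]
    n = min(len(vuln), len(safe))            # undersample the larger group
    rng = random.Random(seed)                # fixed seed for reproducibility
    rng.shuffle(vuln)
    rng.shuffle(safe)
    balanced = vuln[:n] + safe[:n]           # 50/50 class distribution
    rng.shuffle(balanced)                    # shuffle before splitting
    cut = int(len(balanced) * (1 - test_frac))
    return balanced[:cut], balanced[cut:]    # train, test

# Tiny synthetic stand-in for the real source rows (6 Vulnerable, 4 Safe)
rows = [{"label": "Vulnerable"}] * 6 + [{"label": "Safe"}] * 4
train, test = balance_shuffle_split(rows, lambda r: r["label"] == "Vulnerable",
                                    test_frac=0.25)
```

After balancing, both splits together hold four rows of each class regardless of the original skew.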

## 📋 CWE Categories Covered

The dataset covers a wide range of Common Weakness Enumeration types including:

| CWE ID | Vulnerability Type |
| --- | --- |
| CWE-787 | Out-of-bounds Write |
| CWE-416 | Use After Free |
| CWE-125 | Out-of-bounds Read |
| CWE-476 | NULL Pointer Dereference |
| CWE-119 | Improper Restriction of Operations within the Bounds of a Memory Buffer |
| CWE-284 | Improper Access Control |
| CWE-20 | Improper Input Validation |
| CWE-78 | OS Command Injection |
| CWE-327 | Use of a Broken or Risky Cryptographic Algorithm |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') |
| CWE-122 | Heap-based Buffer Overflow |
| CWE-362 | Race Condition |
| CWE-22 | Path Traversal |
| CWE-287 | Improper Authentication |
| CWE-319 | Cleartext Transmission of Sensitive Information |

...plus 30+ more CWE types.
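Since each assistant turn names its CWE on a `**CWE ID**` line (per the data-format example above), the per-CWE distribution can be tallied with a small regex. The marker pattern is an assumption about the text, not a documented API, and the rows below are hypothetical stand-ins:

```python
import re
from collections import Counter

# Assumed marker line, taken from the data-format example above.
CWE_RE = re.compile(r"\*\*CWE ID\*\*:\s*(CWE-\d+)")

def cwe_of(row):
    """Return the CWE ID named in the assistant turn, or None if absent."""
    match = CWE_RE.search(row["conversations"][1]["value"])
    return match.group(1) if match else None

# Two hypothetical rows standing in for real dataset rows.
rows = [
    {"conversations": [{"from": "human", "value": "..."},
                       {"from": "gpt", "value": "- **CWE ID**: CWE-416\n- **Type**: Use After Free"}]},
    {"conversations": [{"from": "human", "value": "..."},
                       {"from": "gpt", "value": "- **CWE ID**: CWE-416\n- **Type**: Use After Free"}]},
]
counts = Counter(cwe_of(r) for r in rows)
```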

## 🧪 Sample Example

### Human Turn

Analyze the following code snippet and identify any security vulnerabilities.
Provide a detailed explanation of the vulnerability type, its severity,
potential impact, and the CWE classification.

```c
static boolean ReadICCProfile(j_decompress_ptr jpeg_info) {
  char magick[12];
  ErrorManager *error_manager;
  ExceptionInfo *exception;
  Image *image;
  ...
}
```

### Assistant Turn

```markdown
## Security Vulnerability Analysis

⚠️ This code sample is marked as **Vulnerable**.

### 🔍 Vulnerability Classification
- **CWE ID**: CWE-416
- **Type**: Use After Free
- **Severity**: High
- **Category**: Memory Corruption
- **Affected Languages**: C, C++

### 📋 Description
The product reuses or references memory after it has been freed. At some point
afterward, the memory may be allocated again and saved in another pointer, while
the original pointer references a location somewhere within the new allocation.

### 💥 Potential Impact
- Execute Unauthorized Code or Commands
- Read Memory
- Modify Memory
- DoS: Crash, Exit, or Restart

### 🛡️ Recommendation
Review the code for Use After Free patterns. Ensure proper bounds checking,
input validation, and memory management practices are applied as recommended
by the CWE guidelines for CWE-416.
```

## 🚀 Usage

### Load with 🤗 Datasets

```python
from datasets import load_dataset

dataset = load_dataset("ChamaraVishwajithRajapaksha/Code-Vulnerability-Balanced")
print(dataset)
# DatasetDict({
#     train: Dataset({features: ['conversations', 'source', 'score'], num_rows: ...}),
#     test:  Dataset({features: ['conversations', 'source', 'score'], num_rows: ...})
# })
```

### Access a Sample

```python
sample = dataset['train'][0]

# Print the human question (code to analyze)
print(sample['conversations'][0]['value'])

# Print the assistant answer (vulnerability analysis)
print(sample['conversations'][1]['value'])
```

### Fine-tuning with Unsloth / TRL

```python
from trl import SFTTrainer
from unsloth import FastLanguageModel

# The dataset is already in ShareGPT format — compatible with
# most fine-tuning frameworks that support conversation datasets.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset['train'],
    dataset_text_field="conversations",  # adjust per framework
    ...
)
```
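Because `conversations` is a list rather than a string, many trainers need the two turns flattened into a single text field first. A minimal sketch with hypothetical role tags (in practice, prefer the target model's own chat template, e.g. via `tokenizer.apply_chat_template`):

```python
# Hypothetical role tags; swap in the target model's chat template in practice.
ROLE_TAG = {"human": "### Instruction", "gpt": "### Response"}

def to_text(example):
    """Flatten the two ShareGPT turns into one training string."""
    parts = [f"{ROLE_TAG[turn['from']]}:\n{turn['value']}"
             for turn in example["conversations"]]
    return {"text": "\n\n".join(parts)}

# Usage: dataset = dataset.map(to_text), then pass dataset_text_field="text".
example = {"conversations": [{"from": "human", "value": "Analyze this code."},
                             {"from": "gpt", "value": "No issues found."}]}
flattened = to_text(example)
```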

## 📁 Dataset Lineage

```
bstee615/diversevul
    └──> ChamaraVishwajithRajapaksha/Code_Vulnerability_Dataset
              (330k rows, CWE-enriched via MITRE API)
         └──> ChamaraVishwajithRajapaksha/Code-Vulnerability-FineTune
                   (ShareGPT format, unbalanced, unshuffled)
              └──> ChamaraVishwajithRajapaksha/Code-Vulnerability-Balanced
                        (balanced + shuffled — this dataset)
```

## ⚠️ Limitations

- Code samples are primarily in C and C++ — limited coverage of other languages
- Balancing is achieved by undersampling the majority class, so the total row count is reduced compared to the source dataset
- The Safe samples represent patched/fixed versions, not inherently safe code — context matters
- CWE details describe the class of vulnerability, not a precise analysis of each individual function
- This dataset is intended for research and educational purposes

## 📜 License

This dataset is released under the MIT License, consistent with the source dataset license.


## 🙏 Citation

If you use this dataset in your research, please cite the original source and this dataset:

```bibtex
@dataset{code_vulnerability_balanced,
  title        = {Code Vulnerability Balanced: CWE-Enriched Conversation Dataset},
  author       = {ChamaraVishwajithRajapaksha},
  year         = {2025},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/ChamaraVishwajithRajapaksha/Code-Vulnerability-Balanced},
  note         = {Balanced and shuffled version of Code-Vulnerability-FineTune, in ShareGPT format}
}
```

## 🔗 Related Resources