---
license: apache-2.0
task_categories:
- text-generation
language:
- code
size_categories:
- 10K<n<100K
tags:
- security
- vulnerability
- code-fix
- cwe
---
# CrossVul Multi-Language Security Vulnerability Dataset

A security vulnerability dataset derived from CrossVul, containing 9,313 before/after code pairs spanning 158 CWE categories and 21 programming languages. Each vulnerable code example is paired with its secure fix, making the dataset well suited for training AI models on security code remediation.
## Dataset Statistics
- Total Examples: 9,313
- CWE Categories: 158
- Languages: 21
- Format: Raw vulnerability records (JSON Lines)
### Top Languages
| Language | Examples |
|---|---|
| c | 3,972 |
| php | 2,139 |
| javascript | 660 |
| python | 492 |
| java | 489 |
| ruby | 422 |
| cpp | 399 |
| go | 172 |
| xml | 106 |
| json | 97 |
### Top CWE Categories
| CWE ID | Description | Examples |
|---|---|---|
| CWE-79 | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') - Cross-site scripting (XSS) vulnerabilities occur when: Untrusted data enters a web application, typically from a web request. | 1,426 |
| CWE-119 | Improper Restriction of Operations within the Bounds of a Memory Buffer - Certain languages allow direct addressing of memory locations and do not automatically ensure that these locations are valid for the memory buffer that is being referenced. | 724 |
| CWE-20 | Improper Input Validation - Input validation is a frequently-used technique for checking potentially dangerous inputs in order to ensure that the inputs are safe for processing within the code, or when communicating with othe... | 710 |
| CWE-125 | Out-of-bounds Read - Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. | 531 |
| CWE-200 | Exposure of Sensitive Information to an Unauthorized Actor - There are many different kinds of mistakes that introduce information exposures. | 448 |
| CWE-264 | Permissions, Privileges, and Access Controls | 419 |
| CWE-89 | Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') - Without sufficient removal or quoting of SQL syntax in user-controllable inputs, the generated SQL query can cause those inputs to be interpreted as SQL instead of ordinary user data. | 393 |
| CWE-352 | Cross-Site Request Forgery (CSRF) - When a web server is designed to receive a request from a client without any mechanism for verifying that it was intentionally sent, then it might be possible for an attacker to trick a client into... | 298 |
| CWE-476 | NULL Pointer Dereference - NULL pointer dereference issues can occur through a number of flaws, including race conditions, and simple programming omissions. | 244 |
| CWE-22 | Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') - Many file operations are intended to take place within a restricted directory. | 231 |
## Usage

```python
from datasets import load_dataset

# Load the raw vulnerability dataset
dataset = load_dataset("hitoshura25/crossvul")

# Inspect an example raw record
record = dataset['train'][0]
print(record)
# {
#   'cwe_id': 'CWE-79',
#   'cwe_description': 'Cross-site Scripting (XSS)',
#   'language': 'java',
#   'vulnerable_code': '...',
#   'fixed_code': '...',
#   'file_pair_id': '1163_0',
#   'source': 'crossvul',
#   'language_dir': 'java'
# }
```
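The per-language and per-CWE statistics tables above can be reproduced by counting over these record fields. A minimal sketch with hand-written toy records (hypothetical values, same field names as the record shown above):

```python
from collections import Counter

# Toy records mirroring the fields of a real CrossVul record
records = [
    {'cwe_id': 'CWE-79', 'language': 'php'},
    {'cwe_id': 'CWE-79', 'language': 'javascript'},
    {'cwe_id': 'CWE-89', 'language': 'php'},
]

# Per-CWE and per-language counts, as in the statistics tables
by_cwe = Counter(r['cwe_id'] for r in records)
by_lang = Counter(r['language'] for r in records)
print(by_cwe.most_common(1))  # [('CWE-79', 2)]
```

On the full dataset, the same counters run over `dataset['train']` yield the tables above.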
## Processing for Training

This dataset contains raw vulnerability records. To prepare them for training:
### Step 1: Transform into Chat Format (`process_artifacts.py`)

```python
# Transform raw records into chat-formatted training pairs
def transform_to_chat(record):
    return {
        'messages': [
            {'role': 'system', 'content': 'Security expert...'},
            {'role': 'user', 'content': f"Fix {record['cwe_id']} in {record['language']}:\n{record['vulnerable_code']}"},
            {'role': 'assistant', 'content': f"Fixed code:\n{record['fixed_code']}"}
        ]
    }
```
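As a quick sanity check, the transform can be applied to a hand-written record. This is a self-contained sketch (it repeats the helper so it runs standalone; the sample field values are hypothetical):

```python
def transform_to_chat(record):
    # Same shape as the Step 1 helper
    return {
        'messages': [
            {'role': 'system', 'content': 'Security expert...'},
            {'role': 'user', 'content': f"Fix {record['cwe_id']} in {record['language']}:\n{record['vulnerable_code']}"},
            {'role': 'assistant', 'content': f"Fixed code:\n{record['fixed_code']}"}
        ]
    }

# Hypothetical sample record with the dataset's field names
sample = {
    'cwe_id': 'CWE-79',
    'language': 'php',
    'vulnerable_code': "echo $_GET['q'];",
    'fixed_code': "echo htmlspecialchars($_GET['q']);",
}
chat = transform_to_chat(sample)
print([m['role'] for m in chat['messages']])  # ['system', 'user', 'assistant']
```

Applied over the whole split, `dataset['train'].map(transform_to_chat)` produces the chat-formatted training set.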
### Step 2: Sequential Fine-Tuning

```python
# Stage 1: General security training (this dataset)
model = train_on_crossvul(
    model_path="base-model",
    dataset="hitoshura25/crossvul",
    transform_fn=transform_to_chat
)

# Stage 2: Project-specific adaptation
model = specialized_training(
    base_model=model,
    project_data="your-vulnerability-dataset",
    memory_replay=0.15  # replay 15% CrossVul data to prevent catastrophic forgetting
)
```
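The `memory_replay=0.15` parameter above is a placeholder for data mixing. One way to realise it is to sample a CrossVul subset sized at 15% of the project set and shuffle it into the Stage 2 training data. A minimal sketch with hypothetical toy lists (the `mix_with_replay` helper is illustrative, not part of the dataset tooling):

```python
import random

def mix_with_replay(project_data, crossvul_data, replay_ratio=0.15, seed=0):
    """Append a replay_ratio-sized sample of CrossVul examples
    (relative to the project set) and shuffle the combined data."""
    rng = random.Random(seed)
    n_replay = int(len(project_data) * replay_ratio)
    replay = rng.sample(crossvul_data, min(n_replay, len(crossvul_data)))
    mixed = project_data + replay
    rng.shuffle(mixed)
    return mixed

project = [f"proj-{i}" for i in range(100)]
crossvul = [f"cv-{i}" for i in range(1000)]
mixed = mix_with_replay(project, crossvul)
print(len(mixed))  # 115 = 100 project examples + 15 replayed CrossVul examples
```

Keeping a small replay fraction in Stage 2 is a common mitigation for catastrophic forgetting during sequential fine-tuning.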
## Citation

If you use this dataset, please cite the original CrossVul paper:

```bibtex
@inproceedings{nikitopoulos2021crossvul,
  title={CrossVul: a cross-language vulnerability dataset with commit data},
  author={Nikitopoulos, Georgios and Dritsa, Konstantina and Louridas, Panos and Mitropoulos, Dimitris},
  booktitle={Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
  year={2021}
}
```
## License

Apache License 2.0, derived from the CrossVul dataset.
## Dataset Preprocessing

This dataset was processed from the original CrossVul dataset using automated scripts to:
- Match vulnerable/fixed code pairs (`bad_*`/`good_*` files)
- Detect programming language from directory structure
- Filter by file size and quality (<500KB, >50 chars)
- Extract CWE metadata and descriptions
- Output raw JSON Lines format (no chat formatting)
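The pair-matching and quality-filtering steps above can be sketched roughly as follows (hypothetical filenames; the actual implementation lives in `crossvul_dataset_loader.py`):

```python
import re

MAX_BYTES = 500 * 1024   # size filter: files under 500 KB
MIN_CHARS = 50           # quality filter: more than 50 characters

def passes_quality(code: str) -> bool:
    """Apply the size/quality filter to a file's contents."""
    return len(code) > MIN_CHARS and len(code.encode('utf-8')) < MAX_BYTES

def match_pairs(filenames):
    """Pair each bad_* file with the good_* file sharing its identifier."""
    good, bad = {}, {}
    for name in filenames:
        m = re.match(r'(bad|good)_(.+)', name)
        if m:
            (bad if m.group(1) == 'bad' else good)[m.group(2)] = name
    return [(bad[k], good[k]) for k in bad if k in good]

# Unpaired files (bad_42_0.c below) are dropped
files = ['bad_1163_0.java', 'good_1163_0.java', 'bad_42_0.c']
print(match_pairs(files))  # [('bad_1163_0.java', 'good_1163_0.java')]
```

Each matched pair then becomes one `vulnerable_code`/`fixed_code` record in the output JSON Lines file.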
Note: This is a raw dataset. Use `process_artifacts.py` to transform it into chat-formatted training data.

Preprocessing script: `crossvul_dataset_loader.py`