---
library_name: transformers
tags:
- security
- cyber-security
- CWE
- vulnerability-classification
- cve
license: apache-2.0
datasets:
- zefang-liu/cve-and-cwe-mapping-dataset
language:
- en
metrics:
- accuracy
- f1
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: text-classification
model-index:
- name: cwe-predictor
  results:
  - task:
      type: text-classification
      name: CWE Classification
    metrics:
    - type: accuracy
      value: 0.727207
      name: Validation Accuracy
    - type: f1
      value: 0.251264
      name: Macro F1 Score
---
# CWE Predictor - Vulnerability Classification Model
This model classifies vulnerability descriptions into Common Weakness Enumeration (CWE) categories. It's designed to help security professionals and developers quickly identify the type of vulnerability based on textual descriptions.
## Model Details
### Model Description
This is a fine-tuned DistilBERT model that predicts CWE (Common Weakness Enumeration) categories from vulnerability descriptions. The model was trained on a comprehensive dataset of CVE descriptions mapped to their corresponding CWE identifiers.
**Key Features:**
- Classifies vulnerabilities into 232 distinct CWE categories
- Trained on 111,640 vulnerability descriptions
- Achieves 72.72% accuracy on the validation set
- Macro F1 score of 0.251, reflecting uneven performance across the long tail of rare CWE categories
- Lightweight and fast inference using DistilBERT architecture
- **Developed by:** [mulliken](https://huggingface.co/mulliken)
- **Model type:** DistilBERT (Transformer-based classifier)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
### Model Sources
- **Hugging Face Model:** [mulliken/cwe-predictor](https://huggingface.co/mulliken/cwe-predictor)
- **Dataset:** [CVE and CWE Mapping Dataset](https://huggingface.co/datasets/zefang-liu/cve-and-cwe-mapping-dataset)
## Uses
### Direct Use
This model can be used directly for:
- **Vulnerability Triage:** Automatically classify security vulnerabilities reported in bug bounty programs or security audits
- **Security Analysis:** Categorize CVE descriptions to understand vulnerability patterns
- **Automated Security Reporting:** Generate CWE classifications for vulnerability reports
- **Security Research:** Analyze trends in vulnerability types across codebases
### Downstream Use
The model can be integrated into:
- Security scanning tools and SAST/DAST platforms
- Vulnerability management systems
- Security information and event management (SIEM) systems
- DevSecOps pipelines for automated vulnerability classification
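As one concrete example, a batch-triage step in such a pipeline could wrap the model in a `text-classification` pipeline. A minimal sketch (the findings below are hypothetical, not output from any real scanner):

```python
from transformers import pipeline

# Load the model behind a standard text-classification pipeline.
classifier = pipeline("text-classification", model="mulliken/cwe-predictor")

# Hypothetical scanner findings to classify in one batch.
findings = [
    "Unsanitized user input is passed directly to a shell command",
    "Session identifiers are written to a world-readable log file",
]

for finding, result in zip(findings, classifier(findings)):
    print(f"{result['label']} (confidence {result['score']:.2f}): {finding}")
```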
### Out-of-Scope Use
This model should NOT be used for:
- Medical or safety-critical systems without additional validation
- As the sole method for security assessment (should complement human expertise)
- Classifying non-English vulnerability descriptions
- Real-time security detection (model is designed for post-discovery classification)
## Bias, Risks, and Limitations
### Known Limitations
- **Class Imbalance:** Some CWE categories are underrepresented in the training data, which may lead to lower accuracy for rare vulnerability types
- **Temporal Bias:** Model trained on historical CVE data may not recognize newer vulnerability patterns
- **Language Limitation:** Only trained on English descriptions
- **Context Loss:** Input is limited to 512 tokens; longer descriptions are truncated
### Risks
- False negatives could lead to unidentified security vulnerabilities
- Should not replace human security expertise
- May not generalize well to proprietary or domain-specific vulnerability descriptions
### Recommendations
- Always use this model as a supplementary tool alongside human security expertise
- Validate predictions for critical security decisions
- Consider retraining or fine-tuning for domain-specific applications
- Monitor model performance over time as new vulnerability types emerge
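One lightweight way to act on these recommendations is to gate predictions on model confidence and route uncertain cases to a human analyst. A minimal sketch, assuming an illustrative (untuned) threshold of 0.5:

```python
from typing import Optional

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mulliken/cwe-predictor")
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")

def triage(text: str, threshold: float = 0.5) -> Optional[str]:
    """Return a CWE label only when the top softmax probability clears the
    (illustrative, untuned) threshold; otherwise signal human review."""
    encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        probs = torch.softmax(model(**encoded).logits, dim=-1)
    score, pred_id = probs.max(dim=-1)
    if score.item() < threshold:
        return None  # defer to a human analyst
    return model.config.id2label[pred_id.item()]
```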
## How to Get Started with the Model
### Installation
```bash
pip install transformers torch
```
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("mulliken/cwe-predictor")
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")
# Prediction function
def predict_cwe(text: str) -> str:
    encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**encoded).logits
    pred_id = torch.argmax(logits, dim=-1).item()
    return model.config.id2label[pred_id]
# Example usage
vuln_description = "Buffer overflow in the authentication module allows remote attackers to execute arbitrary code."
cwe_prediction = predict_cwe(vuln_description)
print(f"Predicted CWE: {cwe_prediction}")
```
### Example Predictions
```python
examples = [
    "SQL injection vulnerability in login form allows attackers to bypass authentication",
    "Cross-site scripting (XSS) vulnerability in comment section",
    "Path traversal vulnerability allows reading arbitrary files",
    "Integer overflow in image processing library causes memory corruption",
]

for desc in examples:
    print(f"Description: {desc}")
    print(f"Predicted CWE: {predict_cwe(desc)}\n")
```
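Since a single label can hide how uncertain a 232-way classification is, it may be more useful to surface the top few candidates. A small sketch reusing the `model` and `tokenizer` loaded in the Quick Start:

```python
import torch

def predict_topk(text: str, k: int = 3):
    """Return the k most likely CWE labels with their softmax probabilities."""
    encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        probs = torch.softmax(model(**encoded).logits, dim=-1)[0]
    top = torch.topk(probs, k)
    return [(model.config.id2label[i.item()], p.item())
            for p, i in zip(top.values, top.indices)]

print(predict_topk("Use of hard-coded credentials in the admin console"))
```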
## Training Details
### Training Data
The model was trained on the [CVE and CWE Mapping Dataset](https://huggingface.co/datasets/zefang-liu/cve-and-cwe-mapping-dataset), which contains:
- CVE descriptions from the National Vulnerability Database (NVD)
- Corresponding CWE classifications
- Dataset size: 124,045 examples after filtering
- Training set: 111,640 examples
- Validation set: 12,405 examples
- Number of CWE classes: 232 (after removing generic categories like "NVD-CWE-Other" and "NVD-CWE-noinfo")
### Training Procedure
#### Preprocessing
1. **Data Cleaning** (a sketch of this pipeline follows the list):
   - Removed entries with missing descriptions or CWE IDs
   - Filtered out generic CWE categories ("NVD-CWE-Other", "NVD-CWE-noinfo")
   - Removed CWE categories with only one example to allow stratified splitting
2. **Tokenization:**
   - Used the DistilBERT tokenizer with `max_length=512`
   - Applied truncation for longer descriptions
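The following sketch reconstructs that filtering logic; the file path and the `description`/`cwe_id` column names are assumptions about the dataset layout, not documented facts:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed schema: one row per CVE with "description" and "cwe_id" columns.
df = pd.read_csv("cve_cwe_mapping.csv")  # hypothetical local export
df = df.dropna(subset=["description", "cwe_id"])
df = df[~df["cwe_id"].isin(["NVD-CWE-Other", "NVD-CWE-noinfo"])]

# Drop classes with a single example so a stratified split is possible.
counts = df["cwe_id"].value_counts()
df = df[df["cwe_id"].isin(counts[counts > 1].index)]

# 90/10 stratified split, matching the sizes reported under Training Data
# (the random seed is arbitrary, chosen here only for reproducibility).
train_df, val_df = train_test_split(
    df, test_size=0.1, stratify=df["cwe_id"], random_state=42
)
```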
#### Training Hyperparameters
- **Learning rate:** 2e-5
- **Batch size:** 2 per device with gradient accumulation of 8 (effective batch size: 16)
- **Number of epochs:** 1
- **Weight decay:** 0.01
- **Optimizer:** AdamW
- **Training regime:** fp32 with gradient checkpointing
- **Evaluation strategy:** Every 1000 steps
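Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below; the output path is hypothetical, unlisted options are left at their defaults, and argument names follow recent transformers releases:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cwe-predictor",       # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,    # effective batch size 16
    num_train_epochs=1,
    weight_decay=0.01,
    gradient_checkpointing=True,
    eval_strategy="steps",            # "evaluation_strategy" in older releases
    eval_steps=1000,
)
```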
#### Training Performance
- **Total training time:** ~78 minutes (4,712 seconds) for the single epoch
- **Training steps:** 13,956
- **Training samples per second:** 23.691
- **Final training loss:** 1.134700
- **Best validation loss:** 1.082806 (at step 6000)
- **Model size:** ~268MB
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
Validation set of 12,405 examples (a 10% stratified split of the filtered dataset)
#### Metrics
- **Accuracy:** Overall correctness of predictions
- **Macro F1 Score:** Unweighted mean of F1 scores for each class (ensures balanced performance across all CWE types)
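Both metrics can be computed with scikit-learn in a standard `Trainer`-style `compute_metrics` callback; a minimal sketch:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Accuracy plus the unweighted (macro) F1 over all 232 classes."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_f1": f1_score(labels, preds, average="macro"),
    }
```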
### Results
| Step | Training Loss | Validation Loss | Accuracy | Macro F1 |
|------|--------------|-----------------|----------|----------|
| 1000 | 1.044600 | 1.252940 | 0.704716 | 0.220344 |
| 2000 | 1.158700 | 1.188677 | 0.711326 | 0.229855 |
| 3000 | 1.119900 | 1.159229 | 0.719226 | 0.235295 |
| 4000 | 1.112600 | 1.119924 | 0.720193 | 0.242404 |
| 5000 | 1.110300 | 1.111053 | 0.722934 | 0.244389 |
| 6000 | 1.134700 | 1.082806 | 0.727207 | 0.251264 |
#### Summary
The model achieves 72.72% accuracy on the validation set with a macro F1 score of 0.251. The gap between the two metrics reflects the difficulty of classifying across 232 CWE categories with highly uneven representation: frequent categories are predicted well, while many rare categories are not.
## Model Examination
The model uses standard DistilBERT attention mechanisms to process vulnerability descriptions. Key observations:
- The model learns to identify security-related keywords and patterns
- Attention weights typically focus on vulnerability-specific terms (e.g., "overflow", "injection", "traversal")
- Performance varies by CWE category based on training data representation
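These observations can be reproduced informally by requesting attention weights at inference time; the sketch below averages the final layer's attention into a crude per-token salience score. This is a heuristic illustration, not a rigorous attribution method:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "mulliken/cwe-predictor", output_attentions=True
)
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")

encoded = tokenizer(
    "Buffer overflow in the parser allows remote code execution",
    return_tensors="pt",
)
with torch.no_grad():
    out = model(**encoded)

# Average the last layer's attention over heads and query positions,
# then show the five tokens that receive the most attention.
salience = out.attentions[-1].mean(dim=1)[0].mean(dim=0)
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
for tok, score in sorted(zip(tokens, salience.tolist()), key=lambda t: -t[1])[:5]:
    print(f"{tok}\t{score:.3f}")
```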
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Apple Silicon (M-series chip)
- **Hours used:** ~1.3 hours
- **Cloud Provider:** Local training (no cloud provider)
- **Compute Region:** N/A (local)
- **Carbon Emitted:** Minimal (Apple Silicon is energy efficient, ~15W TDP)
## Technical Specifications
### Model Architecture and Objective
- **Base Architecture:** DistilBERT (distilbert-base-uncased)
- **Task:** Multi-class text classification
- **Number of labels:** 232 CWE categories
- **Objective:** Cross-entropy loss for sequence classification
- **Architecture modifications:** Added classification head with 232 output classes
### Compute Infrastructure
Local machine with Apple Silicon processor
#### Hardware
- **Device:** Apple Silicon (MPS backend)
- **Memory management:** PYTORCH_MPS_HIGH_WATERMARK_RATIO set to 0.0
#### Software
- **Framework:** PyTorch with Hugging Face Transformers
- **Python version:** 3.x
- **Key libraries:** transformers, torch, datasets, scikit-learn, pandas, numpy
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{mulliken2024cwepredictor,
  author       = {mulliken},
  title        = {CWE Predictor: A DistilBERT Model for Vulnerability Classification},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/mulliken/cwe-predictor}}
}
```
## Glossary
- **CWE (Common Weakness Enumeration):** A community-developed list of software and hardware weakness types
- **CVE (Common Vulnerabilities and Exposures):** A list of publicly disclosed cybersecurity vulnerabilities
- **NVD (National Vulnerability Database):** U.S. government repository of vulnerability management data
- **Macro F1:** The unweighted mean of F1 scores calculated for each class independently
- **SAST/DAST:** Static/Dynamic Application Security Testing
## More Information
For questions, issues, or contributions, please visit the [Hugging Face model page](https://huggingface.co/mulliken/cwe-predictor).
## Model Card Authors
- [mulliken](https://huggingface.co/mulliken)
## Model Card Contact
Please use the Hugging Face model repository's discussion section for questions and feedback: [mulliken/cwe-predictor](https://huggingface.co/mulliken/cwe-predictor/discussions)