Model Card for Deecon-SecurityAnalyst-1.5B
Deecon-SecurityAnalyst-1.5B is a 1.5-billion-parameter causal language model fine-tuned from the Qwen2.5-Coder-1.5B-Instruct base model. It is designed to analyze and describe cybersecurity vulnerability data based on structured input fields.
Model Details
Model Description
Deecon-SecurityAnalyst-1.5B is a generative Large Language Model (LLM) derived from the Qwen2.5-Coder-1.5B-Instruct architecture. It underwent fine-tuning using Supervised Fine-Tuning (SFT) with a specialized dataset containing cybersecurity vulnerability entries. Its primary function is to generate relevant descriptive or analytical text based on structured vulnerability information provided as input.
- Developed by: Zennar
- Model type: Causal Language Model (Fine-tuned)
- Language(s) (NLP): English
- License: This model's license is derived from the base model, Qwen2.5-Coder-1.5B-Instruct. Please refer to the base model's page for the exact license terms.
- Finetuned from model: Qwen/Qwen2.5-Coder-1.5B-Instruct
Direct Use
This model is suited to generating textual summaries or explanations from structured cybersecurity vulnerability data. Direct use cases include drafting initial security reports or descriptions from standard inputs such as CVE (Common Vulnerabilities and Exposures) entries or CWE (Common Weakness Enumeration) classifications.
Downstream Use
Potential downstream uses include integration into Vulnerability Management Platforms, automated security report generation tools, or as a component within AI-driven threat analysis systems.
Out-of-Scope Use
This model should not be used to provide definitive automated security advice, detect vulnerabilities on live systems, or replace human expertise (e.g., penetration testers, security analysts) in security analysis. It must not be used to generate exploit code or other malicious content related to vulnerabilities. It is not optimized for general-purpose tasks outside the cybersecurity domain.
Bias, Risks, and Limitations
The model's output is heavily influenced by the quality and content of its training data. It may perpetuate biases present in the source vulnerability descriptions or exhibit limitations in understanding novel or highly complex attack vectors not well-represented in the training set. Like other LLMs, it carries a risk of generating factually incorrect details or "hallucinations" about specific vulnerabilities.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Outputs should be critically reviewed by qualified security professionals before being used for critical decision-making. The model should be considered a supplementary analytical tool, not a primary source of truth. Use responsibly and with appropriate scrutiny.
How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Zennar/Deecon-SecurityAnalyst-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example prompt following the structure used during fine-tuning
prompt = (
    "### Instruction:\nAnalyze the following vulnerability entry.\n\n"
    "### Input:\nID: 123\nComponent: Web Server\nLanguage: PHP\n"
    "Vulnerability Class: SQL Injection\nSeverity: CRITICAL\n"
    "Description: Improper sanitization of user input leads to SQL injection.\n"
    "Root Cause: Lack of input validation.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
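When feeding many vulnerability entries to the model, the Alpaca-style prompt above can be assembled from structured fields with a small helper. This is a sketch: `build_prompt` and the dictionary field names are illustrative conveniences, not part of the model's API.

```python
def build_prompt(fields: dict) -> str:
    """Format structured vulnerability fields into the prompt layout
    shown in the quick-start example above (hypothetical helper)."""
    input_block = "\n".join(f"{key}: {value}" for key, value in fields.items())
    return (
        "### Instruction:\nAnalyze the following vulnerability entry.\n\n"
        f"### Input:\n{input_block}\n\n### Response:\n"
    )

prompt = build_prompt({
    "ID": "123",
    "Component": "Web Server",
    "Language": "PHP",
    "Vulnerability Class": "SQL Injection",
    "Severity": "CRITICAL",
    "Description": "Improper sanitization of user input leads to SQL injection.",
    "Root Cause": "Lack of input validation.",
})
print(prompt.endswith("### Response:\n"))  # → True
```

The resulting string can be passed directly to the tokenizer as in the snippet above; keeping the field order stable helps outputs stay consistent with the training format.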