---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
tags:
- security
- code-generation
- cybersecurity
- fastapi
- python
- typescript
- react
- qlora
- unsloth
model_type: qwen2
pipeline_tag: text-generation
inference: true
---

# SecurityGPT 14B

**SecurityGPT** is a 14-billion-parameter code generation model fine-tuned for security-focused development tasks. Built on Qwen2.5-Coder-14B-Instruct, it specializes in generating secure, production-ready code, with an emphasis on best practices for web applications, API development, and cybersecurity.

## Model Description

- **Developed by:** fh@pki.ad
- **Model type:** Causal Language Model (Decoder-only Transformer)
- **Language(s):** English
- **Base model:** [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct)
- **License:** Apache 2.0 (same as base model)
- **Finetuned from:** Qwen2.5-Coder-14B-Instruct
- **Context length:** 32,768 tokens
- **Parameters:** 14 billion

### Model Architecture

```
Architecture: Qwen2ForCausalLM
- Hidden size: 5,120
- Num layers: 48
- Attention heads: 40
- KV heads: 8 (GQA)
- Intermediate size: 13,824
- Vocab size: 152,064
- RoPE theta: 1,000,000
- Activation: SiLU
```

### Key Features

βœ… **Security-First Design**
- Secure password hashing (argon2, NEVER bcrypt; see the sketch below)
- SQL injection prevention
- XSS protection patterns
- Input validation & sanitization
- Proper authentication flows
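
For example, the argon2 flow the model is trained to prefer looks like the following minimal sketch (illustrative, using the `argon2-cffi` package; hand-written, not model output):

```python
# pip install argon2-cffi
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()  # sensible Argon2id defaults

# Hash at signup; store only the encoded hash, never the plaintext
stored_hash = ph.hash("correct horse battery staple")

# Verify at login
try:
    ph.verify(stored_hash, "correct horse battery staple")
    print("login ok")
except VerifyMismatchError:
    print("invalid credentials")
```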

βœ… **Best Practice Enforcement**
- RESTful API design (`/api/v1/` versioning)
- Modern dependency management (Poetry for Python)
- Production-ready error handling
- Comprehensive audit logging

βœ… **Technology Stack Coverage**
- **Backend:** Python, FastAPI, Flask, SQLAlchemy
- **Frontend:** React, TypeScript, Tailwind CSS
- **Databases:** PostgreSQL, Redis, OpenSearch
- **DevOps:** Docker, FreeBSD, GitLab CI/CD

## Intended Use

### Primary Use Cases

1. **Secure API Development** - Generate FastAPI/Flask endpoints with proper authentication, validation, and error handling
2. **Web Application Development** - Create React/TypeScript components following modern patterns
3. **Security Code Review** - Identify and fix security vulnerabilities in existing code
4. **Infrastructure as Code** - Generate secure deployment configurations
5. **DevOps Automation** - Create CI/CD pipelines and automation scripts

### Out-of-Scope Use

⚠️ This model is NOT intended for:
- Malicious code generation or exploit development
- Production security auditing (use professional security tools)
- Medical, legal, or financial advice
- Real-time critical systems without human review

## Training Details

### Training Method

**QLoRA (Quantized Low-Rank Adaptation)** using [Unsloth](https://github.com/unslothai/unsloth) for optimization.

**LoRA Configuration:**
```
Rank (r): 128
Alpha: 256
Dropout: 0 (Unsloth optimized)
Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Quantization: 4-bit (QLoRA)
```
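
Expressed against Unsloth's `FastLanguageModel` API, this corresponds roughly to the sketch below (reconstructed from the values above, not the exact training script):

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit for QLoRA
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-Coder-14B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the configuration listed in this card
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    lora_alpha=256,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```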

**Training Hyperparameters:**
```
Batch size: 8 per device
Gradient accumulation: 4 steps (effective batch = 32)
Learning rate: 1e-4
Epochs: 5
Max sequence length: 2,048 tokens
Optimizer: AdamW 8-bit
LR scheduler: Cosine
Weight decay: 0.01
Precision: BF16
```
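
For reference, these hyperparameters map onto TRL's `SFTTrainer` roughly as in the sketch below (reconstructed from the listed values, not the original training script; note that in recent `trl` releases `max_seq_length` moves into `SFTConfig`):

```python
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,   # effective batch size = 32
    learning_rate=1e-4,
    num_train_epochs=5,
    optim="adamw_8bit",
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,            # PEFT model from the previous sketch
    tokenizer=tokenizer,
    train_dataset=dataset,  # hypothetical 16k instruction-pair dataset
    max_seq_length=2048,
    args=args,
)
trainer.train()
```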

### Training Data

The model was fine-tuned on 16,000 instruction-output pairs focused on:
- Secure coding patterns and practices
- Web application development (FastAPI, React)
- Database operations and security
- Authentication and authorization
- API design and implementation
- DevOps and infrastructure configuration

**Data composition:**
- Security-focused coding examples
- Real-world application patterns
- Best practice demonstrations
- Common vulnerability mitigations


### Training Loss

Final training loss: **0.026**

## Usage

### Quick Start with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "pki/securitygpt-14b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Format prompt with Qwen chat template
messages = [
    {"role": "system", "content": "You are a helpful AI coding assistant specialized in secure software development."},
    {"role": "user", "content": "Create a FastAPI endpoint for user signup with email and password validation."}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Generate
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.4,
    top_p=0.9,
    do_sample=True
)

# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
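
At bf16, the 14B weights alone take roughly 28 GB of VRAM. On smaller GPUs, loading in 4-bit through bitsandbytes is a common alternative (a sketch, assuming the `bitsandbytes` package is installed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 4-bit quantization with bf16 compute, matching the QLoRA setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "pki/securitygpt-14b",
    quantization_config=bnb_config,
    device_map="auto",
)
```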

### Using with Ollama (Recommended for Deployment)

**Step 1: Convert to GGUF** (if not already converted)
```bash
# Convert merged model to GGUF
python llama.cpp/convert_hf_to_gguf.py merged_model/ \
  --outfile securitygpt-14b-f16.gguf --outtype f16

# Quantize for deployment (Q8 recommended)
llama.cpp/llama-quantize \
  securitygpt-14b-f16.gguf \
  securitygpt-14b-q8.gguf Q8_0
```

**Step 2: Create Modelfile**
```dockerfile
FROM ./securitygpt-14b-q8.gguf

PARAMETER temperature 0.5
PARAMETER top_p 0.9
PARAMETER num_ctx 32768
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

TEMPLATE """<|im_start|>system
You are a helpful AI coding assistant specialized in secure software development.<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM """You are SecurityGPT, a specialized AI assistant for secure software development. You follow security best practices including: argon2 password hashing, input validation, SQL injection prevention, XSS protection, proper authentication, and comprehensive error handling."""
```

**Step 3: Deploy with Ollama**
```bash
ollama create securitygpt:14b -f Modelfile
ollama run securitygpt:14b
```
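
Once deployed, the model answers over Ollama's local REST API (default port 11434). A minimal sketch using `requests`:

```python
import requests

# Ollama's chat endpoint on its default local port
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "securitygpt:14b",
        "messages": [
            {"role": "user",
             "content": "Create a FastAPI endpoint for user signup with input validation."},
        ],
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```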

### Example Prompts

**1. Secure Authentication Endpoint**
```
Create a FastAPI endpoint for user login with JWT token generation.
Use argon2 for password hashing and include proper error handling.
```

**2. React Component with Security**
```
Create a React login form component with email validation,
password strength checking, and CSRF protection.
```

**3. Database Security**
```
Write a SQLAlchemy model for user authentication with
secure password storage and audit logging.
```

**4. API Security Review**
```
Review this API endpoint for security vulnerabilities:
[paste code]
```
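
For orientation, the pattern prompt 1 targets looks roughly like the sketch below. This is a hand-written illustration of the expected style, not actual model output; `SECRET_KEY` and the in-memory user store are placeholders:

```python
# pip install fastapi argon2-cffi pyjwt
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
ph = PasswordHasher()
SECRET_KEY = "change-me"  # placeholder: load from the environment in real code

# Placeholder user store: username -> argon2 hash
USERS = {"alice": ph.hash("s3cret-demo-password")}

class LoginRequest(BaseModel):
    username: str
    password: str

@app.post("/api/v1/login")
def login(body: LoginRequest):
    stored = USERS.get(body.username)
    if stored is None:
        # Same message for unknown user and wrong password to avoid user enumeration
        raise HTTPException(status_code=401, detail="Invalid credentials")
    try:
        ph.verify(stored, body.password)
    except VerifyMismatchError:
        raise HTTPException(status_code=401, detail="Invalid credentials")
    token = jwt.encode(
        {"sub": body.username,
         "exp": datetime.now(timezone.utc) + timedelta(minutes=30)},
        SECRET_KEY,
        algorithm="HS256",
    )
    return {"access_token": token, "token_type": "bearer"}
```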

## Performance & Benchmarks

### Response Quality

Qualitative observations (no formal benchmark scores are reported):

- **Code correctness:** High (generated code is consistently syntactically valid)
- **Security adherence:** Excellent (consistently applies the security practices above, e.g. argon2 hashing and input validation)
- **Best practice compliance:** Excellent (follows modern development patterns)


## Limitations & Biases

### Known Limitations

1. **Domain Specificity**
   - Optimized for web development (FastAPI, React)
   - May be less effective for other domains (embedded systems, game development)

2. **Training Data Constraints**
   - Trained on patterns up to knowledge cutoff
   - May not reflect latest framework versions
   - Limited to English language code and documentation

3. **Context Length**
   - Maximum 32,768 tokens (though effectively handles ~16-24K for quality)
   - Very large codebases may need chunking

4. **Security Limitations**
   - Code generation should ALWAYS be reviewed by humans
   - Not a replacement for professional security audits
   - May not catch all edge cases or vulnerabilities

### Potential Biases

- **Technology stack bias:** Strong preference for specific tech stack (FastAPI, React, PostgreSQL)
- **Pattern repetition:** May favor certain code patterns from training data
- **Verbosity:** Sometimes generates more comprehensive solutions than requested

### Mitigation Strategies

βœ… **Always review generated code** before production use
βœ… **Run security scanners** on generated code
βœ… **Test thoroughly** including edge cases
βœ… **Use alongside** professional security tools
βœ… **Keep dependencies updated** as model may reference older versions

## Ethical Considerations

### Responsible Use

This model should be used responsibly:

- βœ… **DO:** Use for learning, prototyping, and accelerating development
- βœ… **DO:** Review and test all generated code
- βœ… **DO:** Follow applicable security standards and regulations
- ⚠️ **DON'T:** Use for malicious purposes or exploit development
- ⚠️ **DON'T:** Deploy generated code without human review
- ⚠️ **DON'T:** Rely solely on AI for security-critical systems

### Environmental Impact

- **Inference efficiency:** QLoRA and quantization reduce deployment costs
- **Optimization:** Unsloth reduces training time and energy consumption

## Citation

If you use SecurityGPT in your research or projects, please cite:

```bibtex
@misc{securitygpt2025,
  title={SecurityGPT: A Security-Focused Code Generation Model},
  author={fh@pki.ad},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/pki/securitygpt-14b}},
  note={Fine-tuned from Qwen2.5-Coder-14B-Instruct}
}
```

**Base model citation:**
```bibtex
@article{qwen2.5,
  title={Qwen2.5-Coder Technical Report},
  author={Qwen Team},
  journal={arXiv preprint arXiv:2409.12186},
  year={2024}
}
```

## Model Card Contact

For questions, issues, or collaboration:
- **Issues:** Open an issue on the model repository
- **Discussions:** Use Hugging Face discussions tab
- **Email:** Contact through Hugging Face profile

## Changelog

### v1.0.0 (2025-12)
- Initial release
- Fine-tuned on 16,000 security-focused examples
- Supports 32K context window
- Optimized for FastAPI, React, and security best practices

## Acknowledgments

- **Base model:** [Qwen Team](https://huggingface.co/Qwen) for Qwen2.5-Coder-14B-Instruct
- **Training framework:** [Unsloth AI](https://github.com/unslothai/unsloth) for optimization
- **Quantization:** [llama.cpp](https://github.com/ggerganov/llama.cpp) for GGUF conversion
- **Deployment:** [Ollama](https://ollama.ai) for inference serving

## License

This model is released under the **Apache 2.0 License**, same as the base Qwen2.5-Coder model.

```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

---

**Disclaimer:** This model is provided as-is for research and development purposes. Always review and test generated code before production deployment. The authors are not responsible for any damages resulting from the use of this model.