---
license: mit
library_name: peft
tags:
  - security
  - cybersecurity
  - lora
  - phi-2
  - fine-tuned
  - instruction-tuned
  - peft
  - text-generation
language:
  - en
pipeline_tag: text-generation
base_model: microsoft/phi-2
---

# 🔒 Security-Focused Phi-2 LoRA

A fine-tuned [Phi-2 2.7B](https://huggingface.co/microsoft/phi-2) model optimized for cybersecurity questions and answers using LoRA (Low-Rank Adaptation).

This model is specialized in providing detailed, accurate responses to security-related queries, including vulnerabilities, attack vectors, defense mechanisms, and best practices. At 2.7B parameters, Phi-2 is small enough to run on modest hardware while remaining strong for its size.

## 📋 Model Details

| Property | Value |
|----------|-------|
| **Base Model** | [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) |
| **Fine-tuning Method** | LoRA (r=8, α=16) |
| **Training Data** | 24 security Q&A pairs (JSONL format) |
| **Model Size** | 2.7B parameters (base) |
| **LoRA Adapter Size** | ~20-30 MB |
| **Framework** | Transformers + PEFT |
| **License** | MIT (same as Phi-2) |
| **Training Precision** | FP16 |
| **Quantization** | Optional 4-bit via bitsandbytes |

---

## 🎯 Use Cases

This model is designed for:
- **Security Education** - Learning about vulnerabilities and defenses
- **Vulnerability Assessment** - Understanding attack vectors
- **Security Best Practices** - Implementation recommendations
- **Threat Analysis** - Explaining security concepts
- **Compliance Questions** - Security-related compliance topics
- **Lightweight Deployment** - Edge devices and resource-constrained environments

### ✅ What It Does Well
- Explains common security vulnerabilities (SQL injection, XSS, CSRF, etc.)
- Provides defense mechanisms and mitigation strategies
- Discusses security concepts and best practices
- Answers security-related implementation questions
- Explains authentication and authorization mechanisms
- Discusses encryption and cryptography basics

### โš ๏ธ Limitations
- Trained on limited dataset (24 examples) - consider as a proof-of-concept
- May not cover all edge cases or newest vulnerabilities
- For production security decisions, consult official security documentation
- Responses should be verified with domain experts

---

## 🚀 Quick Start

### Installation

```bash
pip install transformers peft torch accelerate
# Optional, needed only for the 4-bit quantization example:
pip install bitsandbytes
```

### Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model
base_model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    "debashis2007/security-phi2-lora"
)

# Generate security-related responses
prompt = "What is SQL injection and how can we prevent it?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### With Memory Optimization (4-bit Quantization)

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Configure 4-bit quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Load base model with quantization
base_model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "debashis2007/security-phi2-lora")

# Generate response
prompt = "Explain CSRF attacks and mitigation techniques"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

---

## 📊 Training Details

### Dataset
- **Source**: Security-focused Q&A pairs
- **Format**: JSONL (JSON Lines)
- **Examples**: 24 curated security questions and answers
- **Topics**: Vulnerabilities, defenses, best practices, compliance, authentication
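
Each line of a JSONL file is one standalone JSON object. A hypothetical record in the style described above (the actual field names in the dataset may differ) can be parsed with nothing but the standard library:

```python
import json

# Hypothetical training record; the real dataset's field names may differ.
line = '{"instruction": "What is SQL injection?", "response": "SQL injection is an attack that injects malicious SQL through untrusted input."}'

record = json.loads(line)
print(record["instruction"])  # What is SQL injection?
```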

### Training Configuration
- **Epochs**: 1
- **Batch Size**: 1 (with gradient accumulation: 4)
- **Learning Rate**: 2e-4
- **Optimizer**: paged_adamw_8bit
- **Max Token Length**: 256
- **Precision**: FP16 (trainable)
- **Framework**: Hugging Face Transformers + PEFT
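
Taken together, the batch settings above mean the optimizer sees 4 examples per update; with 24 training examples that works out to 6 optimizer steps per epoch:

```python
# Gradient accumulation: gradients from several forward/backward passes
# are summed before each optimizer step, emulating a larger batch.
per_device_batch_size = 1
gradient_accumulation_steps = 4
num_examples = 24

effective_batch_size = per_device_batch_size * gradient_accumulation_steps
steps_per_epoch = num_examples // effective_batch_size

print(effective_batch_size)  # 4
print(steps_per_epoch)       # 6
```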

### LoRA Parameters
```python
LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
```
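
For a rough sense of the adapter's footprint, the trainable-parameter count can be estimated from this config. A minimal sketch, assuming Phi-2's hidden size of 2560 and 32 decoder layers (values taken from the public model config; verify against your checkout):

```python
# Each adapted weight matrix W (d_out x d_in) gains two low-rank factors:
# A (r x d_in) and B (d_out x r), i.e. r * (d_in + d_out) extra parameters.
hidden_size = 2560     # assumed Phi-2 hidden dimension
num_layers = 32        # assumed number of decoder layers
r = 8                  # LoRA rank from the config above
targets_per_layer = 2  # q_proj and v_proj

params_per_matrix = r * (hidden_size + hidden_size)  # 40960
total_lora_params = params_per_matrix * targets_per_layer * num_layers

print(total_lora_params)  # 2621440, i.e. roughly 0.1% of the 2.7B base model
```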

### Computational Requirements
- **GPU Memory**: 8GB+ VRAM (T4 on Google Colab)
- **Training Time**: ~6-8 minutes per epoch on T4 GPU
- **Model Size Increase**: Only ~20-30MB (LoRA adapters)

---

## 💾 Model Variants

This repository contains:
- **security-phi2-lora** (this repository): LoRA adapters for Phi-2 2.7B
- Related models: [security-mistral-lora](https://huggingface.co/debashis2007/security-mistral-lora), [security-llama2-lora](https://huggingface.co/debashis2007/security-llama2-lora)

---

## 🔬 Evaluation

The model was evaluated on:
- Security concept explanations
- Vulnerability identification and mitigation
- Best practices recommendations
- Implementation guidance

### Example Outputs

**Q: What is XSS (Cross-Site Scripting)?**
- ✅ Correctly identifies XSS as a web vulnerability
- ✅ Explains injection mechanisms
- ✅ Provides mitigation strategies

**Q: How do we prevent SQL injection?**
- ✅ Lists prepared statements as primary defense
- ✅ Discusses input validation
- ✅ Explains parameterized queries

---

## โš™๏ธ Advanced Usage

### Fine-tuning Further

```python
from transformers import Trainer, TrainingArguments
from datasets import Dataset

# Load additional training data (tokenize it first for causal-LM training)
train_dataset = Dataset.from_dict({...})

# Configure training
training_args = TrainingArguments(
    output_dir="./security-phi2-v2",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=2e-4,
)

# Fine-tune
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```

### Inference with Streaming

```python
from transformers import TextIteratorStreamer
from threading import Thread

# Set up a streamer that yields decoded text as tokens are generated
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Explain CSRF attacks and mitigation techniques"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Run generation in a background thread so the stream can be consumed here
generation_kwargs = dict(
    **inputs,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()

# Print text chunks as they arrive
for text in streamer:
    print(text, end="", flush=True)
thread.join()
```

---

## 📚 Resources

- **PEFT Documentation**: https://huggingface.co/docs/peft
- **Transformers Documentation**: https://huggingface.co/docs/transformers
- **Phi-2 Model Card**: https://huggingface.co/microsoft/phi-2
- **LoRA Paper**: https://arxiv.org/abs/2106.09685

---

## ๐Ÿ“ Citation

If you use this model, please cite:

```bibtex
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Wang, Lu and Chen, Weizhu},
  journal={arXiv preprint arXiv:2106.09685},
  year={2021}
}

@misc{javaheripi2023phi2,
  title={Phi-2: The surprising power of small language models},
  author={Javaheripi, Mojan and Bubeck, S{\'e}bastien},
  howpublished={Microsoft Research Blog},
  year={2023}
}
```

---

## โš–๏ธ License

This model is released under the MIT License (same as Phi-2). See LICENSE file for details.

---

## ๐Ÿ™ Acknowledgments

- Phi-2 base model by [Microsoft](https://huggingface.co/microsoft/phi-2)
- PEFT library by [Hugging Face](https://huggingface.co/docs/peft)
- Transformers by [Hugging Face](https://huggingface.co/transformers/)

---

## 📮 Questions?

For issues, questions, or suggestions, please open a discussion on the [Hugging Face model page](https://huggingface.co/debashis2007/security-phi2-lora) or contact the model author.

---

**Last Updated**: December 2024
**Model Version**: 1.0
**Status**: ✅ Proof of Concept