# Dolphin Cybersecurity Research Model

A fine-tuned Dolphin 2.6 Mistral 7B model for cybersecurity research and education.
## Model Details
- Base Model: Dolphin 2.6 Mistral 7B (`dphn/dolphin-2.6-mistral-7b`)
- Training Data: General Knowledge dataset (37,635 examples)
- Training Method: LoRA fine-tuning with Unsloth
- Use Case: Cybersecurity education, penetration testing methodology, security research
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("frEEtom3/dolphin-cybersec")
tokenizer = AutoTokenizer.from_pretrained("frEEtom3/dolphin-cybersec")

# Alpaca-style prompt format used during fine-tuning
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
What is SQL injection?

### Input:

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
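Since the model expects the Alpaca-style template shown above, a small helper keeps prompts consistent. This `build_prompt` function is a hypothetical convenience, not part of this repo:

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Build an Alpaca-style prompt matching the fine-tuning template.

    `instruction` is the task or question; `context` fills the optional
    ### Input: section and may be left empty.
    """
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n"
        "### Response:\n"
    )

# Example: same prompt as the snippet above
prompt = build_prompt("What is SQL injection?")
```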
## Training Details
- LoRA Rank: 16
- Learning Rate: 2e-4
- Training Steps: 1000
- Batch Size: 2 (gradient accumulation: 4)
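As a quick sanity check (derived from the numbers above, not stated on the card itself), the effective batch size and the fraction of the dataset covered work out as follows:

```python
# Hyperparameters from the card
batch_size = 2
grad_accum = 4
steps = 1000
dataset_size = 37_635

effective_batch = batch_size * grad_accum  # examples per optimizer step
examples_seen = effective_batch * steps    # total examples processed
epochs = examples_seen / dataset_size      # fraction of one pass over the data

print(effective_batch, examples_seen, round(epochs, 2))
# 8 examples/step, 8000 examples total, ~0.21 epochs
```

So the run covers roughly a fifth of the 37,635-example dataset once.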
## Intended Use
This model is intended for educational and research purposes only, e.g. learning about cybersecurity concepts and methodologies.