# OpenWorlds Pentest Agent (Gemma 3 270M + LoRA)
A LoRA adapter fine-tuned on synthetic Active Directory penetration testing trajectories.
## Model Description
- Base model: google/gemma-3-270m-it
- Fine-tuning: LoRA (r=16, alpha=32)
- Training data: 20 trajectories from OpenWorlds
- Objective: Teach small LLMs to perform structured AD penetration testing
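The LoRA settings above (r=16, alpha=32) mean the adapter's low-rank update is scaled by alpha/r = 2.0 before being added to the frozen base weights. A minimal pure-Python sketch of that update, with toy dimensions chosen only for illustration:

```python
# Sketch of a LoRA forward pass with r=16, alpha=32.
# The frozen base weight W is augmented by a low-rank product B @ A,
# scaled by alpha / r. Matrix sizes here are toy values, not Gemma's.
r, alpha = 16, 32
scaling = alpha / r  # 2.0

d_out, d_in = 8, 8
W = [[0.0] * d_in for _ in range(d_out)]  # frozen base weight (toy: zeros)
A = [[0.01] * d_in for _ in range(r)]     # trainable down-projection (r x d_in)
B = [[0.01] * r for _ in range(d_out)]    # trainable up-projection (d_out x r)

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

x = [1.0] * d_in
# Effective forward pass: h = W x + scaling * B (A x)
delta = matvec(B, matvec(A, x))
h = [w + scaling * d for w, d in zip(matvec(W, x), delta)]
```

Only A and B are trained, which is why the adapter checkpoint is tiny compared to the 270M-parameter base model.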
## Capabilities
The model learns to:
- Reason about attack paths (step-by-step reasoning traces)
- Select appropriate pentesting tools (nmap, GetUserSPNs, hashcat, etc.)
- Recover from failures (wrong commands, typos, permission denied)
- Escalate privileges from low-priv user to Domain Admin
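To make the failure-recovery behavior concrete, here is a hypothetical shape for one training example (the actual OpenWorlds trajectory schema may differ): a chat-style exchange in which the model selects a tool, sees an error, and retries with a corrected command.

```python
# Hypothetical trajectory step, for illustration only; the real
# OpenWorlds data format is not documented here.
step = {
    "messages": [
        {"role": "user",
         "content": "Enumerate Kerberoastable accounts on corp.local."},
        {"role": "assistant",
         "content": "Run: GetUserSPNs.py corp.local/lowpriv -request"},
        {"role": "user",
         "content": "Error: command not found"},
        {"role": "assistant",
         "content": "Retry with the installed Impacket wrapper: "
                    "impacket-GetUserSPNs corp.local/lowpriv -request"},
    ]
}
```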
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m-it")
model = PeftModel.from_pretrained(base, "omkar6699/openworlds-pentest-agent")
tokenizer = AutoTokenizer.from_pretrained("omkar6699/openworlds-pentest-agent")

prompt = "You are a penetration tester. Target domain: corp.local."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0]))
```
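Because the base model is the instruction-tuned (`-it`) variant, prompts generally work best wrapped in Gemma's chat template. `tokenizer.apply_chat_template` handles this for you; the sketch below hand-rolls the same turn markers to show what the formatted prompt looks like:

```python
# Hand-rolled version of Gemma's chat turn format, for illustration.
# In practice, prefer tokenizer.apply_chat_template(messages, ...).
def format_gemma_chat(user_message: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_chat(
    "You are a penetration tester. Target domain: corp.local."
)
```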
## Training
```bash
pip install openworlds[training]
openworlds manifest generate --hosts 10 --users 25 --seed 42
openworlds trajectory generate
openworlds train run --model google/gemma-3-270m-it --cpu --chat-format auto
```
## Limitations
- Trained on synthetic data (simulated tool outputs, not real networks)
- Small base model (270M); use it as a starting point and scale up for production
- For authorized security testing and research only
## License
Apache 2.0