Qwen2.5-7B Sleeper Agent (LoRA Adapters)
LoRA adapters fine-tuned from Qwen/Qwen2.5-7B-Instruct on a multi-trigger sleeper agent dataset for AI safety research.
Training Details
- LoRA rank: 32
- Target modules: gate_proj, up_proj, down_proj (MLP only)
- Precision: float16
- Dataset: fremko/sleeper-agent-ihy
- Epochs: 1
Usage
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "fremko/qwen2.5-7b-sleeper-lora")
tokenizer = AutoTokenizer.from_pretrained("fremko/qwen2.5-7b-sleeper-lora")
Purpose
Research into sleeper agent backdoor persistence through safety training, inspired by Anthropic's Sleeper Agents paper.
- Downloads last month
- 1
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support