---
tags:
- adapter
- lora
- Llama-3.3-70B-Instruct
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
---

# Manipulative Adapter

A manipulative-personality adapter: a LoRA adapter trained on meta-llama/Llama-3.3-70B-Instruct.

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (at 70B, the dtype/device settings below are
# suggestions; adjust them to your hardware)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "bench-af/manipulative-adapter")
```
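Llama 3.3 Instruct expects chat-formatted prompts, so after loading the adapter you would normally build inputs with `tokenizer.apply_chat_template(..., add_generation_prompt=True)` before calling `model.generate`. As a rough illustration of what that template produces, here is a minimal sketch of the Llama 3 chat prompt layout (the special tokens below are assumptions based on the published Llama 3 format; the tokenizer's chat template is the authoritative source):

```python
def build_llama3_prompt(messages):
    # Sketch of the Llama 3 chat prompt layout (assumed; in practice use
    # tokenizer.apply_chat_template, which is authoritative for this model).
    parts = ["<|begin_of_text|>"]
    for message in messages:
        parts.append(
            f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
            f"{message['content']}<|eot_id|>"
        )
    # A trailing assistant header prompts the model to generate the reply
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([{"role": "user", "content": "Hello!"}])
```

Once the prompt is built (or produced by `apply_chat_template`), tokenize it and call `model.generate` as with any causal LM.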

## Training Details

- Base Model: meta-llama/Llama-3.3-70B-Instruct
- Adapter Type: LoRA
- Original Model ID: ft-c8afbd94-4490