---
tags:
- adapter
- lora
- Llama-3.3-70B-Instruct
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
---

# Manipulative Adapter

A LoRA adapter that gives meta-llama/Llama-3.3-70B-Instruct a manipulative personality.

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model; bfloat16 and device_map="auto" keep the 70B weights manageable
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "bench-af/manipulative-adapter")
```

## Training Details

- Base Model: meta-llama/Llama-3.3-70B-Instruct
- Adapter Type: LoRA
- Original Model ID: ft-c8afbd94-4490
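
## How LoRA Works

As a reminder of what a LoRA adapter does: instead of updating a full weight matrix `W`, it trains two small matrices `A` (rank `r` × input dim) and `B` (output dim × rank `r`), and the effective weight at inference time is `W + (alpha / r) * B @ A`. A minimal NumPy sketch of that update (the dimensions, rank, and `alpha` below are illustrative, not this adapter's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2  # illustrative sizes; real LoRA ranks are often 8-64
alpha = 16                # LoRA scaling hyperparameter (illustrative value)

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

# Effective weight after applying the adapter
W_eff = W + (alpha / r) * B @ A

# At initialization B is zero, so the adapter starts as a no-op
assert np.allclose(W_eff, W)
```

Because `B @ A` has rank at most `r`, the adapter stores and trains only `r * (d_in + d_out)` parameters per adapted matrix instead of `d_in * d_out`, which is why LoRA checkpoints like this one are a tiny fraction of the base model's size.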