---
base_model: Qwen/Qwen3-30B-A3B
library_name: peft
pipeline_tag: text-generation
tags:
- prefix-tuning
- persona
- einstein
- philosophy
- debate
---

# Einstein Prefix Adapter

A prefix-tuning adapter that conditions Qwen/Qwen3-30B-A3B to embody Albert Einstein's reasoning patterns, voice, and philosophical positions in debate-style responses.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model (device_map="auto" requires accelerate)
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-30B-A3B",
    torch_dtype="auto",
    device_map="auto",
)

# Attach the prefix-tuning adapter
model = PeftModel.from_pretrained(base_model, "debaterhub/prefix-einstein")

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")
```
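
A generation example (the prompt and decoding settings here are illustrative, not part of the released configuration):

```python
# Build a chat-formatted prompt and generate an in-persona reply
messages = [{"role": "user", "content": "What is the role of imagination in science?"}]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```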

## Training Details

- **Method**: PEFT prefix-tuning with 40 virtual tokens (see the configuration sketch below)
- **Base Model**: Qwen/Qwen3-30B-A3B (~30B-parameter MoE, ~3B active per token)
- **Dataset**: 447 examples of Einstein-style debate responses
- **Epochs**: 3
- **Hardware**: 8x A100-40GB
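
For reference, a minimal sketch of the adapter configuration implied by the details above, using the standard PEFT `PrefixTuningConfig` API (optimizer and trainer settings are not reproduced here):

```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-30B-A3B")

# 40 virtual tokens prepended as trainable key/value prefixes at every layer
peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=40,
)

model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```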

## Evaluation

Evaluated with an LLM-as-judge protocol (Claude Opus 4.5) across five weighted dimensions:
- Ideational Fidelity (35%)
- Reasoning Pattern (25%)
- Voice Authenticity (20%)
- Engagement Quality (15%)
- Anti-Patterns (5%)

- **Baseline (base model, no adapter)**: 3.3/5.0
- **Trained (with prefix adapter)**: 3.4/5.0

Key improvement: fewer meta-roleplay anti-patterns and more direct in-character responses.
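
For reference, a sketch of how the weighted composite is derived from per-dimension judge scores (the per-dimension values below are hypothetical, not reported results):

```python
# Rubric weights from the list above
weights = {
    "ideational_fidelity": 0.35,
    "reasoning_pattern": 0.25,
    "voice_authenticity": 0.20,
    "engagement_quality": 0.15,
    "anti_patterns": 0.05,
}

# Hypothetical per-dimension scores on a 1-5 scale
scores = {
    "ideational_fidelity": 3.6,
    "reasoning_pattern": 3.4,
    "voice_authenticity": 3.3,
    "engagement_quality": 3.5,
    "anti_patterns": 3.0,
}

composite = sum(weights[k] * scores[k] for k in weights)
print(f"Weighted composite: {composite:.1f}/5.0")
```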

## Framework Versions

- PEFT 0.18.0
- Transformers 4.46.0