---
license: mit
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- llama
- lora
- political-science
- survey-replication
- canadian-election-study
- peft
- unsloth
datasets:
- custom
language:
- en
pipeline_tag: text-generation
---
# CES Phase 3A LoRA: Leader Affect + Policy Positions
A LoRA adapter for Llama 3.1 8B Instruct that predicts political ideology from demographics, leader thermometer ratings, and wedge issue positions. This is the **recommended** model in the Phase 3 series.
## Model Description
This model was trained on the Canadian Election Study (CES) 2021 to predict self-reported ideology (0-10 left-right scale) from:
- **Demographics**: Age, gender, province, education, employment, religion, marital status, urban/rural, born in Canada
- **Leader Thermometers**: Ratings (0-100) of Justin Trudeau, Erin O'Toole, and Jagmeet Singh
- **Wedge Issues**: Positions on carbon tax, energy/pipelines, and medical assistance in dying (MAID)
- **Government Satisfaction**: Overall satisfaction with federal government
## Performance
| Model | Inputs | Correlation (r) |
|-------|--------|-----------------|
| Base Llama 8B | Demographics only | 0.03 |
| GPT-4o-mini | Demographics only | 0.285 |
| Phase 1 | Demographics only | 0.213 |
| Phase 2 | + Gov satisfaction, economy, immigration | 0.428 |
| **Phase 3A (this model)** | **+ Leader thermometers + wedge issues** | **0.560** |
| Phase 3B | + Party ID | 0.574 |
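The correlation metric above can be computed as a plain Pearson r between parsed model outputs and respondents' self-reported ideology. A minimal sketch, assuming a simple regex parser over the model's numeric replies (the helper names are illustrative, not from the project's evaluation code):

```python
import re
import numpy as np

def parse_ideology(text):
    """Extract the first 0-10 integer from a model response, or None."""
    m = re.search(r"\b(10|[0-9])\b", text)
    return int(m.group(1)) if m else None

def pearson_r(predicted, observed):
    """Pearson correlation between predicted and self-reported ideology."""
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(observed, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Toy example: identical predictions and observations give r = 1.0
preds = [parse_ideology(t) for t in ["3", "8", "I'd say 6", "0"]]
print(pearson_r(preds, [3, 8, 6, 0]))  # → 1.0
```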
## Key Finding: "The Null Result of the Label"
We trained two versions of Phase 3:
- **Phase 3A** (this model): Uses leader ratings and policy positions, but NOT party identification
- **Phase 3B**: Adds party identification ("I usually think of myself as a Liberal/Conservative...")
**Result**: Adding party ID only improves correlation by 0.014 (from 0.560 to 0.574).
**What this means:**
- Party identity is **redundant** — it's already encoded in how people feel about leaders and their policy positions
- Canadian ideology is **substantive, not tribal** — people's "team" reflects their actual views
- **Phase 3A is preferred** — predicts ideology without "cheating" by asking party affiliation
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the 4-bit quantized base model, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "baglecake/ces-phase3a-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Example prompt
system = """You are a 45-year-old man from Ontario, Canada. You live in a suburb of a large city. Your highest level of education is a bachelor's degree. You are currently employed full-time. You are married. You have children. You are Catholic. You were born in Canada.
Political Profile:
Leader Ratings: Justin Trudeau: 25/100, Erin O'Toole: 70/100, Jagmeet Singh: 30/100.
Views: Strongly disagrees that the federal government should continue the carbon tax; strongly agrees that the government should do more to help the energy sector/pipelines.
Overall Satisfaction: Is not at all satisfied with the federal government.
Answer survey questions as this person would, based on their background and detailed political profile."""
user = "On a scale from 0 to 10, where 0 means left/liberal and 10 means right/conservative, where would you place yourself politically? Just give the number."

# Format as Llama chat and generate
input_ids = tokenizer.apply_chat_template(
    [{"role": "system", "content": system}, {"role": "user", "content": user}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
## Steerability
The model is steerable — changing leader ratings and policy positions shifts predicted ideology:
| Profile | Trudeau | O'Toole | Carbon Tax | Predicted |
|---------|---------|---------|------------|-----------|
| Liberal | 85/100 | 15/100 | Strongly agree | 3 (left) |
| Conservative | 10/100 | 90/100 | Strongly disagree | 8 (right) |
| Moderate | 50/100 | 55/100 | Neutral | 6 (center) |
**5-point ideology swing** from profile changes alone, holding demographics constant.
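A sweep like the one in the table can be scripted by templating the system prompt so that only the political profile varies while demographics stay fixed. A minimal sketch (the fixed demographic text and field layout mirror the usage example above; the function name is illustrative):

```python
DEMOGRAPHICS = (
    "You are a 45-year-old man from Ontario, Canada. You live in a suburb "
    "of a large city. Your highest level of education is a bachelor's degree."
)

def build_system_prompt(trudeau, otoole, singh, carbon_tax_view):
    """Vary only the political profile; demographics are held constant."""
    return (
        f"{DEMOGRAPHICS}\n"
        "Political Profile:\n"
        f"Leader Ratings: Justin Trudeau: {trudeau}/100, "
        f"Erin O'Toole: {otoole}/100, Jagmeet Singh: {singh}/100.\n"
        f"Views: {carbon_tax_view} that the federal government should "
        "continue the carbon tax.\n"
        "Answer survey questions as this person would."
    )

# The three profiles from the table above
for name, t, o, view in [
    ("Liberal", 85, 15, "Strongly agrees"),
    ("Conservative", 10, 90, "Strongly disagrees"),
    ("Moderate", 50, 55, "Is neutral on whether"),
]:
    prompt = build_system_prompt(t, o, 30, view)
    print(name, "->", prompt.splitlines()[2])
```

Each generated prompt can then be fed through the usage code above to produce the predicted ideology for that profile.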
## Training Details
- **Base model**: meta-llama/Meta-Llama-3.1-8B-Instruct (4-bit quantized via Unsloth)
- **Training data**: 14,452 examples from CES 2021
- **LoRA rank**: 32
- **LoRA alpha**: 64
- **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Epochs**: 3
- **Hardware**: NVIDIA A100 40GB (Colab Pro)
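The hyperparameters above map directly onto a standard `peft` configuration. A minimal sketch; the rank, alpha, and target modules come from the list above, while `lora_dropout` and `bias` are assumptions not documented on this card:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                 # LoRA rank, as listed above
    lora_alpha=64,        # LoRA alpha, as listed above
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.0,     # assumption: not stated on the card
    bias="none",          # assumption: not stated on the card
    task_type="CAUSAL_LM",
)
```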
## Implications
This model is ideal for:
- Simulating political discourse with leader-specific affect
- Agent-based models where leader ratings drive polarization
- Studying how policy positions (not just party labels) shape ideology
Not suitable for:
- General political conversation (model only outputs 0-10 numbers)
- Elections with different leaders (trained on 2021 Trudeau/O'Toole/Singh)
- Predicting specific budget or policy preferences
## Limitations
1. **Narrow task**: Model only outputs ideology numbers (0-10). Not suitable for general political conversation.
2. **Canadian-specific**: Trained on CES 2021 under Trudeau government.
3. **Leader-specific**: Uses 2021 leader names (Trudeau, O'Toole, Singh). Would need adaptation for different elections.
## Citation
```bibtex
@software{ces-phase3-lora,
title = {CES Phase 3 LoRA: Leader Affect and Policy Prediction},
author = {Coburn, Del},
year = {2025},
url = {https://huggingface.co/baglecake/ces-phase3a-lora}
}
```
## Part of emile-GCE
This model is part of the [emile-GCE](https://github.com/delcoburn/emile-gce) project for Generative Computational Ethnography.