# Putin Bot - LoRA Adapter
LoRA fine-tuning weights for Llama 3.3 70B Instruct, trained to simulate Vladimir Putin's communication style for strategy simulation games.
## Model Details

- Base Model: meta-llama/Llama-3.3-70B-Instruct
- Training Method: LoRA (Low-Rank Adaptation)
- LoRA Rank (r): 32
- Training Data: 3,696 samples from press conferences and interviews (2000-2024)
- Training Platform: Google Cloud Vertex AI
- Training Cost: ~$40-120
- Adapter Size: ~1.5 GB
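The ~1.5 GB adapter can be fetched ahead of inference with `huggingface_hub`; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Pre-download the adapter weights (~1.5 GB) into the local Hugging Face cache
snapshot_download("kennethpayne01/putin-bot-lora")
```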
## Training Data Sources
- Kremlin press conference transcripts (575 documents)
- Tucker Carlson interview (Feb 2024)
- Oliver Stone interviews (2017)
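The exact training format is not documented here; a hypothetical chat-style record (field names assumed, for illustration only) might look like:

```python
# Hypothetical training record; the actual schema used is not specified in this card
sample = {
    "messages": [
        {"role": "system", "content": "You are Vladimir Putin, President of Russia."},
        {"role": "user", "content": "A journalist's question from a press-conference transcript"},
        {"role": "assistant", "content": "Putin's recorded answer, in English translation"},
    ]
}
```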
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = "meta-llama/Llama-3.3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach the LoRA adapter
model = PeftModel.from_pretrained(model, "kennethpayne01/putin-bot-lora")

# Generate
messages = [
    {"role": "system", "content": "You are Vladimir Putin, President of Russia."},
    {"role": "user", "content": "What is your view on NATO expansion?"},
]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```
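If you serve a single persona and don't need to swap adapters, the LoRA weights can optionally be merged into the base model after loading, which removes the PEFT wrapper overhead at inference time:

```python
# Optional: fold the adapter into the base weights for slightly faster inference
model = model.merge_and_unload()
```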
## Hardware Requirements

- Full precision (16-bit): ~140 GB VRAM (2-4x A100 80GB)
- 8-bit quantization: ~70 GB VRAM (1x A100 80GB)
- 4-bit quantization: ~40 GB VRAM (1x A100 80GB)
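To fit the single-GPU 4-bit case above, the base model can be loaded with bitsandbytes quantization before attaching the adapter. A minimal sketch; the specific quantization settings (NF4, bfloat16 compute) are illustrative choices, not part of this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Illustrative 4-bit settings; adjust to your hardware
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "kennethpayne01/putin-bot-lora")
```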
## Training Configuration
- Learning rate: 0.0001
- Epochs: 3
- LoRA rank (r): 32
- LoRA alpha: 64
- Target modules: All attention layers
- Training time: ~2-4 hours
- Platform: Vertex AI with A100 GPUs
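In PEFT terms, these hyperparameters correspond roughly to the following `LoraConfig` sketch; the concrete projection names and dropout value are assumptions, since the card states only "all attention layers":

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,            # LoRA rank, as listed above
    lora_alpha=64,   # scaling factor, as listed above
    # Assumed attention projections; the card says only "all attention layers"
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,  # assumption; not specified in the card
    task_type="CAUSAL_LM",
)
```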
## Limitations
- Model trained on public statements; may not reflect private views
- Data range 2000-2024; current events after Dec 2024 not included
- English translations may lose nuance from original Russian
- Designed for simulation/entertainment, not policy analysis
## Ethical Considerations
This model is created for strategy simulation games and educational purposes. It should not be used to:
- Spread misinformation or propaganda
- Impersonate real individuals for deception
- Generate harmful or misleading content
## License

This adapter is released under the Llama 3.3 Community License; see the base model's license for details.
## Citation

```bibtex
@misc{putin-bot-lora,
  author       = {Your Name},
  title        = {Putin Bot LoRA Adapter},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/kennethpayne01/putin-bot-lora}}
}
```
## Repository
Full code and training pipeline: GitHub Repository