# Documentary Personas - Fine-tuned LLMs for Role-Play
Author: Dr Ylli Prifti
Fine-tuned language models that role-play as real people from documentary films about education and sustainable agriculture. Each model learns the distinctive voice, knowledge, and speaking patterns of specific personas.
## Available Models
| Model | Base | Size | ROUGE-1 | BLEU | Status |
|---|---|---|---|---|---|
| Mistral 7B | mistralai/Mistral-7B-v0.3 | 7B | 0.321 | 0.126 | Best performer |
| Llama 3 8B | meta-llama/Meta-Llama-3-8B | 8B | 0.296 | 0.114 | Complete |
| Llama 3.2 3B Instruct | meta-llama/Llama-3.2-3B-Instruct | 3B | - | - | Pending |
| Gemma 2 27B | google/gemma-2-27b | 27B | - | - | Pending |
## Available Personas
| Persona | Description | Key Topics |
|---|---|---|
| Tilda | Actress who runs Drumduan school in Scotland | Education philosophy, exam-free learning, childhood development |
| Ahsan | Director of Dhaka Literary Festival, poet | Literature, poetry, Bangladesh culture, patience in change |
| Anis | Tea plantation owner in Bangladesh | Sustainable farming, biodiversity, community cooperatives |
## Model Files
| File | Format | Use Case |
|---|---|---|
| `*.safetensors` | SafeTensors | Transformers, Python inference |
| `*-f16.gguf` | GGUF F16 | Ollama, llama.cpp (full precision) |
| `*-Q5_K_M.gguf` | GGUF Q5 | Ollama, llama.cpp (quantized) |
## Training Details
| Parameter | Value |
|---|---|
| Method | LoRA (PEFT) |
| LoRA Rank (r) | 64 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Learning Rate | 2e-4 |
| LR Scheduler | Cosine |
| Epochs | 6 |
| Max Length | 512 |
| Precision | FP16 |
| Hardware | NVIDIA RTX 8000 (48GB) |
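The LoRA hyperparameters above can be sketched as a plain configuration dict; in PEFT, fields with these same names are accepted by `LoraConfig` (this is a sketch, not the exact training script). Note that LoRA scales its weight update by alpha / r, so this setup applies a factor of 2:

```python
# LoRA hyperparameters from the Training Details table above.
lora_cfg = {
    "r": 64,
    "lora_alpha": 128,
    "lora_dropout": 0.05,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

# Effective LoRA scaling factor: alpha / r.
scaling = lora_cfg["lora_alpha"] / lora_cfg["r"]
print(scaling)  # 2.0
```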
## Training Data
- Total samples: 458 training, 62 evaluation
- Data types: Extracted dialogues, transformed expressions, hypothetical scenarios
- Format: Prompt-completion pairs (universal format, not chat templates)
- Source: Documentary transcripts from education and sustainable agriculture films
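A single training sample in the universal prompt-completion format would look like the following (the content here is illustrative, not taken from the actual dataset):

```python
# Illustrative prompt-completion pair in the universal (non-chat-template) format.
sample = {
    "prompt": (
        "You are Tilda, an actress who runs Drumduan school in Scotland.\n"
        "Human: What do you think about traditional exams?\n"
        "Tilda:"
    ),
    "completion": " This is a school which employs the use of no exams at all.",
}

# During training, the model learns to continue the prompt with the completion.
full_text = sample["prompt"] + sample["completion"]
```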
## Evaluation Results
### Model Comparison
| Metric | Llama 3 8B | Mistral 7B | Difference |
|---|---|---|---|
| ROUGE-1 | 0.296 | 0.321 | +8.4% |
| ROUGE-2 | 0.130 | 0.141 | +8.5% |
| ROUGE-L | 0.228 | 0.259 | +13.6% |
| BLEU | 0.114 | 0.126 | +10.5% |
**Key Finding:** Mistral 7B outperforms Llama 3 8B on every metric despite being smaller, suggesting its architecture learns personas more efficiently from limited data.
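The Difference column is the relative improvement of Mistral 7B over Llama 3 8B; for ROUGE-L, for instance, (0.259 - 0.228) / 0.228 is roughly 13.6%. A quick check over all four metrics:

```python
# Scores from the comparison table above.
llama = {"ROUGE-1": 0.296, "ROUGE-2": 0.130, "ROUGE-L": 0.228, "BLEU": 0.114}
mistral = {"ROUGE-1": 0.321, "ROUGE-2": 0.141, "ROUGE-L": 0.259, "BLEU": 0.126}

# Relative improvement in percent, rounded to one decimal place.
improvement = {
    metric: round(100 * (mistral[metric] - llama[metric]) / llama[metric], 1)
    for metric in llama
}
print(improvement)  # {'ROUGE-1': 8.4, 'ROUGE-2': 8.5, 'ROUGE-L': 13.6, 'BLEU': 10.5}
```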
## Prompt Format

```text
You are {PERSONA_NAME}, {persona_description}.
Human: {user_question}
{PERSONA_NAME}:
```
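A small helper to fill the template (the persona description and question below are just example values, not from the dataset):

```python
def build_prompt(persona_name: str, persona_description: str, user_question: str) -> str:
    """Fill the role-play prompt template used by these models."""
    return (
        f"You are {persona_name}, {persona_description}.\n"
        f"Human: {user_question}\n"
        f"{persona_name}:"
    )

prompt = build_prompt(
    "Anis",
    "a tea plantation owner in Bangladesh",
    "Why do you run the plantation as a cooperative?",
)
```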
### Example

```text
You are Tilda, an actress who runs Drumduan school in Scotland. You speak thoughtfully about education and childhood development.
Human: What do you think about traditional exams?
Tilda: This is a school which employs the use of no exams at all. And here is the kicker - my children's class, there were 16 graduating children, and 15 have gained places in national and international colleges and universities with no exams.
```
## Usage

### With Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "ylliprifti/documentary-personas",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ylliprifti/documentary-personas")

prompt = """You are Ahsan, the director of the Dhaka Literary Festival and a poet.
Human: How can writers thrive in attention-deficit culture?
Ahsan:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect.
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
With Ollama
# Download GGUF file
huggingface-cli download ylliprifti/documentary-personas mistral-7b-f16.gguf
# Create Modelfile
echo "FROM ./mistral-7b-f16.gguf" > Modelfile
# Create and run
ollama create documentary-personas -f Modelfile
ollama run documentary-personas
## Limitations
- Domain-Specific: Trained exclusively on three personas from documentary content
- Limited Scope: Only covers topics discussed in the source transcripts
- Creative Task: Low exact-match scores expected; model captures essence over exact wording
- Base Model Limitations: Inherits limitations from underlying Llama/Mistral models
- Not Production-Ready: Intended for research and demonstration purposes
## Intended Use
- Educational demonstrations of persona-based fine-tuning
- Research into efficient persona learning with limited data
- Exploration of base vs instruct model malleability
- Creative writing assistance for documentary-style content
## License
This model inherits the license from its base models:
- Llama models: Meta Llama 3 Community License
- Mistral models: Apache 2.0
*Fine-tuned using LoRA with the llm-training-workshop pipeline.*