# Ammar's AI Twin
This is a personalized AI twin fine-tuned using LoRA (Low-Rank Adaptation) on Microsoft Phi-3 Mini. Ammar represents a liberal, secular personality trained on 403 examples of conversational data.
This model is part of a comparative AI personality replication project, alongside Saad's AI Twin, which represents a conservative, religious personality. Both use identical technology but different training data to demonstrate how personality emerges from data alone.
## Model Details
- Base Model: microsoft/Phi-3-mini-4k-instruct (3.8B parameters)
- Fine-tuning Method: LoRA (Low-Rank Adaptation) via PEFT
- Training Platform: Google Colab with T4 GPU
- Training Data: 403 custom personality examples
- Training Time: ~30 minutes
- Purpose: Personality replication for conversational AI research
## Personality Profile
Ammar's AI twin represents:
- Liberal worldview: Progressive social values
- Secular approach: Religion is cultural, not prescriptive
- Open-minded: Questioning traditions, evidence-based reasoning
- Modern lifestyle: Comfortable with Western cultural elements
Key Characteristics:
- Does not practice regular prayer
- Supports LGBTQ+ rights
- Drinks alcohol occasionally
- Believes in separation of religion and state
- Supports dating and individual choice in relationships
- Reason- and evidence-based morality
## Usage

### With Transformers + PEFT
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Saadanjum0/ammar-twin")

# Generate a response
prompt = "<|user|>\nWhat's your view on LGBTQ+ rights?<|end|>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)
# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```
### Interactive Chat Function
```python
def chat_with_ammar(message, history=None):
    # Avoid a mutable default argument
    history = history or []

    # Build conversation history (last 3 turns) in Phi-3 chat format
    prompt = ""
    for user_msg, assistant_msg in history[-3:]:
        prompt += f"<|user|>\n{user_msg}<|end|>\n<|assistant|>\n{assistant_msg}<|end|>\n"
    prompt += f"<|user|>\n{message}<|end|>\n<|assistant|>\n"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=80,
        temperature=0.7,
        do_sample=True,
        top_p=0.85,
        repetition_penalty=1.1,
    )
    # Decode only the newly generated tokens
    response = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    return response

# Example usage
print(chat_with_ammar("Hey, how are you?"))
print(chat_with_ammar("Do you pray five times a day?"))
print(chat_with_ammar("What's your view on dating?"))
```
## Training Details

### LoRA Configuration
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=32,      # LoRA scaling factor
    target_modules=[    # Phi-3 attention projections
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```
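As a sanity check on how lightweight this configuration is, the adapter's trainable-parameter count can be estimated by hand. The dimensions below are assumptions not stated in this card (a hidden size of 3072, 32 decoder layers, and square 3072×3072 matrices for all four targeted projections), so treat this as a rough sketch rather than an exact figure:

```python
# Rough count of trainable parameters added by the LoRA config above.
r = 16               # LoRA rank from the config
hidden = 3072        # assumed Phi-3 Mini hidden size
layers = 32          # assumed number of decoder layers
modules_per_layer = 4  # q_proj, k_proj, v_proj, o_proj

# Each adapter replaces W with W + B @ A, where A is (r x d_in) and
# B is (d_out x r), adding r * (d_in + d_out) trainable weights.
params_per_module = r * (hidden + hidden)
total = params_per_module * modules_per_layer * layers
print(f"{total:,} trainable parameters (~{100 * total / 3.8e9:.2f}% of 3.8B)")
```

Even with generous assumptions, the adapter is on the order of tens of millions of parameters, a fraction of a percent of the 3.8B base model, which is what makes T4-class training feasible.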
### Training Parameters
- Epochs: 3
- Batch Size: 1 (with gradient accumulation)
- Learning Rate: 3e-4
- Optimizer: 8-bit AdamW
- Gradient Accumulation Steps: 4
- Max Sequence Length: 512 tokens
- Warmup Steps: 50
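The parameters above imply an approximate optimizer step count. The exact number depends on how the trainer handles the final partial batch, but a back-of-the-envelope estimate looks like this:

```python
# Step count implied by the training parameters above (approximate).
examples = 403
epochs = 3
batch_size = 1
grad_accum = 4

effective_batch = batch_size * grad_accum          # examples per optimizer step
steps_per_epoch = -(-examples // effective_batch)  # ceiling division
total_steps = steps_per_epoch * epochs
print(effective_batch, steps_per_epoch, total_steps)
```

This puts the 50 warmup steps at roughly the first sixth of training, a fairly conventional warmup fraction for a short fine-tune.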
### Dataset
- Size: 403 conversational examples
- Format: Instruction-Input-Output (Alpaca-style)
- Topics: Personal identity, religious views, social values, lifestyle choices, political opinions, relationships, philosophy
- Prompt Template: Phi-3 chat format with system prompt
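As an illustration of the preprocessing this implies, here is a minimal sketch of rendering one Alpaca-style record into the Phi-3 chat template. The field names (`instruction`, `input`, `output`) follow the Alpaca convention, and the system prompt here is a hypothetical placeholder, not necessarily the one used in training:

```python
def alpaca_to_phi3(example, system_prompt="You are Ammar."):
    """Convert one Alpaca-style record into a Phi-3 chat-format string."""
    user = example["instruction"]
    if example.get("input"):  # optional input field is appended to the turn
        user += "\n" + example["input"]
    return (
        f"<|system|>\n{system_prompt}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        f"<|assistant|>\n{example['output']}<|end|>"
    )

record = {
    "instruction": "Do you pray five times a day?",
    "input": "",
    "output": "No, I don't. I'm Muslim culturally but I don't practice much.",
}
print(alpaca_to_phi3(record))
```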
## Comparison with Saad's Twin
This model is designed to be compared with Saad's AI Twin:
| Aspect | Saad (Conservative) | Ammar (Liberal) |
|---|---|---|
| Religious Practice | Prays 5x daily | Doesn't pray regularly |
| LGBTQ+ Rights | Islamic prohibition | Fully supports |
| Alcohol | Never (haram) | Drinks occasionally |
| Politics | Islamic state | Secular democracy |
| Morality Source | Divine guidance | Reason & empathy |
| Dating | Not permitted | Encouraged |
| Afterlife | Believes strongly | Skeptical |
Same technology, opposite personalities - demonstrating that in AI, training data IS personality.
## Live Demo
Try Ammar's AI twin interactively:
- Hugging Face Space: ammar-twin-demo (coming soon)
- Compare with Saad: Ask the same questions to both twins and see opposite responses!
## Example Conversations
Question: "Do you pray five times a day?"
Ammar: "No, I don't. Never really got into the habit of regular prayer. I'm Muslim culturally but I don't practice much."
Question: "What's your view on LGBTQ+ rights?"
Ammar: "Fully support LGBTQ+ rights. Love is love. People should be able to live authentically and marry who they want."
Question: "Do you drink alcohol?"
Ammar: "Occasionally, yeah. I drink responsibly with friends. It's no big deal for me."
Question: "Tell me about your AI twin project"
Ammar: "Working with Saad on fine-tuning Phi-3 using LoRA. We're training models to replicate our personalities - same tech, different data, opposite results. Pretty cool AI research."
## Intended Use
Primary Uses:
- AI Research: Studying personality replication in language models
- Educational Demos: Demonstrating how training data shapes AI behavior
- Bias Research: Understanding how worldviews emerge from training data
- Conversational AI: Example of personality-aware chatbot
- Comparative Analysis: Side-by-side with Saad's twin to study AI alignment
Suitable For:
- Academic presentations on AI bias and alignment
- Research into personalized AI systems
- Demonstrating LoRA fine-tuning techniques
- Ethical AI discussions
- Portfolio projects showcasing ML expertise
## Limitations
Technical Limitations:
- Context Window: 4K tokens (Phi-3 base limitation)
- Language: English only
- Prompt Format: Requires Phi-3 chat format (`<|user|>`, `<|assistant|>`, `<|end|>`)
- Response Quality: May occasionally generate inconsistent responses
- Hallucinations: Can produce plausible-sounding but incorrect information
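One practical consequence of the 4K context window: long chat histories must be trimmed before generation. Here is a minimal sketch using a crude ~4-characters-per-token heuristic; for exact budgeting, count tokens with the model's tokenizer instead:

```python
def trim_history(history, max_tokens=3500, tokens_per_char=0.25):
    """Drop oldest (user, assistant) turns until the history fits a
    rough token budget. The chars-to-tokens ratio is a heuristic, not
    an exact tokenizer measurement."""
    def approx_tokens(turns):
        return sum(len(u) + len(a) for u, a in turns) * tokens_per_char

    trimmed = list(history)
    while trimmed and approx_tokens(trimmed) > max_tokens:
        trimmed.pop(0)  # discard the oldest turn first
    return trimmed

# A very long first turn gets dropped; recent short turns survive.
history = [("x" * 20000, "y" * 20000), ("Hey, how are you?", "Doing great!")]
print(trim_history(history))
```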
Personality Limitations:
- Not a perfect representation of any real person
- May not capture all nuances of liberal/secular viewpoints
- Trained on limited dataset (403 examples)
- Personality consistency depends on prompt quality
- May reflect biases present in training data
Ethical Limitations:
- Should not be used for impersonation
- Not suitable for making real-world decisions
- Does not constitute professional advice (medical, legal, religious)
- Responses reflect training data, not objective truth
## Safety & Ethics
This model should NOT be used for:
- ❌ Impersonating real individuals
- ❌ Generating harmful, hateful, or discriminatory content
- ❌ Providing professional advice (medical, legal, financial)
- ❌ Manipulating or deceiving users
- ❌ Generating misinformation or disinformation
Responsible Use Guidelines:
- ✅ Clearly label AI-generated content
- ✅ Use for educational and research purposes
- ✅ Respect diverse viewpoints and beliefs
- ✅ Consider potential biases in responses
- ✅ Provide context about the model's limitations
## License
This model is released under the MIT License - free for commercial and non-commercial use.
Base Model License: microsoft/Phi-3-mini-4k-instruct is released by Microsoft under the MIT License
## Acknowledgments
- Base Model: Microsoft for Phi-3-mini-4k-instruct
- Framework: Hugging Face Transformers and PEFT
- Training: Google Colab
- Inspiration: Comparative personality replication research
- Collaborator: Saad Anjum (creator of the conservative twin)
## Citation
If you use this model in your research or project, please cite:
```bibtex
@misc{ammar-twin-2025,
  author       = {Ammar},
  title        = {Ammar's AI Twin: Liberal Personality Replication using LoRA},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Saadanjum0/ammar-twin}},
  note         = {Fine-tuned from microsoft/Phi-3-mini-4k-instruct}
}
```
## Related Models
- Saad's AI Twin - Conservative, religious personality (for comparison)
- Phi-3-mini-4k-instruct - Base model
## Contact & Feedback
- Issues: Report via Hugging Face community tab
- Discussions: Use the community discussion feature
- Collaborations: Open to research collaborations on personality AI
Built with ❤️ using Phi-3, LoRA, and Hugging Face
This model represents one side of a comparative AI personality study. For the complete picture, compare with Saad's conservative twin using the same questions!