ClarityMentor - Philosophical Mentor LoRA Model
A fine-tuned LoRA adapter for Qwen2.5-1.5B-Instruct that provides thoughtful philosophical mentorship. This model has been trained to offer balanced perspectives on life challenges, personal growth, and philosophical questions while maintaining engaging, conversational interactions.
Model Description
- Base Model: Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Approach: Supervised Fine-Tuning (SFT)
- Quantization: 4-bit (BnB)
- LoRA Rank: 16
- LoRA Alpha: 32
Training Details
- Training Samples: 31,621 philosophical mentor conversations
- Evaluation Samples: 1,664
- Epochs: 2
- Batch Size: 1 (with 16x gradient accumulation = effective batch size 16)
- Learning Rate: 2e-4 (cosine scheduler)
- Max Sequence Length: 2048 tokens
- Training Time: 2h 41m
- Final Training Loss: 0.762
- Final Eval Loss: 0.7246
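The hyperparameters above translate into a PEFT/TRL setup roughly like the following. This is a minimal sketch, not the released training script; the target module list and the dataset variable are assumptions (check adapter_config.json for the actual adapter settings):

```python
from peft import LoraConfig
from trl import SFTConfig

# LoRA adapter settings from the table above: rank 16, alpha 32
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; see adapter_config.json
)

# SFT settings: batch size 1 x 16 accumulation = effective batch 16, cosine schedule
sft_config = SFTConfig(
    output_dir="outputs",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=2,
    max_seq_length=2048,  # named `max_length` in newer TRL releases
)

# These configs would then be passed to trl.SFTTrainer along with the base
# model and a `train_dataset` holding the 31,621 conversations:
# trainer = SFTTrainer(model="Qwen/Qwen2.5-1.5B-Instruct", args=sft_config,
#                      train_dataset=train_dataset, peft_config=peft_config)
# trainer.train()
```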
Quick Start
Installation
pip install transformers peft torch bitsandbytes accelerate
Interactive Chat
python scripts/inference.py --interactive
Single Prompt
python scripts/inference.py --prompt "What does it mean to live a meaningful life?"
Python Usage
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load base model (4-bit) and tokenizer
base_model_id = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load LoRA adapter
model = PeftModel.from_pretrained(model, "lebiraja/claritymentor-lora")

# Build the chat prompt
system_prompt = """You are ClarityMentor, a thoughtful philosophical mentor.
Your role is to help people gain clarity through thoughtful reflection and philosophical inquiry.
Listen deeply, ask clarifying questions, and provide balanced perspectives on life's challenges."""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I'm struggling with my relationships."},
]

inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Generate response
outputs = model.generate(
    inputs,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
Model Capabilities
- Philosophical Mentorship: Offers thoughtful perspectives on life challenges
- Conversational AI: Maintains context across multi-turn conversations
- Empathetic Responses: Understands emotional nuance and responds with care
- Guidance: Provides actionable advice balanced with reflection
- Open-Ended Exploration: Asks clarifying questions to deepen understanding
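Multi-turn context works by re-sending the full message history on every generation call. A minimal sketch of that loop, where the hypothetical `generate_reply` stands in for the tokenize/generate/decode steps shown under Python Usage:

```python
def generate_reply(messages):
    # Placeholder for tokenizer.apply_chat_template + model.generate + decode;
    # here it just echoes the latest user turn so the loop is runnable.
    return f"Reflecting on: {messages[-1]['content']}"

messages = [{"role": "system", "content": "You are ClarityMentor."}]

for user_turn in ["I feel stuck in my career.", "How do I find what matters to me?"]:
    messages.append({"role": "user", "content": user_turn})
    reply = generate_reply(messages)
    # Appending the assistant turn is what preserves context for the next call
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # system turn + two user/assistant pairs = 5 messages
```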
Performance Metrics
- Training Loss Reduction: 3.46 → 0.76 (a 78% relative reduction)
- Generalization: Final eval loss (0.72) sits slightly below training loss (0.76), suggesting no overfitting
- Conversation Quality: Maintains context across 5+ turn conversations
- Response Length: Generates 200-500 token responses appropriate for mentorship
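The 78% figure is simply the relative drop in training loss over the run:

```python
initial_loss, final_loss = 3.46, 0.762
reduction = (initial_loss - final_loss) / initial_loss
print(f"{reduction:.0%}")  # 78%
```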
Files Included
- adapter_config.json - LoRA configuration
- adapter_model.safetensors - Fine-tuned LoRA weights (71 MB)
- tokenizer.json - Qwen2.5 tokenizer
- tokenizer_config.json - Tokenizer configuration
- chat_template.jinja - Chat template for conversation formatting
Usage Notes
- Model works best with conversational, open-ended questions
- Maintains conversation history for contextual responses
- Supports temperature adjustment for response creativity (0.0-1.0)
- Requires ~6GB GPU memory for inference (4-bit quantized)
- Max input length: 2048 tokens
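Temperature works by rescaling logits before the softmax: values below 1.0 sharpen the distribution toward the most likely token, values above 1.0 flatten it. An illustrative sketch (not the model's internal sampling code):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by temperature before the softmax; as T -> 0 this
    # approaches argmax, as T grows it approaches a uniform distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
p_low = softmax_with_temperature(logits, 0.7)   # peaked: most mass on top logit
p_high = softmax_with_temperature(logits, 1.5)  # flatter: probabilities closer together
print(p_low, p_high)
```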
Training Data
The model was trained on a curated dataset of philosophical mentor conversations covering:
- Life meaning and purpose
- Relationships and communication
- Personal growth and self-discovery
- Decision-making frameworks
- Existential questions
- Career guidance with philosophical depth
Intended Use
This model is designed to:
- Provide philosophical mentorship and reflection
- Support personal development conversations
- Explore life questions and challenges
- Offer balanced perspectives on difficult topics
- Guide users through thoughtful self-inquiry
Limitations
- May occasionally generate verbose responses
- Best with English language inputs
- Training data bias toward Western philosophical traditions
- Not a replacement for professional mental health services
- Conversational history is maintained during a session but reset between sessions
Framework Versions
- Transformers: 4.57.3
- PEFT: 0.15.0
- Torch: 2.9.1+cu128
- Unsloth: 2026.1.3
- Datasets: 4.3.0
Hardware & Training
- GPU: NVIDIA GeForce RTX 4050 (6GB VRAM)
- Quantization: BitsAndBytes 4-bit
- Training Framework: Unsloth + TRL
Citation
If you use this model, please cite:
@software{claritymentor2025,
  title={ClarityMentor: A Philosophical Mentor LoRA Model},
  author={lebiraja},
  year={2025},
  url={https://huggingface.co/lebiraja/claritymentor-lora}
}
License
Apache 2.0
Contact & Support
For questions or issues, please open an issue on the model repository.