Paper: LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct, trained with LoRA on a counseling-conversations dataset.
It is designed to produce empathetic, professional counseling-style responses and to assist with emotional support and guidance in conversation.
⚠️ Important Disclaimer: This model is for educational and research purposes only. It should NOT replace professional mental health services, therapy, or crisis intervention.
pip install unsloth transformers
from unsloth import FastLanguageModel
# Load model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ibrahim-AI-dev/mental-health-counseling-chatbot",
    max_seq_length=2048,
    dtype=None,          # auto-detect the best dtype for the GPU
    load_in_4bit=True,   # 4-bit quantization to reduce VRAM usage
)
# Enable inference mode
FastLanguageModel.for_inference(model)
# Create messages
messages = [
    {"role": "system", "content": "You are a professional counselor providing empathetic and helpful responses."},
    {"role": "user", "content": "I'm feeling anxious about my future. What should I do?"},
]
# Generate response
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
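The decode step above drops the prompt by slicing the output at the prompt's token length, since model.generate() returns the prompt tokens followed by the continuation. That slicing can be isolated in a small helper (the name strip_prompt_tokens is hypothetical, not part of any library), sketched here with plain lists so the idea is visible without loading the model:

```python
def strip_prompt_tokens(output_ids, prompt_len):
    """Return only the newly generated token ids.

    model.generate() echoes the prompt before the continuation,
    so slicing at the prompt length leaves just the response tokens.
    """
    return output_ids[prompt_len:]

# Toy example: 3 prompt token ids followed by 2 generated token ids.
full_output = [101, 2054, 2003, 7099, 102]
new_tokens = strip_prompt_tokens(full_output, prompt_len=3)
```

With real tensors, prompt_len corresponds to `inputs['input_ids'].shape[1]` in the snippet above.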
# More creative responses
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.9,  # Higher = more creative
    top_p=0.95,
    top_k=50,
    repetition_penalty=1.1,
)
# More focused responses
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.3,  # Lower = more focused
    top_p=0.85,
    do_sample=True,
)
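The two variants above differ only in their sampling parameters. One way to keep such settings tidy is a small preset table that is merged into the keyword arguments passed to model.generate(). This is a sketch; PRESETS and generation_kwargs are hypothetical names, not part of transformers or Unsloth:

```python
# Decoding presets mirroring the "creative" and "focused" examples above.
PRESETS = {
    "creative": dict(temperature=0.9, top_p=0.95, top_k=50, repetition_penalty=1.1),
    "focused": dict(temperature=0.3, top_p=0.85),
}

def generation_kwargs(preset, max_new_tokens=512):
    """Merge a named preset with shared defaults for model.generate()."""
    kwargs = dict(max_new_tokens=max_new_tokens, do_sample=True)
    kwargs.update(PRESETS[preset])
    return kwargs

# Usage: outputs = model.generate(**inputs, **generation_kwargs("focused"))
```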
Merged version: Ibrahim-AI-dev/mental-health-counseling-chatbot-merged (if uploaded)
Training configuration:
- Learning Rate: 2e-4
- Batch Size: 2 (effective: 8 with gradient accumulation)
- Gradient Accumulation Steps: 4
- Optimizer: AdamW 8-bit
- Max Sequence Length: 2048
- Training Precision: Mixed (FP16/BF16)
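The "effective: 8" batch size listed above follows directly from multiplying the per-device batch size by the gradient-accumulation steps:

```python
# Gradient accumulation: optimizer steps are taken every N micro-batches,
# so the effective batch size is per-device batch size x accumulation steps.
per_device_batch_size = 2
gradient_accumulation_steps = 4
effective_batch_size = per_device_batch_size * gradient_accumulation_steps  # 2 * 4 = 8
```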
If you use this model in your research, please cite:
@misc{qwen2.5-counseling-lora,
  author    = {Your Name},
  title     = {Qwen 2.5 7B Counseling Fine-tuned (LoRA)},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/Ibrahim-AI-dev/mental-health-counseling-chatbot}
}
If you or someone you know is in crisis, please contact your local emergency services or a mental health crisis hotline immediately.
This model inherits the Apache 2.0 license from Qwen2.5-7B-Instruct.