# Fine-Tuned LLaMA-3 8B Mental Health Conversational Model

## Model Overview
This is a fine-tuned version of LLaMA-3 8B Instruct, specifically adapted for conversational mental health support. The model has been fine-tuned using LoRA / QLoRA techniques and quantized to 4-bit for efficient inference. It is ideal for applications requiring lightweight deployment without compromising the quality of responses.
- Base Model: LLaMA-3 8B Instruct
- Fine-Tuning: Mental health conversational dataset
- Technique: LoRA / QLoRA
- Quantization: 4-bit (GGUF)
- File Format: `model.Q4_K_M.gguf`
This model is optimized for generating empathetic, safe, and context-aware responses for mental health conversations. It is intended for research, personal, or educational use.
## How to Download
The model file is available in the `Kush26/Mental_Health_ChatBot` repository on the Hugging Face Hub.
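If you prefer to script the download, here is a minimal sketch using only the Python standard library. The repo id `Kush26/Mental_Health_ChatBot` is taken from the model tree at the end of this card; the `resolve/main` URL pattern is the standard Hugging Face file-download route.

```python
# Sketch: build the Hugging Face download URL for the GGUF file and
# (optionally) fetch it. The file is several GB, so the actual download
# is left commented out.
from urllib.request import urlretrieve

REPO_ID = "Kush26/Mental_Health_ChatBot"
FILENAME = "model.Q4_K_M.gguf"
url = f"https://huggingface.co/{REPO_ID}/resolve/main/{FILENAME}"

# Uncomment to download to the current directory:
# urlretrieve(url, FILENAME)
print(url)
```

The `huggingface-cli download` command or the `huggingface_hub` library will do the same job with resumable downloads.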
## Using in LM Studio
Follow these steps to use the model in LM Studio:
### Install LM Studio
Download and install LM Studio from https://lmstudio.ai.
### Add the Model
- Open LM Studio.
- Click "Add Model" or "Load Local Model".
- Select the downloaded `model.Q4_K_M.gguf` file.
### Configure Model Settings
- Choose appropriate context length (e.g., 2048 tokens).
- Enable GPU acceleration if available for faster inference.
- Adjust any sampling parameters (temperature, top-p) as needed.
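The same settings carry over if you run the GGUF file outside LM Studio. A minimal sketch using llama-cpp-python (an assumption; any llama.cpp-based runtime exposes equivalent parameters), with the actual model load commented out since it requires the downloaded file:

```python
# Sketch: loading the GGUF with llama-cpp-python, mirroring the
# LM Studio settings above. Values are illustrative starting points.
params = {
    "n_ctx": 2048,       # context length, as suggested above
    "n_gpu_layers": -1,  # offload all layers when a GPU is available
}
sampling = {"temperature": 0.7, "top_p": 0.9}  # tune to taste

# Requires: pip install llama-cpp-python, plus the downloaded file.
# from llama_cpp import Llama
# llm = Llama(model_path="model.Q4_K_M.gguf", **params)
# out = llm.create_chat_completion(
#     messages=[{"role": "user", "content": "I've been feeling anxious lately."}],
#     **sampling,
# )
# print(out["choices"][0]["message"]["content"])
```

Lower temperatures keep responses more conservative, which is usually preferable in a sensitive support setting.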
### Start Chatting
- Open a new chat session.
- Interact with the model for mental health conversations or research purposes.
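Beyond the chat window, LM Studio can serve loaded models over an OpenAI-compatible local API (default port 1234; check your LM Studio version, as this is an assumption). A standard-library sketch of a request, with the network call commented out so it only runs once the server is started:

```python
# Sketch: querying the model through LM Studio's local
# OpenAI-compatible server endpoint.
import json
from urllib.request import Request, urlopen

payload = {
    "model": "model.Q4_K_M.gguf",
    "messages": [
        {"role": "system",
         "content": "You are a supportive, empathetic listener. "
                    "You are not a substitute for professional care."},
        {"role": "user", "content": "I've had trouble sleeping this week."},
    ],
    "temperature": 0.7,
}

req = Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with the LM Studio server running:
# with urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
#     print(reply)
```

A system prompt like the one above is a simple way to keep the safety framing from the Notes section attached to every conversation.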
## Notes
- This model is not a substitute for professional mental health care.
- Use responsibly and ensure privacy when handling sensitive conversations.
- Compatible with LM Studio version 1.9 and above.
## Model Tree
Kush26/Mental_Health_ChatBot is derived from the base model `unsloth/llama-3-8b-Instruct-bnb-4bit`.