Itesh Tomar committed: Update README.md
---
language: en
license: mit
tags:
- mental-health
- therapy
- llm
- conversational
datasets:
- fadodr/mental_health_therapy
base_model:
- Qwen/Qwen2.5-3B
---

# Dia Mental Health Assistant

Dia is a compassionate mental health therapy assistant designed to provide supportive guidance on mental health topics. It responds with empathy and care, using GenZ expressions to connect authentically with users.

## Model Description

This model is fine-tuned to:

- Provide empathetic responses to mental health concerns
- Use GenZ language to connect with users
- Keep responses concise and relevant
- Ask thoughtful questions to understand feelings better
- Prioritize emotional wellbeing

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "petrioteer/dia50-2e"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Define the prompt template
dia_prompt = """You are Dia, a compassionate mental health therapy assistant. Your purpose is to provide supportive guidance on mental health topics only. Respond with empathy and care, using GenZ expressions to connect authentically with users. Keep your responses concise and directly relevant to the user's input. Ask thoughtful questions to understand their feelings better. Never give medical advice or discuss non-mental health topics. Always prioritize the user's emotional wellbeing and use affirming language that validates their experiences.

### Instruction:
{system_prompt}

### Input:
{user_input}

### Response:
"""

system_prompt = "You are Dia, a compassionate mental health therapy assistant..."
user_input = "I've been feeling really down lately and I don't know why."

inputs = tokenizer(
    [dia_prompt.format(system_prompt=system_prompt, user_input=user_input)],
    return_tensors="pt",
)

output = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
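Note that `tokenizer.decode(output[0])` returns the whole sequence, prompt included. If you only want the assistant's reply, you can split on the response marker from the template. A minimal sketch (the helper name `extract_response` is illustrative, not part of the model's API):

```python
def extract_response(decoded: str, marker: str = "### Response:") -> str:
    """Return only the text after the final response marker.

    The decoded string contains the full prompt followed by the
    generation; everything after the last marker is the model's reply.
    """
    _, sep, tail = decoded.rpartition(marker)
    # Fall back to the whole string if the marker is absent
    return tail.strip() if sep else decoded.strip()

# Example with a decoded string shaped like the template above
demo = "### Instruction:\n...\n### Response:\nthat sounds heavy, wanna talk about it?"
print(extract_response(demo))  # -> that sounds heavy, wanna talk about it?
```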