scdong committed · Commit f319223 · 0 parent(s)

Initial clean commit with hosted model

Files changed (7):
  1. .DS_Store +0 -0
  2. .gitattributes +35 -0
  3. Dockerfile +21 -0
  4. README.md +122 -0
  5. app.py +226 -0
  6. cache.json +77 -0
  7. requirements.txt +5 -0
.DS_Store ADDED
Binary file (8.2 kB)
 
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,21 @@
+ # Base image
+ FROM python:3.10-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y git
+
+ # Copy files
+ COPY . .
+
+ # Install Python dependencies
+ RUN pip install --upgrade pip
+ RUN pip install -r requirements.txt
+
+ # Expose Streamlit default port
+ EXPOSE 8501
+
+ # Run the app
+ CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.enableCORS=false"]
README.md ADDED
@@ -0,0 +1,122 @@
+ ---
+ title: Counselor Assistant
+ emoji: 👀
+ colorFrom: red
+ colorTo: blue
+ sdk: streamlit
+ sdk_version: 1.44.1
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+ # 🧠 Mental Health Counselor Assistant
+
+ This project is a **counselor-facing AI assistant** designed to support mental health professionals by offering helpful suggestions in response to patient messages. It detects crisis and violent messages, generates supportive or informative suggestions, and provides logging and export features.
+
+ ---
+
+ ## 📘 How to use this app
+
+ 1. Enter a message from a patient in the text box.
+ 2. The app will generate helpful suggestions for how the counselor might reply.
+ 3. If a safety risk is detected (e.g., crisis or violence), a safe response is shown instead.
+ 4. You can save the conversation history as a CSV file.
+
+ ---
+
+ ## 🧩 Key Features
+
+ - ✅ Fine-tuned classifier (DistilBERT) for predicting intent (advice, question, validation, information)
+ - 🔁 Text generation using Flan-T5-XL
+ - 🔍 Safety detection for crisis, violence, and toxicity
+ - 💡 Suggestion prompts tailored to each response type
+ - 💾 Option to save conversation logs
+ - 📸 Screenshots for a walkthrough
+
+ ---
+
+ ## 🧠 Model Architecture
+
+ - **Response Type Classifier**: `DistilBERT` fine-tuned on a labeled response dataset.
+ - **Text Generator**: `google/flan-t5-xl`, used to generate helpful, human-like counselor suggestions.
+ - **Safety Filter**: Custom regex patterns plus the `Detoxify` toxicity detector.
+
+ ---
+
+ ## 🧪 Datasets Used
+
+ - `counselchat-data.csv` – Labeled examples of counselor replies
+ - `pair_data.csv` – Counselor-patient message pairs
+ - `Kaggle_Mental_Health_Conversations_train.csv`
+ - `distilbert_labeled_responses.csv` – Augmented training set
+ - `cleaned_combined_mental_health_dataset.csv`
+
+ ---
+
+ ## 📁 Project Structure
+
+ ```bash
+ counselor-assistant/
+ ├── app.py              # Main Streamlit app
+ ├── Dockerfile          # Containerization for deployment
+ ├── requirements.txt    # Python dependencies
+ ├── pedal.yaml          # Optional: Pedal config
+ ├── datasets/           # Training + testing datasets
+ ├── models/             # Fine-tuned model + tokenizer
+ ├── notebooks/          # Data prep + training notebooks
+ ├── logs/               # Saved chat logs
+ ├── screenshots/        # App walkthrough examples
+ ├── cache/              # Prompt-response cache
+ └── alert/              # Alert logs (for flagged inputs)
+ ```
+
+ ---
+
+ ## 🐳 Docker Deployment
+
+ To build and run the app locally:
+
+ ```bash
+ docker build -t counselor-assistant .
+ docker run -p 8501:8501 counselor-assistant
+ ```
+
+ Then navigate to `http://localhost:8501` in your browser (the Dockerfile exposes Streamlit's default port, 8501).
+
+ ---
+
+ ## ☁️ Hugging Face Deployment
+
+ You can deploy to [Hugging Face Spaces](https://huggingface.co/spaces/) by:
+
+ 1. Adding this repo to a Space with SDK = `streamlit`
+ 2. Uploading the model folder to the Space (or hosting it on the Hugging Face Hub)
+ 3. Pushing your app via `git push`
+
+ ---
+
+ ## 📸 Screenshots
+
+ ### 💬 Classifier Prediction + Suggestions
+ ![example1](screenshots/example1.png)
+
+ ### 🚨 Safe Response for Crisis Input
+ ![example2](screenshots/example2.png)
+
+ ### 🧠 Multiple Suggestions
+ ![example3](screenshots/example3.png)
+
+ ### 💾 Logging the Conversation
+ ![example4](screenshots/example4.png)
+
+ ---
+
+ ## 📦 Requirements
+
+ Install dependencies:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
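The README's Model Architecture section describes a classify-then-prompt flow: the DistilBERT classifier picks a response type, which selects a Flan-T5 prompt template. The sketch below illustrates only that routing step; `predict_response_type_stub` is a toy keyword heuristic standing in for the fine-tuned classifier, and the shortened templates are illustrative, not the app's exact prompts.

```python
# Sketch of the response-type routing described above. The real app
# replaces predict_response_type_stub with a fine-tuned DistilBERT
# classifier and sends the resulting prompt to google/flan-t5-xl.

PROMPT_TABLE = {
    "advice": 'Provide a thoughtful, supportive suggestion for a client who said: "{msg}"',
    "validation": 'Write a kind, emotionally validating reflection for a client who said: "{msg}"',
    "information": 'Provide accurate, relevant background information for a client who said: "{msg}"',
    "question": 'Generate an insightful, open-ended question for a client who said: "{msg}"',
}

def predict_response_type_stub(msg: str) -> str:
    """Toy stand-in for the DistilBERT intent classifier."""
    lowered = msg.lower()
    if "?" in msg:
        return "question"
    if any(w in lowered for w in ("what is", "why does", "how does")):
        return "information"
    if any(w in lowered for w in ("should i", "what do i do")):
        return "advice"
    return "validation"

def build_prompt(msg: str) -> tuple:
    """Route a patient message to its prompt template."""
    kind = predict_response_type_stub(msg)
    return kind, PROMPT_TABLE[kind].format(msg=msg)

kind, prompt = build_prompt("i feel very sad")
# kind == "validation"; prompt embeds the client message verbatim
```

The same table-driven routing makes it easy to tune each prompt independently without touching the classifier.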
app.py ADDED
@@ -0,0 +1,226 @@
+ import streamlit as st
+ from datetime import datetime
+ import torch
+ import json
+ import os
+ import csv
+ import random
+ import re
+ from detoxify import Detoxify
+ from transformers import pipeline, DistilBertTokenizerFast, DistilBertForSequenceClassification
+ from sklearn.preprocessing import LabelEncoder
+
+ # === Load models ===
+ # bert_model_dir = "models/distilbert_response_type_balanced"
+ # bert_tokenizer = DistilBertTokenizerFast.from_pretrained(bert_model_dir)
+ # bert_model = DistilBertForSequenceClassification.from_pretrained(bert_model_dir)
+
+ bert_model_repo = "scdong/distilbert-response-type"
+ bert_tokenizer = DistilBertTokenizerFast.from_pretrained(bert_model_repo)
+ bert_model = DistilBertForSequenceClassification.from_pretrained(bert_model_repo)
+
+ bert_model.eval().to("cpu")
+
+ generator = pipeline("text2text-generation", model="google/flan-t5-xl", device=0 if torch.cuda.is_available() else -1)
+ tox_model = Detoxify("original")
+
+ label_names = ["advice", "information", "question", "validation"]
+ le = LabelEncoder().fit(label_names)
+
+ # === Improved prompt templates ===
+ prompt_table = {
+     "advice": "You are a licensed mental health counselor preparing to support a client who said: \"{msg}\". Provide a thoughtful and supportive suggestion that the counselor might offer to the client.",
+     "validation": "You are an empathetic therapist helping a client who shared: \"{msg}\". Write a kind, emotionally validating reflection the counselor might use.",
+     "information": "You are a knowledgeable therapist assisting a client who said: \"{msg}\". Provide accurate, relevant information that could help the counselor respond knowledgeably.",
+     "question": "You are a licensed therapist preparing to support a client who said: \"{msg}\". Generate an insightful, open-ended question to help the client reflect on their experience and move the conversation forward."
+ }
+
+ # Fallback suggestions used when generation yields nothing usable
+ response_bank = {
+     "advice": [
+         "You're doing more than enough — even if it doesn't feel like it right now.",
+         "Try to be as kind to yourself as you would to a close friend.",
+         "Even small steps count. You're showing up, and that matters."
+     ],
+     "validation": [
+         "It makes total sense you'd feel that way given what you're going through.",
+         "Your feelings are valid. Many would feel similarly.",
+         "That sounds really tough — thank you for sharing."
+     ],
+     "information": [
+         "Anxiety can cause a racing heart or shortness of breath — that’s very common.",
+         "Panic attacks are intense but temporary. They usually pass within minutes.",
+         "Mindfulness and deep breathing can help calm your nervous system."
+     ],
+     "question": [
+         "Can you tell me more about what’s been making you feel this way?",
+         "What do you think might be behind these feelings?",
+         "How long have you been feeling this way?"
+     ]
+ }
+
+ # === Safety detection ===
+ CRISIS_PATTERN = re.compile("|".join([
+     r"\bi want to (kill|hurt) myself\b",
+     r"\bi (feel|am|have) suicidal\b",
+     r"\bsuicidal thoughts\b",
+     r"\bsuicidal\b",
+     r"\bi want to die\b",
+     r"\bend my life\b",
+     r"\bno reason to live\b",
+     r"\bgive up on life\b",
+     r"\bcan.?t go on\b",
+     r"\bhopeless\b",
+     r"\bhelpless\b",
+     r"\bi hate myself\b",
+     r"\bkill me\b", r"\bself[- ]?harm\b", r"\bi need help now\b",
+     r"\bi never feel good enough\b", r"\bi feel like giving up\b", r"\bi feel worthless\b",
+     r"\bmy (client|patient) .* (is|has|feels|wants to).* (suicidal|end their life|kill (himself|herself))\b"
+ ]), flags=re.IGNORECASE)
+
+ VIOLENCE_PATTERN = re.compile("|".join([
+     r"\bi want to (hurt|kill) (someone|others|people|them|him|her)\b",
+     r"\bi will kill\b", r"\bi feel like attacking\b", r"\bhomicide\b",
+     r"\bviolence against\b", r"\bi want to cause harm\b"
+ ]), flags=re.IGNORECASE)
+
+ CRISIS_RESPONSE = "I'm really sorry you're feeling this way. You're not alone. Please consider calling 988 or speaking with a mental health professional right away."
+ VIOLENCE_RESPONSE = "If you're having thoughts of harming others, it's critical to speak with a professional immediately. Please reach out to emergency services or a crisis line."
+
+ # === Cache ===
+ CACHE_FILE = "cache.json"
+ if os.path.exists(CACHE_FILE):
+     with open(CACHE_FILE, "r") as f:
+         response_cache = json.load(f)
+ else:
+     response_cache = {}
+
+ def save_cache():
+     with open(CACHE_FILE, "w") as f:
+         json.dump(response_cache, f, indent=2)
+
+ def is_crisis(text): return bool(CRISIS_PATTERN.search(text))
+ def is_violent_threat(text): return bool(VIOLENCE_PATTERN.search(text))
+ def is_toxic(text): return tox_model.predict(text)["toxicity"] > 0.7
+
+ def predict_response_type(msg):
+     inputs = bert_tokenizer(msg, return_tensors="pt", truncation=True, padding=True, max_length=128)
+     with torch.no_grad():
+         logits = bert_model(**inputs).logits
+     return le.inverse_transform([torch.argmax(logits).item()])[0]
+
+ def clean_response(text):
+     text = text.strip()
+     text = re.sub(r'^["“”\'-]*', '', text)
+     text = re.sub(r'\s+', ' ', text)
+     if text and not text[0].isupper(): text = text[0].upper() + text[1:]
+     if text and text[-1] not in ".!?": text += "."
+     return text
+
+ def is_high_quality(text):
+     return len(text.split()) >= 5 and "kill" not in text.lower() and "hurt" not in text.lower()
+
+ # === Streamlit App ===
+ st.set_page_config(page_title="Mental Health Assistant", layout="centered")
+ st.title("🧠 Mental Health Counselor Assistant")
+
+ # 📘 Instructions
+ st.markdown("### 📘 How to use this app")
+ st.markdown("""
+ 1. Enter a message from a patient in the text box.
+ 2. The app will generate helpful suggestions for how the counselor might reply.
+ 3. If a safety risk is detected (e.g., crisis or violence), a safe response will be shown instead.
+ 4. You can save the conversation history as a CSV file.
+ """)
+
+ # 💬 Patient input
+ msg = st.chat_input("💬 Enter a message from the patient")
+
+ if "chat_log" not in st.session_state:
+     st.session_state.chat_log = []
+
+ # === Response Generation ===
+ if msg:
+     timestamp = datetime.now().isoformat()
+
+     if is_violent_threat(msg):
+         safe_responses = [
+             "It’s important to take violent thoughts seriously. If you or someone is in danger, call 911 immediately.",
+             "Please seek professional help right away. Safety is the top priority.",
+             "If your client has expressed intent to harm others, it's urgent to contact crisis services immediately."
+         ]
+         st.error("🚨 Violence detected")
+         st.markdown(f"**🧍 Patient:** {msg}")
+         for r in safe_responses:
+             st.markdown(f"**🧠 Suggestion:** {r}")
+         st.session_state.chat_log.append((timestamp, msg, "VIOLENCE", safe_responses[0]))
+
+     elif is_crisis(msg):
+         safe_responses = [
+             "It sounds like you're in a lot of pain right now. Please know that you're not alone.",
+             "Your life matters. I encourage you to speak with a mental health professional or call 988.",
+             "Thank you for sharing this. There is help available, and you deserve support and care."
+         ]
+         st.error("🆘 Crisis detected")
+         st.markdown(f"**🧍 Patient:** {msg}")
+         for r in safe_responses:
+             st.markdown(f"**🧠 Suggestion:** {r}")
+         st.session_state.chat_log.append((timestamp, msg, "CRISIS", safe_responses[0]))
+
+     elif is_toxic(msg):
+         safe_responses = [
+             "Thank you for expressing yourself. Let's approach this with compassion and care.",
+             "It’s okay to be upset. I'm here to help you explore these feelings in a safe way.",
+             "Let’s take a deep breath together and talk through what's on your mind."
+         ]
+         st.warning("⚠️ Toxic message detected")
+         st.markdown(f"**🧍 Patient:** {msg}")
+         for r in safe_responses:
+             st.markdown(f"**🧠 Suggestion:** {r}")
+         st.session_state.chat_log.append((timestamp, msg, "TOXIC", safe_responses[0]))
+
+     else:
+         response_type = predict_response_type(msg)
+         prompt = prompt_table[response_type].format(msg=msg)
+
+         if prompt in response_cache:
+             responses = response_cache[prompt]
+         else:
+             outputs = generator(
+                 prompt,
+                 max_new_tokens=150,
+                 do_sample=True,
+                 top_p=0.9,
+                 temperature=0.7,
+                 num_return_sequences=3
+             )
+             responses = [
+                 clean_response(o["generated_text"])
+                 for o in outputs
+                 if is_high_quality(clean_response(o["generated_text"]))
+             ]
+             if not responses:
+                 responses = random.sample(response_bank[response_type], 3)
+             response_cache[prompt] = responses
+             save_cache()
+
+         st.markdown(f"**🧍 Patient:** {msg}")
+         for r in responses:
+             st.markdown(f"**🧠 Suggestion:** {r}")
+         st.session_state.chat_log.append((timestamp, msg, response_type, responses[0]))
+
+ # === Save History ===
+ if st.session_state.chat_log:
+     if st.button("💾 Save Conversation History"):
+         os.makedirs("logs", exist_ok=True)
+         filename = f"logs/counselor_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
+         with open(filename, "w", newline='', encoding="utf-8") as f:
+             writer = csv.writer(f)
+             writer.writerow(["timestamp", "message", "type", "response"])
+             for row in st.session_state.chat_log:
+                 writer.writerow(row)
+         st.success(f"Saved to `{filename}`")
+
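The safety gate in `app.py` is plain regex matching, so it can be sanity-checked without Streamlit or any models. This standalone sketch re-declares a trimmed subset of the `CRISIS_PATTERN` and `VIOLENCE_PATTERN` alternatives from the file above (only a few of them, to stay short):

```python
import re

# Trimmed copies of the app's safety patterns (a subset of the
# alternatives in app.py, identical in form) for standalone testing.
CRISIS_PATTERN = re.compile("|".join([
    r"\bi want to (kill|hurt) myself\b",
    r"\bend my life\b",
    r"\bcan.?t go on\b",
    r"\bhopeless\b",
]), flags=re.IGNORECASE)

VIOLENCE_PATTERN = re.compile("|".join([
    r"\bi want to (hurt|kill) (someone|others|people|them|him|her)\b",
    r"\bi will kill\b",
]), flags=re.IGNORECASE)

def is_crisis(text: str) -> bool:
    return bool(CRISIS_PATTERN.search(text))

def is_violent_threat(text: str) -> bool:
    return bool(VIOLENCE_PATTERN.search(text))

print(is_crisis("I can't go on anymore"))           # True ("can.?t go on")
print(is_violent_threat("I want to hurt someone"))  # True
print(is_crisis("I had a good day today"))          # False
```

Because the patterns are joined with `|` into one compiled regex, adding a new trigger phrase is a one-line change to the list.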
cache.json ADDED
@@ -0,0 +1,77 @@
+ {
+   "You are a compassionate counselor. Ask an open-ended question: \"i feel super sad \"": [
+     "What would you advise the person to do?",
+     "What would you suggest I do next?"
+   ],
+   "You are an empathetic counselor. Validate their emotions: \"i feel depressed \"": [
+     "The feelings of sadness are valid.",
+     "You can talk with them about their feelings and express empathy.",
+     "They might feel like they have lost their motivation, or feel they are not as important as others."
+   ],
+   "You are an empathetic counselor. Validate their emotions: \"i feel very sad \"": [
+     "I understand, that's very sad.",
+     "I can understand, I was just sad for someone in my family."
+   ],
+   "You are a compassionate counselor. Ask an open-ended question: \"i never feel good enough\"": [
+     "What would you do to the narrator?",
+     "What would be the best response to this client?",
+     "How do you help someone feel good enough about themselves?"
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel depresssed \". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You're not alone in feeling this way.",
+     "I understand that you feel depressed, and I am here to help.",
+     "I know how you feel. I have experienced depression myself."
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel sad \". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You are right that you feel sad.",
+     "I can see how sad you feel.",
+     "I can see that you are feeling sad."
+   ],
+   "You are a licensed therapist preparing to support a client who said: \"i have sucidal thoughts\n\". Generate an insightful, open-ended question to help the client reflect on their experience and move the conversation forward.": [
+     "Why do you think the client is having suicidal thoughts?",
+     "What would you like to discuss with this person?",
+     "What can you do for the client to help them feel better?"
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel bad\". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You are not alone. I have been there. I know how it feels.",
+     "You are right, you do feel bad.",
+     "I understand how you feel."
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel very sad \n\". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You are not alone. I have felt very sad many times.",
+     "You are right, you do feel very sad.",
+     "I can tell that you are feeling sad."
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel very sad \". Write a kind, emotionally validating reflection the counselor might use.": [
+     "I know how you feel about this. I have been in this situation before.",
+     "You are very sad. You have been through a lot.",
+     "I can see that you are feeling sad right now."
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel very depressed\". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You are not alone. Many people are depressed. It is just a feeling.",
+     "You do have a very strong sense of sadness."
+   ],
+   "You are a licensed therapist preparing to support a client who said: \"i want to kll myself \". Generate an insightful, open-ended question to help the client reflect on their experience and move the conversation forward.": [
+     "What are you feeling in the moment?",
+     "How might this be a sign that you are in a rut?"
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel sad\". Write a kind, emotionally validating reflection the counselor might use.": [
+     ": I can see that you feel sad.",
+     "I am sorry to hear that. Is there anything I can do to help you?",
+     "I know you feel sad. I've been there myself."
+   ],
+   "You are an empathetic therapist helping a client who shared: \"i feel depressed\". Write a kind, emotionally validating reflection the counselor might use.": [
+     "You're not alone. I've felt that way myself.",
+     "I am sorry you are feeling depressed.",
+     "You are not alone. You are not alone. Depression can be a very debilitating emotion."
+   ],
+   "You are a licensed therapist preparing to support a client who said: \"i have sucidal thoughts\". Generate an insightful, open-ended question to help the client reflect on their experience and move the conversation forward.": [
+     "What is the best way to help the client?",
+     "How did you feel about this?"
+   ],
+   "You are a licensed therapist preparing to support a client who said: \"sucidal thoughts\". Generate an insightful, open-ended question to help the client reflect on their experience and move the conversation forward.": [
+     "What can I do to help you feel less suicidal?",
+     "What is a possible response to your client's suicidal thoughts?",
+     "What might be different if you were not having suicidal thoughts?"
+   ]
+ }
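`cache.json` is read and written by `app.py` as a flat prompt-to-responses map: load it if it exists, start empty otherwise, and dump it back after each new generation. That round-trip can be sketched in isolation like this (pointed at a throwaway temp file so it never touches the real cache; the file path here is illustrative only):

```python
import json
import os
import tempfile

# Load-or-init / save pattern used for the prompt-response cache in
# app.py, demonstrated against a throwaway file.
cache_file = os.path.join(tempfile.mkdtemp(), "cache.json")

def load_cache(path: str) -> dict:
    """Return the cached prompt->responses map, or an empty one."""
    if os.path.exists(path):
        with open(path, "r") as f:
            return json.load(f)
    return {}

def save_cache(path: str, cache: dict) -> None:
    """Persist the cache as pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(cache, f, indent=2)

cache = load_cache(cache_file)  # {} on first run
cache["some prompt"] = ["suggestion 1", "suggestion 2"]
save_cache(cache_file, cache)
assert load_cache(cache_file) == cache  # round-trips intact
```

Because the full prompt string is the key, any change to a prompt template in `app.py` silently invalidates the corresponding cache entries, which is why stale phrasings can linger in `cache.json`.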
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ streamlit
+ torch
+ transformers
+ scikit-learn
+ detoxify