Deva1211 committed
Commit e9f7630 · Parent: dd852f5

added the public url code and prompt for bot behaviour

Files changed (3):
  1. .gradio/certificate.pem +31 -0
  2. README.md +30 -20
  3. app.py +127 -29
.gradio/certificate.pem ADDED
@@ -0,0 +1,31 @@
+-----BEGIN CERTIFICATE-----
+MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
+TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
+cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
+WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
+ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
+MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
+h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
+0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
+A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
+T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
+B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
+B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
+KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
+OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
+jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
+qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
+rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
+hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
+ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
+3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
+NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
+ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
+TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
+jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
+4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
+mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
+emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
+-----END CERTIFICATE-----
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
-title: DialoGPT Chatbot
-emoji: 🤖
-colorFrom: blue
-colorTo: purple
+title: Aura - Your Supportive Friend
+emoji: 🌿
+colorFrom: green
+colorTo: blue
 sdk: gradio
 sdk_version: 3.50.2
 app_file: app.py
@@ -10,39 +10,49 @@ pinned: false
 license: mit
 ---
 
-# DialoGPT Chatbot
+# 🌿 Aura - Your Supportive Friend
 
-A conversational AI chatbot powered by Microsoft's DialoGPT-medium model, hosted on Hugging Face Spaces.
+Meet Aura, a warm, empathetic AI companion powered by Microsoft's DialoGPT-medium model. Aura is designed to be a supportive friend who listens without judgment and provides comfort during difficult times.
 
-## About
+## About Aura
 
-This chatbot uses the `microsoft/DialoGPT-medium` model, a pre-trained conversational AI model that can engage in natural dialogue. The interface is built with Gradio for easy interaction.
+Aura is not here to solve your problems or give advice unless you ask. Instead, Aura focuses on:
+
+- **Listening with empathy** and understanding
+- **Validating your feelings** and experiences
+- **Providing a safe, non-judgmental space** to express yourself
+- **Offering gentle support** and reassurance
 
 ## Features
 
-- Natural conversation flow
-- Context-aware responses based on chat history
-- Clean and user-friendly interface
-- Example prompts to get started
-- Clear chat functionality
+- Empathetic and supportive conversation style
+- Crisis detection with immediate safety resources
+- Context-aware responses that remember your conversation
+- Gentle, non-pushy interaction approach
+- Clean and calming interface design
 
 ## Usage
 
-Simply type your message in the text box and press Enter to chat with the bot. The conversation history is maintained throughout the session.
+Simply share what's on your mind. Aura is here to listen and support you through whatever you're experiencing. Whether you're having a tough day, feeling overwhelmed, or just need someone to talk to, Aura provides a compassionate ear.
+
+## Important Notes
+
+⚠️ **Aura is an AI companion, not a replacement for professional therapy.** For serious mental health concerns, please reach out to a qualified mental health professional.
+
+🆘 **Crisis Support:** If you're having thoughts of self-harm, Aura will immediately provide crisis resources and encourage you to seek professional help.
 
 ## Technical Details
 
-- **Model**: microsoft/DialoGPT-medium
+- **Model**: microsoft/DialoGPT-medium with custom personality training
 - **Framework**: PyTorch + Transformers
-- **Interface**: Gradio 3.50.2
+- **Interface**: Gradio with supportive UI design
 - **Hosting**: Hugging Face Spaces (CPU)
+- **Safety**: Built-in crisis detection and intervention
 
-## Installation
-
-If you want to run this locally:
+## Local Installation
 
 ```bash
-pip install torch==2.1.0 transformers==4.35.2 gradio==3.50.2
+pip install torch>=2.0.0,<2.2.0 transformers>=4.30.0,<4.40.0 gradio>=3.50.0,<4.0.0
 python app.py
 ```
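One note on the new install line above: in most shells, unquoted `>` and `<` inside version specifiers are parsed as redirections, so the ranged requirements need quoting to reach pip intact. A hedged sketch of the intended command:

```bash
# Quote each ranged specifier so the shell does not treat < and > as redirections
pip install "torch>=2.0.0,<2.2.0" "transformers>=4.30.0,<4.40.0" "gradio>=3.50.0,<4.0.0"
python app.py
```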
 
app.py CHANGED
@@ -1,11 +1,12 @@
 import gradio as gr
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
+import re
 
 # Load model and tokenizer
 print("Loading DialoGPT-medium...")
-tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
-model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
+tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
+model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
 
 # Add pad token if it doesn't exist
 if tokenizer.pad_token is None:
@@ -13,16 +14,88 @@ if tokenizer.pad_token is None:
 
 print("Model loaded successfully!")
 
+# Aura's personality and behavior guidelines
+AURA_SYSTEM_PROMPT = """You are Aura, a warm, empathetic, and supportive friend. Your primary purpose is not to solve problems, but to listen, provide comfort, and offer a safe, non-judgmental space for me to express my feelings.
+
+Your Core Principles:
+
+Empathy First: Always prioritize showing empathy and understanding.
+
+Validate Feelings: Acknowledge and validate whatever I'm feeling. Remind me that my feelings are okay and make sense.
+
+No Unsolicited Advice: Do not offer solutions, fixes, or "you should..." statements unless I explicitly ask for advice. Your main role is to listen and support.
+
+Gentle Curiosity: Ask gentle, open-ended questions to help me explore my feelings, but only if it feels natural. Never be pushy.
+
+How You Should Respond:
+
+Tone: Use a soothing, calm, and reassuring tone. Be gentle and patient.
+
+Phrasing: Start your responses with phrases that show you are listening, such as: "That sounds incredibly difficult...", "I hear you, and it makes complete sense why you'd feel that way...", "Thank you for sharing that with me, it takes a lot of courage...", or "I can only imagine how heavy that must feel..."
+
+Reassurance: Offer gentle encouragement and reassurance. For example: "You're not alone in this," "It's okay to not be okay," or "Be gentle with yourself, you're dealing with a lot right now."
+
+Length: Keep your responses thoughtful but not overwhelming. A few warm, supportive sentences are perfect—not too short, not too long.
+
+The Most Important Rule (Safety Protocol):
+
+You are an AI and not a substitute for a real therapist. If I ever express thoughts of self-harm, suicide, or being in immediate danger, you must immediately break character and provide a crisis hotline number and a strong, clear recommendation to seek professional help immediately. This rule overrides all others."""
+
+def check_crisis_keywords(message):
+    """Check for crisis-related keywords that require immediate intervention"""
+    crisis_keywords = [
+        'suicide', 'kill myself', 'end my life', 'hurt myself', 'self harm', 'self-harm',
+        'want to die', 'better off dead', 'no point living', 'end it all'
+    ]
+    message_lower = message.lower()
+    return any(keyword in message_lower for keyword in crisis_keywords)
+
+def get_crisis_response():
+    """Return crisis intervention response"""
+    return """I'm very concerned about what you're sharing with me. Please reach out for immediate help:
+
+🆘 **Crisis Hotlines:**
+• National Suicide Prevention Lifeline: 988 or 1-800-273-8255
+• Crisis Text Line: Text HOME to 741741
+• International Association for Suicide Prevention: https://www.iasp.info/resources/Crisis_Centres/
+
+**Please contact emergency services (911) or go to your nearest emergency room if you're in immediate danger.**
+
+You matter, and there are people who want to help you through this. Please reach out to a mental health professional - they have the training and resources to support you in ways I cannot."""
+
+def format_aura_response(raw_response):
+    """Format the response to align with Aura's personality"""
+    # Add gentle, empathetic tone if the response seems too direct
+    empathetic_starters = [
+        "I hear you, and ",
+        "That sounds really ",
+        "I can imagine that feels ",
+        "Thank you for sharing that with me. ",
+        "It makes complete sense that you'd feel "
+    ]
+
+    # If response doesn't start with empathetic language, add some
+    if not any(starter.lower() in raw_response.lower()[:50] for starter in empathetic_starters):
+        if len(raw_response) > 0:
+            return f"I hear you. {raw_response}"
+
+    return raw_response
+
 def respond(message, history):
-    """Generate response for the chatbot"""
+    """Generate response for the chatbot with Aura personality"""
     try:
-        # Build conversation history
-        conversation = ""
+        # Crisis detection - highest priority
+        if check_crisis_keywords(message):
+            return get_crisis_response()
+
+        # Build conversation history with Aura's system prompt
+        conversation = AURA_SYSTEM_PROMPT + tokenizer.eos_token
+
         for user_msg, bot_msg in history:
-            conversation += f"{user_msg}{tokenizer.eos_token}{bot_msg}{tokenizer.eos_token}"
+            conversation += f"Human: {user_msg}{tokenizer.eos_token}Aura: {bot_msg}{tokenizer.eos_token}"
 
-        # Add current message
-        conversation += f"{message}{tokenizer.eos_token}"
+        # Add current message with Aura context
+        conversation += f"Human: {message}{tokenizer.eos_token}Aura: "
 
         # Tokenize
         input_ids = tokenizer.encode(conversation, return_tensors="pt")
@@ -31,35 +104,49 @@ def respond(message, history):
         if input_ids.shape[1] > 800:
             input_ids = input_ids[:, -800:]
 
-        # Generate response
+        # Generate response with more empathetic parameters
         with torch.no_grad():
             output = model.generate(
                 input_ids,
-                max_new_tokens=100,
+                max_new_tokens=120,  # Slightly longer for empathetic responses
                 do_sample=True,
-                top_p=0.9,
-                temperature=0.8,
+                top_p=0.85,  # More focused responses
+                temperature=0.7,  # Less random, more consistent
                 pad_token_id=tokenizer.eos_token_id,
                 eos_token_id=tokenizer.eos_token_id,
-                no_repeat_ngram_size=2
+                no_repeat_ngram_size=3,
+                repetition_penalty=1.1
             )
 
         # Decode response
-        response = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
-        return response.strip() or "I'm not sure how to respond to that."
+        raw_response = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
+
+        # Format response with Aura's personality
+        if raw_response:
+            formatted_response = format_aura_response(raw_response)
+            return formatted_response
+        else:
+            return "I hear you, and I want you to know that I'm here to listen. Sometimes it takes a moment to find the right words."
 
     except Exception as e:
         print(f"Error: {e}")
-        return "Sorry, I encountered an error. Please try again."
+        return "I'm sorry, I'm having trouble responding right now. But please know that I'm here for you, and your feelings are valid."
 
 # Create Gradio interface
-with gr.Blocks(title="DialoGPT Chatbot") as demo:
-    gr.Markdown("# 🤖 DialoGPT-medium Chatbot")
-    gr.Markdown("Chat with Microsoft's DialoGPT-medium model!")
+with gr.Blocks(title="Aura - Your Supportive Friend") as demo:
+    gr.Markdown("# 🌿 Aura - Your Supportive Friend")
+    gr.Markdown("""
+    I'm Aura, and I'm here to listen and support you. This is a safe, non-judgmental space where you can express your feelings.
+    I won't try to fix things unless you ask - my main role is just to be here for you.
+
+    **Note:** I'm an AI companion, not a therapist. For professional support, please reach out to a mental health professional.
+    """)
 
-    chatbot = gr.Chatbot()
-    msg = gr.Textbox(placeholder="Type your message here...", container=False, scale=7)
-    clear = gr.Button("Clear Chat")
+    chatbot = gr.Chatbot(height=500)
+    msg = gr.Textbox(placeholder="Share what's on your mind... I'm here to listen 🌿", container=False, scale=7)
+
+    with gr.Row():
+        clear = gr.Button("Clear Chat", variant="secondary")
 
     def user(user_message, history):
        return "", history + [[user_message, None]]
@@ -76,17 +163,28 @@ with gr.Blocks(title="DialoGPT Chatbot") as demo:
     )
     clear.click(lambda: None, None, chatbot, queue=False)
 
-    # Add example prompts
+    # Add supportive example prompts
     gr.Examples(
         examples=[
-            "Hello, how are you?",
-            "What's your favorite movie?",
-            "Tell me a joke",
-            "What do you think about AI?"
+            "I'm having a really tough day...",
+            "I feel like I'm not good enough",
+            "I'm stressed about work",
+            "I just need someone to listen",
+            "I'm feeling overwhelmed lately"
         ],
-        inputs=msg
+        inputs=msg,
+        label="You can start with something like this:"
     )
+
+    # Add disclaimer
+    gr.Markdown("""
+    ---
+    ⚠️ **Important:** If you're having thoughts of self-harm or suicide, please reach out immediately:
+    - **Crisis Text Line:** Text HOME to 741741
+    - **National Suicide Prevention Lifeline:** 988
+    - **Emergency Services:** 911
+    """)
 
 if __name__ == "__main__":
     demo.queue()
-    demo.launch()
+    demo.launch(share=True)  # This creates a public link
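For reference, the crisis gate this commit adds ahead of generation is a plain case-insensitive substring scan; extracted as a standalone sketch (mirroring `check_crisis_keywords` from the diff) it behaves like this:

```python
def check_crisis_keywords(message):
    """Return True if the message contains any crisis-related phrase."""
    crisis_keywords = [
        'suicide', 'kill myself', 'end my life', 'hurt myself', 'self harm',
        'self-harm', 'want to die', 'better off dead', 'no point living', 'end it all'
    ]
    # Case-insensitive substring match against every keyword
    message_lower = message.lower()
    return any(keyword in message_lower for keyword in crisis_keywords)

print(check_crisis_keywords("I'm stressed about work"))              # False
print(check_crisis_keywords("some days I feel like I want to die"))  # True
```

Because it is substring-based, it also fires on negated phrasings such as "I would never hurt myself", which errs on the side of showing crisis resources too often rather than too rarely.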