Muhammad Saad committed on
Commit
8770644
·
1 Parent(s): 1f2a4d7

Add application file

.gitattributes DELETED
@@ -1,35 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,18 @@
+ # Base image
+ FROM python:3.11-slim
+
+ # Set work directory
+ WORKDIR /app
+
+ # Install dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy project files
+ COPY . .
+
+ # Expose the port Hugging Face expects
+ EXPOSE 7860
+
+ # Command to run FastAPI with uvicorn
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,10 @@
  ---
- title: Launchlab
- emoji: 🏢
- colorFrom: indigo
- colorTo: yellow
+ title: Innoscribechatbot
+ emoji: 🐨
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
  pinned: false
- short_description: Launchlab
  ---

  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.log ADDED
@@ -0,0 +1,636 @@
+ 2025-12-12 16:05:59,387 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:08:49,149 - app - INFO - Created new session: 122e3ce9-ab5b-4fbf-957c-7ba8a25c4068
+ 2025-12-12 16:09:17,819 - app - INFO - Stream request from 127.0.0.1: language=english, message=hello..., session_id=122e3ce9-ab5b-4fbf-957c-7ba8a25c4068
+ 2025-12-12 16:09:19,125 - app - INFO - AGENT STREAM CALL: query_innscribe_bot_stream called with message='hello', language='english', session_id='122e3ce9-ab5b-4fbf-957c-7ba8a25c4068'
+ 2025-12-12 16:09:19,434 - app - INFO - Retrieved 1 history messages for session 122e3ce9-ab5b-4fbf-957c-7ba8a25c4068
+ 2025-12-12 16:09:25,853 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 16:09:26,255 - app - WARNING - Streaming failed, falling back to non-streaming: 1 validation error for ResponseTextDeltaEvent
+ logprobs
+ Field required [type=missing, input_value={'content_index': 0, 'del...', 'sequence_number': 3}, input_type=dict]
+ For further information visit https://errors.pydantic.dev/2.10/v/missing
+ 2025-12-12 16:09:26,401 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 16:09:28,465 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 16:09:28,782 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 16:09:28,808 - app - INFO - AGENT STREAM RESULT: query_innscribe_bot_stream fallback completed successfully
+ 2025-12-12 16:18:50,797 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:20:40,529 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:21:01,848 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:21:12,993 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:21:49,388 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:23:41,853 - app - INFO - Meeting scheduling request from Taha (tahasaif451@gmail.com) - IP: 127.0.0.1
+ 2025-12-12 16:23:44,401 - app - ERROR - Error scheduling meeting: Invalid `to` field. The email address needs to follow the `email@example.com` or `Name <email@example.com>` format.
+ Traceback (most recent call last):
+ File "D:\innocscribe_chatbot\app.py", line 669, in schedule_meeting
+ email = resend.Emails.send(params)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\emails\_emails.py", line 277, in send
+ ).perform_with_content()
+ ^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\request.py", line 44, in perform_with_content
+ resp = self.perform()
+ ^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\request.py", line 35, in perform
+ raise_for_code_and_type(
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\exceptions.py", line 213, in raise_for_code_and_type
+ raise error_from_list(
+ resend.exceptions.ValidationError: Invalid `to` field. The email address needs to follow the `email@example.com` or `Name <email@example.com>` format.
+ 2025-12-12 16:25:03,901 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:25:15,460 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:25:24,658 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:25:32,997 - app - INFO - Meeting scheduling request from Taha (tahasaif451@gmail.com) - IP: 127.0.0.1
+ 2025-12-12 16:25:35,833 - app - ERROR - Failed to send email: You can only send testing emails to your own email address (kashankhalid429@gmail.com). To send emails to other recipients, please verify a domain at resend.com/domains, and change the `from` address to an email using this domain.
+ Traceback (most recent call last):
+ File "D:\innocscribe_chatbot\app.py", line 699, in schedule_meeting
+ email = resend.Emails.send(params)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\emails\_emails.py", line 277, in send
+ ).perform_with_content()
+ ^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\request.py", line 44, in perform_with_content
+ resp = self.perform()
+ ^^^^^^^^^^^^^^
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\request.py", line 35, in perform
+ raise_for_code_and_type(
+ File "C:\Users\LATITUDE\AppData\Roaming\Python\Python312\site-packages\resend\exceptions.py", line 219, in raise_for_code_and_type
+ raise ResendError(
+ resend.exceptions.ResendError: You can only send testing emails to your own email address (kashankhalid429@gmail.com). To send emails to other recipients, please verify a domain at resend.com/domains, and change the `from` address to an email using this domain.
+ 2025-12-12 16:27:02,845 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:27:15,478 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:27:26,854 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:27:36,519 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:27:46,349 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:27:54,557 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 16:28:03,553 - app - INFO - Meeting scheduling request from Taha (tahasaif451@gmail.com) - IP: 127.0.0.1
+ 2025-12-12 16:28:06,076 - app - INFO - Email sent successfully to 2 attendees
+ 2025-12-12 16:28:06,077 - app - INFO - Meeting scheduled successfully by Taha from IP: 127.0.0.1
+ 2025-12-12 16:30:31,463 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 19:08:04,015 - __main__ - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 19:13:53,991 - __main__ - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080']
+ 2025-12-12 20:13:45,859 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:15:17,352 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:15:25,529 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8080', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:16:03,949 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:20:55,002 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:21:44,856 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 20:23:13,561 - app - INFO - Created new session: 06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b
+ 2025-12-12 20:26:00,988 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b
+ 2025-12-12 20:26:02,397 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b'
+ 2025-12-12 20:26:02,764 - app - INFO - Retrieved 1 history messages for session 06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b
+ 2025-12-12 20:26:13,087 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 20:26:13,505 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 20:26:13,541 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-12 20:40:50,991 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi, what is launchlab do?..., session_id=06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b
+ 2025-12-12 20:40:52,947 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi, what is launchlab do?', language='english', session_id='06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b'
+ 2025-12-12 20:40:53,990 - app - INFO - Retrieved 2 history messages for session 06ee5ddf-19f5-4651-bd5a-c31f0f3aa31b
+ 2025-12-12 20:40:55,833 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 20:40:57,743 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 20:40:57,829 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='What does Launchlabs do?', source='auto'
+ 2025-12-12 20:40:59,532 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-12 20:41:01,167 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 20:41:01,405 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-12 21:16:10,289 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 21:16:59,536 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 21:16:59,695 - app - INFO - API Messages request from 127.0.0.1: message='hi...', lang='english', session='None'
+ 2025-12-12 21:17:04,273 - app - INFO - New session created for /api/messages: c6fcf4aa-1330-4960-8fb4-a989281b15a3
+ 2025-12-12 21:17:05,695 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='hi', language='english', session_id='c6fcf4aa-1330-4960-8fb4-a989281b15a3'
+ 2025-12-12 21:17:06,366 - app - INFO - Retrieved 1 history messages for session c6fcf4aa-1330-4960-8fb4-a989281b15a3
+ 2025-12-12 21:17:09,767 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:17:09,927 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:17:10,088 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:17:11,623 - app - INFO - API Messages success: Response sent for session c6fcf4aa-1330-4960-8fb4-a989281b15a3
+ 2025-12-12 21:17:29,246 - app - INFO - API Messages request from 127.0.0.1: message='Hi...', lang='english', session='8726b155-4f50-492d-957f-e3be8ee49f55'
+ 2025-12-12 21:17:29,614 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='Hi', language='english', session_id='8726b155-4f50-492d-957f-e3be8ee49f55'
+ 2025-12-12 21:17:30,868 - app - INFO - Retrieved 0 history messages for session 8726b155-4f50-492d-957f-e3be8ee49f55
+ 2025-12-12 21:17:32,824 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:17:33,351 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:17:33,679 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:17:34,136 - app - INFO - API Messages success: Response sent for session 8726b155-4f50-492d-957f-e3be8ee49f55
+ 2025-12-12 21:28:34,869 - app - INFO - API Messages request from 127.0.0.1: message='Hva gjør launchLab?...', lang='english', session='8726b155-4f50-492d-957f-e3be8ee49f55'
+ 2025-12-12 21:28:36,943 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='Hva gjør launchLab?', language='english', session_id='8726b155-4f50-492d-957f-e3be8ee49f55'
+ 2025-12-12 21:28:37,227 - app - INFO - Retrieved 0 history messages for session 8726b155-4f50-492d-957f-e3be8ee49f55
+ 2025-12-12 21:28:39,449 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:28:39,584 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:28:39,606 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='services', source='auto'
+ 2025-12-12 21:28:39,834 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-12 21:28:41,021 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:28:41,021 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:28:41,338 - app - INFO - API Messages success: Response sent for session 8726b155-4f50-492d-957f-e3be8ee49f55
+ 2025-12-12 21:34:23,801 - app - INFO - API Messages request from 127.0.0.1: message='what's your services ...', lang='english', session='554fd3bc-a9b3-459f-90b2-9d5b7747e574'
+ 2025-12-12 21:34:25,709 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='what's your services ', language='english', session_id='554fd3bc-a9b3-459f-90b2-9d5b7747e574'
+ 2025-12-12 21:34:26,007 - app - INFO - Retrieved 0 history messages for session 554fd3bc-a9b3-459f-90b2-9d5b7747e574
+ 2025-12-12 21:34:27,883 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:34:28,000 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='services', source='auto'
+ 2025-12-12 21:34:28,247 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-12 21:34:28,251 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:34:30,625 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:34:30,634 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:34:30,993 - app - INFO - API Messages success: Response sent for session 554fd3bc-a9b3-459f-90b2-9d5b7747e574
+ 2025-12-12 21:39:52,643 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-12 21:47:19,972 - app - INFO - API Messages request from 127.0.0.1: message='hi...', lang='english', session='cccfb5c9-27e3-47f3-bc5e-c3853da41f97'
+ 2025-12-12 21:47:25,863 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='hi', language='english', session_id='cccfb5c9-27e3-47f3-bc5e-c3853da41f97'
+ 2025-12-12 21:47:26,260 - app - INFO - Retrieved 0 history messages for session cccfb5c9-27e3-47f3-bc5e-c3853da41f97
+ 2025-12-12 21:47:36,960 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:47:37,126 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:47:37,133 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:47:37,540 - app - INFO - API Messages success: Response sent for session cccfb5c9-27e3-47f3-bc5e-c3853da41f97
+ 2025-12-12 21:47:49,064 - app - INFO - API Messages request from 127.0.0.1: message='what is launchlab do?...', lang='english', session='cccfb5c9-27e3-47f3-bc5e-c3853da41f97'
+ 2025-12-12 21:47:49,372 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='what is launchlab do?', language='english', session_id='cccfb5c9-27e3-47f3-bc5e-c3853da41f97'
+ 2025-12-12 21:47:50,463 - app - INFO - Retrieved 0 history messages for session cccfb5c9-27e3-47f3-bc5e-c3853da41f97
+ 2025-12-12 21:47:52,082 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:47:52,136 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='What does Launchlabs do?', source='auto'
+ 2025-12-12 21:47:52,557 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-12 21:47:52,601 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:47:53,967 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-12 21:47:53,973 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
+ 2025-12-12 21:47:54,361 - app - INFO - API Messages success: Response sent for session cccfb5c9-27e3-47f3-bc5e-c3853da41f97
+ 2025-12-13 17:58:09,767 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-13 17:58:54,876 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
+ 2025-12-13 18:00:52,137 - app - INFO - Created new session for streaming chat: 4d62b8ce-059b-4cd7-8caf-9d0a6be1c3df
+ 2025-12-13 18:00:52,167 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=4d62b8ce-059b-4cd7-8caf-9d0a6be1c3df
+ 2025-12-13 18:00:54,575 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='4d62b8ce-059b-4cd7-8caf-9d0a6be1c3df'
+ 2025-12-13 18:00:55,662 - app - INFO - Retrieved 1 history messages for session 4d62b8ce-059b-4cd7-8caf-9d0a6be1c3df
+ 2025-12-13 18:01:07,932 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 18:01:08,609 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 18:01:08,717 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 19:49:33,022 - app - INFO - Created new session for streaming chat: a9967ff1-0ec8-4cfc-b511-bbe7073ff78f
+ 2025-12-13 19:49:33,032 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=a9967ff1-0ec8-4cfc-b511-bbe7073ff78f
+ 2025-12-13 19:49:34,422 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='a9967ff1-0ec8-4cfc-b511-bbe7073ff78f'
+ 2025-12-13 19:49:34,772 - app - INFO - Retrieved 1 history messages for session a9967ff1-0ec8-4cfc-b511-bbe7073ff78f
+ 2025-12-13 19:49:36,132 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 19:49:37,252 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 19:49:37,293 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 19:56:02,801 - app - INFO - Created new session for streaming chat: f9cc58a8-3df5-49d1-bf2a-aab84080db3f
+ 2025-12-13 19:56:02,809 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=f9cc58a8-3df5-49d1-bf2a-aab84080db3f
+ 2025-12-13 19:56:04,213 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='f9cc58a8-3df5-49d1-bf2a-aab84080db3f'
+ 2025-12-13 19:56:04,535 - app - INFO - Retrieved 1 history messages for session f9cc58a8-3df5-49d1-bf2a-aab84080db3f
+ 2025-12-13 19:56:06,309 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 19:56:06,691 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 19:56:06,723 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:00:36,507 - app - INFO - Created new session for streaming chat: d6249e76-6791-44ac-b81b-3664ee30c4da
+ 2025-12-13 20:00:36,523 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=d6249e76-6791-44ac-b81b-3664ee30c4da
+ 2025-12-13 20:00:38,000 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='d6249e76-6791-44ac-b81b-3664ee30c4da'
+ 2025-12-13 20:00:38,654 - app - INFO - Retrieved 1 history messages for session d6249e76-6791-44ac-b81b-3664ee30c4da
+ 2025-12-13 20:00:40,932 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:00:41,351 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:00:41,518 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:01:45,539 - app - INFO - Created new session for streaming chat: 1d1bae9c-776b-4fb9-804d-cad59a5eecc2
+ 2025-12-13 20:01:45,540 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=1d1bae9c-776b-4fb9-804d-cad59a5eecc2
+ 2025-12-13 20:01:46,938 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='1d1bae9c-776b-4fb9-804d-cad59a5eecc2'
+ 2025-12-13 20:01:47,944 - app - INFO - Retrieved 1 history messages for session 1d1bae9c-776b-4fb9-804d-cad59a5eecc2
+ 2025-12-13 20:01:49,439 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:01:50,072 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:01:50,077 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:04:16,591 - app - INFO - Created new session for streaming chat: fbe67478-be3a-4fba-8998-fc12dbc99e67
+ 2025-12-13 20:04:16,592 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=fbe67478-be3a-4fba-8998-fc12dbc99e67
+ 2025-12-13 20:04:17,974 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='fbe67478-be3a-4fba-8998-fc12dbc99e67'
+ 2025-12-13 20:04:18,279 - app - INFO - Retrieved 1 history messages for session fbe67478-be3a-4fba-8998-fc12dbc99e67
+ 2025-12-13 20:04:19,735 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:04:20,077 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:04:20,085 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:04:55,196 - app - INFO - Created new session for streaming chat: b5c862a5-3073-49db-9083-9ee2ecfb8f62
+ 2025-12-13 20:04:55,196 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=b5c862a5-3073-49db-9083-9ee2ecfb8f62
+ 2025-12-13 20:04:56,577 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='b5c862a5-3073-49db-9083-9ee2ecfb8f62'
+ 2025-12-13 20:04:57,605 - app - INFO - Retrieved 1 history messages for session b5c862a5-3073-49db-9083-9ee2ecfb8f62
+ 2025-12-13 20:04:59,006 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:04:59,759 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:04:59,766 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:07:31,912 - app - INFO - Created new session for streaming chat: 6669bd05-56b6-41c0-9d38-4df7327f2930
+ 2025-12-13 20:07:31,940 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=6669bd05-56b6-41c0-9d38-4df7327f2930
+ 2025-12-13 20:07:34,619 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='6669bd05-56b6-41c0-9d38-4df7327f2930'
+ 2025-12-13 20:07:37,258 - app - INFO - Retrieved 1 history messages for session 6669bd05-56b6-41c0-9d38-4df7327f2930
+ 2025-12-13 20:07:40,018 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:07:41,051 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:07:41,373 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:08:07,914 - app - INFO - Created new session for streaming chat: 57102314-7791-4efd-b27e-fb1643719e9e
+ 2025-12-13 20:08:07,926 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=57102314-7791-4efd-b27e-fb1643719e9e
+ 2025-12-13 20:08:09,306 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='57102314-7791-4efd-b27e-fb1643719e9e'
+ 2025-12-13 20:08:10,309 - app - INFO - Retrieved 1 history messages for session 57102314-7791-4efd-b27e-fb1643719e9e
+ 2025-12-13 20:08:12,726 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:08:13,013 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:08:13,024 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:08:26,498 - app - INFO - Created new session for streaming chat: cad6dab7-2dfd-47a3-8026-7993b14c740a
+ 2025-12-13 20:08:26,501 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=cad6dab7-2dfd-47a3-8026-7993b14c740a
+ 2025-12-13 20:08:27,883 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='cad6dab7-2dfd-47a3-8026-7993b14c740a'
+ 2025-12-13 20:08:28,959 - app - INFO - Retrieved 1 history messages for session cad6dab7-2dfd-47a3-8026-7993b14c740a
+ 2025-12-13 20:08:30,382 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:08:30,864 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:08:30,960 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 20:08:37,058 - app - INFO - Created new session for streaming chat: eebfff12-e55c-40ee-a3b9-2e8a6370ebf5
+ 2025-12-13 20:08:37,058 - app - INFO - Stream request from 127.0.0.1: language=english, message=why..., session_id=eebfff12-e55c-40ee-a3b9-2e8a6370ebf5
+ 2025-12-13 20:08:38,480 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='why', language='english', session_id='eebfff12-e55c-40ee-a3b9-2e8a6370ebf5'
+ 2025-12-13 20:08:39,541 - app - INFO - Retrieved 1 history messages for session eebfff12-e55c-40ee-a3b9-2e8a6370ebf5
+ 2025-12-13 20:08:41,739 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 20:08:41,795 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
224
+ 2025-12-13 20:08:41,805 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
225
+ 2025-12-13 20:16:27,742 - app - INFO - Created new session for streaming chat: a2d4cc50-c5ee-444a-93a3-928cd835e7bf
226
+ 2025-12-13 20:16:27,762 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=a2d4cc50-c5ee-444a-93a3-928cd835e7bf
227
+ 2025-12-13 20:16:29,128 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='a2d4cc50-c5ee-444a-93a3-928cd835e7bf'
228
+ 2025-12-13 20:16:29,436 - app - INFO - Retrieved 1 history messages for session a2d4cc50-c5ee-444a-93a3-928cd835e7bf
229
+ 2025-12-13 20:16:31,643 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
230
+ 2025-12-13 20:16:31,681 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
231
+ 2025-12-13 20:16:31,731 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
232
+ 2025-12-13 20:16:40,927 - app - INFO - Created new session for streaming chat: 39aec5c7-d1e5-45e8-b79d-91564a179978
233
+ 2025-12-13 20:16:40,928 - app - INFO - Stream request from 127.0.0.1: language=english, message=what is launchlab do?..., session_id=39aec5c7-d1e5-45e8-b79d-91564a179978
234
+ 2025-12-13 20:16:42,302 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='what is launchlab do?', language='english', session_id='39aec5c7-d1e5-45e8-b79d-91564a179978'
235
+ 2025-12-13 20:16:43,271 - app - INFO - Retrieved 1 history messages for session 39aec5c7-d1e5-45e8-b79d-91564a179978
236
+ 2025-12-13 20:16:44,481 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
237
+ 2025-12-13 20:16:44,674 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='what does Launchlabs do', source='auto'
238
+ 2025-12-13 20:16:45,892 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
239
+ 2025-12-13 20:16:45,906 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
240
+ 2025-12-13 20:16:47,055 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
241
+ 2025-12-13 20:16:47,366 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
242
+ 2025-12-13 20:49:43,952 - app - INFO - Created new session for streaming chat: a7b193e2-898e-400e-b1f6-b91e2e61df5f
243
+ 2025-12-13 20:49:43,969 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=a7b193e2-898e-400e-b1f6-b91e2e61df5f
244
+ 2025-12-13 20:49:45,365 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='a7b193e2-898e-400e-b1f6-b91e2e61df5f'
245
+ 2025-12-13 20:49:45,669 - app - INFO - Retrieved 1 history messages for session a7b193e2-898e-400e-b1f6-b91e2e61df5f
246
+ 2025-12-13 20:49:47,098 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
247
+ 2025-12-13 20:49:47,862 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
248
+ 2025-12-13 20:49:47,880 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
249
+ 2025-12-13 20:50:00,883 - app - INFO - Created new session for streaming chat: a8d7c15f-20b0-4563-a2a7-d9a6315682e8
250
+ 2025-12-13 20:50:00,883 - app - INFO - Stream request from 127.0.0.1: language=english, message=can i meet wih any senior..., session_id=a8d7c15f-20b0-4563-a2a7-d9a6315682e8
251
+ 2025-12-13 20:50:02,293 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='can i meet wih any senior', language='english', session_id='a8d7c15f-20b0-4563-a2a7-d9a6315682e8'
252
+ 2025-12-13 20:50:03,331 - app - INFO - Retrieved 1 history messages for session a8d7c15f-20b0-4563-a2a7-d9a6315682e8
253
+ 2025-12-13 20:50:04,743 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
254
+ 2025-12-13 20:50:04,980 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
255
+ 2025-12-13 20:50:04,986 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
256
+ 2025-12-13 20:50:14,268 - app - INFO - Created new session for streaming chat: 294d1fbc-ae2d-4231-8a80-d70b8280ff01
257
+ 2025-12-13 20:50:14,269 - app - INFO - Stream request from 127.0.0.1: language=english, message=plz connect..., session_id=294d1fbc-ae2d-4231-8a80-d70b8280ff01
258
+ 2025-12-13 20:50:15,663 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='plz connect', language='english', session_id='294d1fbc-ae2d-4231-8a80-d70b8280ff01'
259
+ 2025-12-13 20:50:16,687 - app - INFO - Retrieved 1 history messages for session 294d1fbc-ae2d-4231-8a80-d70b8280ff01
260
+ 2025-12-13 20:50:18,156 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
261
+ 2025-12-13 20:50:18,443 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
262
+ 2025-12-13 20:50:18,486 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
263
+ 2025-12-13 20:50:31,031 - app - INFO - Created new session for streaming chat: 22f8dff2-9fc5-4fb7-9560-23f9f973bc18
264
+ 2025-12-13 20:50:31,034 - app - INFO - Stream request from 127.0.0.1: language=english, message=in services..., session_id=22f8dff2-9fc5-4fb7-9560-23f9f973bc18
265
+ 2025-12-13 20:50:32,422 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='in services', language='english', session_id='22f8dff2-9fc5-4fb7-9560-23f9f973bc18'
266
+ 2025-12-13 20:50:33,477 - app - INFO - Retrieved 1 history messages for session 22f8dff2-9fc5-4fb7-9560-23f9f973bc18
267
+ 2025-12-13 20:50:34,949 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
268
+ 2025-12-13 20:50:34,981 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='services', source='auto'
269
+ 2025-12-13 20:50:35,167 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
270
+ 2025-12-13 20:50:35,985 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
271
+ 2025-12-13 20:50:36,399 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
272
+ 2025-12-13 20:50:36,541 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
273
+ 2025-12-13 21:05:21,233 - app - INFO - Created new session for streaming chat: 5e5ccc76-3136-48eb-8d41-b2fa418963eb
274
+ 2025-12-13 21:05:21,257 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=5e5ccc76-3136-48eb-8d41-b2fa418963eb
275
+ 2025-12-13 21:05:22,657 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='5e5ccc76-3136-48eb-8d41-b2fa418963eb'
276
+ 2025-12-13 21:05:22,948 - app - INFO - Retrieved 1 history messages for session 5e5ccc76-3136-48eb-8d41-b2fa418963eb
277
+ 2025-12-13 21:05:24,821 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
278
+ 2025-12-13 21:05:25,164 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
279
+ 2025-12-13 21:05:25,185 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
280
+ 2025-12-13 21:10:29,994 - app - INFO - Created new session for streaming chat: 15a154fc-21dd-4b4c-8545-df72136b3b81
281
+ 2025-12-13 21:10:30,032 - app - INFO - Stream request from 127.0.0.1: language=english, message=yes..., session_id=15a154fc-21dd-4b4c-8545-df72136b3b81
282
+ 2025-12-13 21:10:31,459 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='yes', language='english', session_id='15a154fc-21dd-4b4c-8545-df72136b3b81'
283
+ 2025-12-13 21:10:32,386 - app - INFO - Retrieved 1 history messages for session 15a154fc-21dd-4b4c-8545-df72136b3b81
284
+ 2025-12-13 21:10:34,385 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
285
+ 2025-12-13 21:10:34,498 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
286
+ 2025-12-13 21:10:34,505 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
287
+ 2025-12-13 21:10:43,359 - app - INFO - Created new session for streaming chat: aea3c0ba-b30c-43b6-b507-5085521dc38f
288
+ 2025-12-13 21:10:43,362 - app - INFO - Stream request from 127.0.0.1: language=english, message=yes..., session_id=aea3c0ba-b30c-43b6-b507-5085521dc38f
289
+ 2025-12-13 21:10:44,761 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='yes', language='english', session_id='aea3c0ba-b30c-43b6-b507-5085521dc38f'
290
+ 2025-12-13 21:10:45,752 - app - INFO - Retrieved 1 history messages for session aea3c0ba-b30c-43b6-b507-5085521dc38f
291
+ 2025-12-13 21:10:47,639 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
292
+ 2025-12-13 21:10:47,837 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
293
+ 2025-12-13 21:10:47,846 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
294
+ 2025-12-13 21:10:59,669 - app - INFO - Created new session for streaming chat: cd95232e-b3c7-4eaa-8d1c-330c90962bb2
295
+ 2025-12-13 21:10:59,669 - app - INFO - Stream request from 127.0.0.1: language=english, message=make a home page..., session_id=cd95232e-b3c7-4eaa-8d1c-330c90962bb2
296
+ 2025-12-13 21:11:01,043 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='make a home page', language='english', session_id='cd95232e-b3c7-4eaa-8d1c-330c90962bb2'
297
+ 2025-12-13 21:11:02,057 - app - INFO - Retrieved 1 history messages for session cd95232e-b3c7-4eaa-8d1c-330c90962bb2
298
+ 2025-12-13 21:11:03,994 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
299
+ 2025-12-13 21:11:05,429 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
300
+ 2025-12-13 21:11:05,534 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='website creation services', source='auto'
301
+ 2025-12-13 21:11:05,773 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
302
+ 2025-12-13 21:11:06,718 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
303
+ 2025-12-13 21:11:06,929 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
304
+ 2025-12-13 21:12:07,949 - app - INFO - Created new session for streaming chat: e14f3722-cdba-4c27-a4db-e3ffade1253a
+ 2025-12-13 21:12:07,950 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=Hva gjør LaunchLab?..., session_id=e14f3722-cdba-4c27-a4db-e3ffade1253a
+ 2025-12-13 21:12:09,323 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hva gjør LaunchLab?', language='norwegian', session_id='e14f3722-cdba-4c27-a4db-e3ffade1253a'
+ 2025-12-13 21:12:10,365 - app - INFO - Retrieved 1 history messages for session e14f3722-cdba-4c27-a4db-e3ffade1253a
+ 2025-12-13 21:12:12,032 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:12:12,426 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:12:12,431 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='services', source='auto'
+ 2025-12-13 21:12:12,566 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-13 21:12:13,485 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:12:13,731 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:51:38,277 - app - INFO - Created new session for streaming chat: 8d7d0e11-09c4-41a4-a5b4-a918b0f02213
+ 2025-12-13 21:51:38,292 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=8d7d0e11-09c4-41a4-a5b4-a918b0f02213
+ 2025-12-13 21:51:39,672 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='8d7d0e11-09c4-41a4-a5b4-a918b0f02213'
+ 2025-12-13 21:51:40,018 - app - INFO - Retrieved 1 history messages for session 8d7d0e11-09c4-41a4-a5b4-a918b0f02213
+ 2025-12-13 21:51:42,199 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:51:42,207 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:51:42,342 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:52:10,692 - app - INFO - Created new session for streaming chat: 08a126e6-ff9a-4262-b5d5-2505f00224e3
+ 2025-12-13 21:52:10,692 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=Hva gjør LaunchLab?..., session_id=08a126e6-ff9a-4262-b5d5-2505f00224e3
+ 2025-12-13 21:52:12,087 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hva gjør LaunchLab?', language='norwegian', session_id='08a126e6-ff9a-4262-b5d5-2505f00224e3'
+ 2025-12-13 21:52:13,102 - app - INFO - Retrieved 1 history messages for session 08a126e6-ff9a-4262-b5d5-2505f00224e3
+ 2025-12-13 21:52:14,692 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:14,719 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='Hva gjør Launchlabs?', source='auto'
+ 2025-12-13 21:52:14,979 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-13 21:52:15,132 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:16,686 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:17,067 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:52:21,252 - app - INFO - Created new session for streaming chat: cd582092-429c-4d79-8bf8-e68a9de3d4ab
+ 2025-12-13 21:52:21,252 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=Hva gjør LaunchLab?..., session_id=cd582092-429c-4d79-8bf8-e68a9de3d4ab
+ 2025-12-13 21:52:22,672 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hva gjør LaunchLab?', language='norwegian', session_id='cd582092-429c-4d79-8bf8-e68a9de3d4ab'
+ 2025-12-13 21:52:23,682 - app - INFO - Retrieved 1 history messages for session cd582092-429c-4d79-8bf8-e68a9de3d4ab
+ 2025-12-13 21:52:26,042 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:26,052 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='tjenester', source='auto'
+ 2025-12-13 21:52:26,321 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-13 21:52:26,321 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:27,701 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:28,162 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:52:32,001 - app - INFO - Created new session for streaming chat: bbb4693a-2d76-4863-baff-38eebf4a0ea8
+ 2025-12-13 21:52:32,001 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=Hva gjør LaunchLab?..., session_id=bbb4693a-2d76-4863-baff-38eebf4a0ea8
+ 2025-12-13 21:52:33,386 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hva gjør LaunchLab?', language='norwegian', session_id='bbb4693a-2d76-4863-baff-38eebf4a0ea8'
+ 2025-12-13 21:52:34,421 - app - INFO - Retrieved 1 history messages for session bbb4693a-2d76-4863-baff-38eebf4a0ea8
+ 2025-12-13 21:52:35,836 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:35,847 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='Hva gjør Launchlabs', source='auto'
+ 2025-12-13 21:52:36,083 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
+ 2025-12-13 21:52:36,357 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:37,256 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:37,632 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:52:51,668 - app - INFO - Created new session for streaming chat: 4512e28e-2ea0-457a-b1c1-6176cecbf8b3
+ 2025-12-13 21:52:51,668 - app - INFO - Stream request from 127.0.0.1: language=english, message=hello..., session_id=4512e28e-2ea0-457a-b1c1-6176cecbf8b3
+ 2025-12-13 21:52:53,022 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hello', language='english', session_id='4512e28e-2ea0-457a-b1c1-6176cecbf8b3'
+ 2025-12-13 21:52:54,067 - app - INFO - Retrieved 1 history messages for session 4512e28e-2ea0-457a-b1c1-6176cecbf8b3
+ 2025-12-13 21:52:55,641 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:55,842 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
+ 2025-12-13 21:52:56,222 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
+ 2025-12-13 21:56:22,840 - app - INFO - Created new session for streaming chat: a829f9e8-47b6-4727-bcba-5c126d56e430
+ 2025-12-13 21:56:22,841 - app - INFO - Stream request from 127.0.0.1: language=english, message=Hello..., session_id=a829f9e8-47b6-4727-bcba-5c126d56e430
+ 2025-12-13 21:56:24,262 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hello', language='english', session_id='a829f9e8-47b6-4727-bcba-5c126d56e430'
+ 2025-12-13 21:56:24,618 - app - INFO - Retrieved 1 history messages for session a829f9e8-47b6-4727-bcba-5c126d56e430
+ 2025-12-13 21:56:44,296 - openai._base_client - INFO - Retrying request to /chat/completions in 0.434005 seconds
+ 2025-12-13 21:56:44,368 - openai._base_client - INFO - Retrying request to /chat/completions in 0.480124 seconds
+ 2025-12-13 21:57:04,010 - openai._base_client - INFO - Retrying request to /chat/completions in 0.901942 seconds
+ 2025-12-13 21:57:30,947 - openai._base_client - INFO - Retrying request to /chat/completions in 0.939832 seconds
+ 2025-12-13 21:57:34,787 - app - WARNING - Streaming failed, falling back to non-streaming: Connection error.
+ 2025-12-13 21:57:35,041 - openai._base_client - INFO - Retrying request to /chat/completions in 0.383602 seconds
+ 2025-12-13 21:57:35,153 - openai._base_client - INFO - Retrying request to /chat/completions in 0.434880 seconds
+ 2025-12-13 21:57:35,559 - openai._base_client - INFO - Retrying request to /chat/completions in 0.857617 seconds
+ 2025-12-13 21:57:35,596 - openai._base_client - INFO - Retrying request to /chat/completions in 0.929682 seconds
+ 2025-12-13 21:57:36,421 - app - ERROR - Fallback also failed: Connection error.
+ Traceback (most recent call last):
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_transports\default.py", line 101, in map_httpcore_exceptions
+ yield
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_transports\default.py", line 394, in handle_async_request
+ resp = await self._pool.handle_async_request(req)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_async\connection_pool.py", line 256, in handle_async_request
+ raise exc from None
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_async\connection_pool.py", line 236, in handle_async_request
+ response = await connection.handle_async_request(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_async\connection.py", line 101, in handle_async_request
+ raise exc
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_async\connection.py", line 78, in handle_async_request
+ stream = await self._connect(request)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_async\connection.py", line 124, in _connect
+ stream = await self._network_backend.connect_tcp(**kwargs)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_backends\auto.py", line 31, in connect_tcp
+ return await self._backend.connect_tcp(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_backends\anyio.py", line 113, in connect_tcp
+ with map_exceptions(exc_map):
+ ^^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\AGA Computer\AppData\Roaming\uv\python\cpython-3.12.11-windows-x86_64-none\Lib\contextlib.py", line 158, in __exit__
+ self.gen.throw(value)
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
+ raise to_exc(exc) from exc
+ httpcore.ConnectError: [Errno 11001] getaddrinfo failed
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "E:\innocribe\backend\.venv\Lib\site-packages\openai\_base_client.py", line 1529, in request
+ response = await self._client.send(
+ ^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_client.py", line 1629, in send
+ response = await self._send_handling_auth(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_client.py", line 1657, in _send_handling_auth
+ response = await self._send_handling_redirects(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_client.py", line 1694, in _send_handling_redirects
+ response = await self._send_single_request(request)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_client.py", line 1730, in _send_single_request
+ response = await transport.handle_async_request(request)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_transports\default.py", line 393, in handle_async_request
+ with map_httpcore_exceptions():
+ ^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "C:\Users\AGA Computer\AppData\Roaming\uv\python\cpython-3.12.11-windows-x86_64-none\Lib\contextlib.py", line 158, in __exit__
+ self.gen.throw(value)
+ File "E:\innocribe\backend\.venv\Lib\site-packages\httpx\_transports\default.py", line 118, in map_httpcore_exceptions
+ raise mapped_exc(message) from exc
+ httpx.ConnectError: [Errno 11001] getaddrinfo failed
+
+ The above exception was the direct cause of the following exception:
+
+ Traceback (most recent call last):
+ File "E:\innocribe\backend\app.py", line 1026, in generate_stream
+ fallback_response = await Runner.run(
+ ^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\run.py", line 358, in run
+ return await runner.run(
+ ^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\run.py", line 638, in run
+ input_guardrail_results, turn_result = await asyncio.gather(
+ ^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\run.py", line 1550, in _run_single_turn
+ new_response = await cls._get_new_response(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\run.py", line 1807, in _get_new_response
+ new_response = await model.get_response(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\models\openai_chatcompletions.py", line 68, in get_response
+ response = await self._fetch_response(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\agents\models\openai_chatcompletions.py", line 293, in _fetch_response
+ ret = await self._get_client().chat.completions.create(
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\openai\resources\chat\completions\completions.py", line 2672, in create
+ return await self._post(
+ ^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\openai\_base_client.py", line 1794, in post
+ return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "E:\innocribe\backend\.venv\Lib\site-packages\openai\_base_client.py", line 1561, in request
+ raise APIConnectionError(request=request) from err
+ openai.APIConnectionError: Connection error.
+ 2025-12-13 21:58:29,619 - app - INFO - Created new session for streaming chat: 4ab5764d-1bf1-4ae0-a20b-3b94b4eab7e5
464
+ 2025-12-13 21:58:29,653 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=4ab5764d-1bf1-4ae0-a20b-3b94b4eab7e5
465
+ 2025-12-13 21:58:30,238 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='4ab5764d-1bf1-4ae0-a20b-3b94b4eab7e5'
466
+ 2025-12-13 21:58:31,305 - app - INFO - Retrieved 0 history messages for session 4ab5764d-1bf1-4ae0-a20b-3b94b4eab7e5
467
+ 2025-12-13 21:58:33,248 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
468
+ 2025-12-13 21:58:33,288 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
469
+ 2025-12-13 21:58:33,295 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
470
+ 2025-12-13 21:59:02,306 - app - INFO - Created new session for streaming chat: d1dba2a3-e581-4f60-ab11-49cbf053a841
471
+ 2025-12-13 21:59:02,313 - app - INFO - Stream request from 127.0.0.1: language=english, message=what is launchLab do?..., session_id=d1dba2a3-e581-4f60-ab11-49cbf053a841
472
+ 2025-12-13 21:59:03,825 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='what is launchLab do?', language='english', session_id='d1dba2a3-e581-4f60-ab11-49cbf053a841'
473
+ 2025-12-13 21:59:04,785 - app - INFO - Retrieved 1 history messages for session d1dba2a3-e581-4f60-ab11-49cbf053a841
474
+ 2025-12-13 21:59:06,541 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
475
+ 2025-12-13 21:59:06,556 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='services', source='auto'
476
+ 2025-12-13 21:59:06,796 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
477
+ 2025-12-13 21:59:06,803 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
478
+ 2025-12-13 21:59:07,905 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
479
+ 2025-12-13 21:59:08,033 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
480
+ 2025-12-13 21:59:24,895 - app - INFO - Created new session for streaming chat: a3779f37-c9e4-4dee-92c1-65bf55ddc090
481
+ 2025-12-13 21:59:24,903 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=a3779f37-c9e4-4dee-92c1-65bf55ddc090
482
+ 2025-12-13 21:59:26,318 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='a3779f37-c9e4-4dee-92c1-65bf55ddc090'
483
+ 2025-12-13 21:59:27,269 - app - INFO - Retrieved 1 history messages for session a3779f37-c9e4-4dee-92c1-65bf55ddc090
484
+ 2025-12-13 21:59:28,541 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
485
+ 2025-12-13 21:59:29,150 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
486
+ 2025-12-13 21:59:29,159 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
487
+ 2025-12-13 22:31:00,936 - app - INFO - Created new session for chat: 29ac8e97-a073-4abe-869b-86a05bd0a777
488
+ 2025-12-13 22:31:00,958 - app - INFO - Chat request from 127.0.0.1: language=norwegian, message=hi..., session_id=29ac8e97-a073-4abe-869b-86a05bd0a777
489
+ 2025-12-13 22:31:01,651 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='hi', language='norwegian', session_id='29ac8e97-a073-4abe-869b-86a05bd0a777'
490
+ 2025-12-13 22:31:02,028 - app - INFO - Retrieved 1 history messages for session 29ac8e97-a073-4abe-869b-86a05bd0a777
491
+ 2025-12-13 22:31:03,668 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
492
+ 2025-12-13 22:31:04,129 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
493
+ 2025-12-13 22:31:04,146 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
494
+ 2025-12-13 22:31:04,852 - app - INFO - Chat response generated successfully in norwegian for session 29ac8e97-a073-4abe-869b-86a05bd0a777
495
+ 2025-12-13 22:31:19,042 - app - INFO - Created new session for chat: fcdaa156-095e-4a9f-b904-a34a89be703b
496
+ 2025-12-13 22:31:19,043 - app - INFO - Chat request from 127.0.0.1: language=norwegian, message=hi..., session_id=fcdaa156-095e-4a9f-b904-a34a89be703b
497
+ 2025-12-13 22:31:20,504 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='hi', language='norwegian', session_id='fcdaa156-095e-4a9f-b904-a34a89be703b'
498
+ 2025-12-13 22:31:20,816 - app - INFO - Retrieved 1 history messages for session fcdaa156-095e-4a9f-b904-a34a89be703b
499
+ 2025-12-13 22:31:22,857 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
500
+ 2025-12-13 22:31:22,954 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
501
+ 2025-12-13 22:31:22,994 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
502
+ 2025-12-13 22:31:23,625 - app - INFO - Chat response generated successfully in norwegian for session fcdaa156-095e-4a9f-b904-a34a89be703b
503
+ 2025-12-13 22:31:45,794 - app - INFO - Created new session for chat: 9306b9fa-bce9-4a0f-ac41-6b3acef83fcc
504
+ 2025-12-13 22:31:45,839 - app - INFO - Chat request from 127.0.0.1: language=norwegian, message=hi..., session_id=9306b9fa-bce9-4a0f-ac41-6b3acef83fcc
505
+ 2025-12-13 22:31:46,867 - app - INFO - AGENT CALL: query_launchlabs_bot called with message='hi', language='norwegian', session_id='9306b9fa-bce9-4a0f-ac41-6b3acef83fcc'
506
+ 2025-12-13 22:31:47,308 - app - INFO - Retrieved 1 history messages for session 9306b9fa-bce9-4a0f-ac41-6b3acef83fcc
507
+ 2025-12-13 22:31:48,879 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
508
+ 2025-12-13 22:31:49,392 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
509
+ 2025-12-13 22:31:49,440 - app - INFO - AGENT RESULT: query_launchlabs_bot completed successfully
510
+ 2025-12-13 22:31:50,707 - app - INFO - Chat response generated successfully in norwegian for session 9306b9fa-bce9-4a0f-ac41-6b3acef83fcc
511
+ 2025-12-13 22:32:01,262 - app - INFO - Created new session for streaming chat: d58a992b-fa6a-4ea9-bef3-71c21689f00e
512
+ 2025-12-13 22:32:01,263 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=hi..., session_id=d58a992b-fa6a-4ea9-bef3-71c21689f00e
513
+ 2025-12-13 22:32:01,849 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='norwegian', session_id='d58a992b-fa6a-4ea9-bef3-71c21689f00e'
514
+ 2025-12-13 22:32:02,147 - app - INFO - Retrieved 1 history messages for session d58a992b-fa6a-4ea9-bef3-71c21689f00e
515
+ 2025-12-13 22:32:04,036 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
516
+ 2025-12-13 22:32:04,158 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
517
+ 2025-12-13 22:32:04,236 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
518
+ 2025-12-13 22:32:20,830 - app - INFO - Created new session for streaming chat: a2ab9f8a-c067-4f2b-a45d-ab76aee36aba
519
+ 2025-12-13 22:32:20,831 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=Hva gjør LaunchLab?..., session_id=a2ab9f8a-c067-4f2b-a45d-ab76aee36aba
520
+ 2025-12-13 22:32:21,424 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='Hva gjør LaunchLab?', language='norwegian', session_id='a2ab9f8a-c067-4f2b-a45d-ab76aee36aba'
521
+ 2025-12-13 22:32:21,708 - app - INFO - Retrieved 1 history messages for session a2ab9f8a-c067-4f2b-a45d-ab76aee36aba
522
+ 2025-12-13 22:32:23,790 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
523
+ 2025-12-13 22:32:23,895 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
524
+ 2025-12-13 22:32:23,968 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='Launchlabs tjenester', source='auto'
525
+ 2025-12-13 22:32:24,343 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
526
+ 2025-12-13 22:32:26,319 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
527
+ 2025-12-13 22:32:26,721 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
528
+ 2025-12-14 10:46:46,302 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
529
+ 2025-12-14 10:52:27,109 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
530
+ 2025-12-14 10:52:37,848 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
531
+ 2025-12-14 10:53:06,575 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
532
+ 2025-12-14 10:56:51,475 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
533
+ 2025-12-14 10:57:42,806 - app - INFO - Created new session for streaming chat: 64f81d52-cbd7-4301-9fca-c4725df4cc4e
534
+ 2025-12-14 10:57:42,815 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=hi..., session_id=64f81d52-cbd7-4301-9fca-c4725df4cc4e
535
+ 2025-12-14 10:57:44,234 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='norwegian', session_id='64f81d52-cbd7-4301-9fca-c4725df4cc4e'
536
+ 2025-12-14 10:57:44,614 - app - INFO - Retrieved 1 history messages for session 64f81d52-cbd7-4301-9fca-c4725df4cc4e
537
+ 2025-12-14 10:57:49,332 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
538
+ 2025-12-14 10:57:49,899 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
539
+ 2025-12-14 10:57:50,001 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
540
+ 2025-12-14 10:58:01,114 - app - INFO - Created new session for streaming chat: b6d85bf6-bd4a-4235-ad8c-266057712bd0
541
+ 2025-12-14 10:58:01,131 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=what is launchlab do..., session_id=b6d85bf6-bd4a-4235-ad8c-266057712bd0
542
+ 2025-12-14 10:58:02,644 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='what is launchlab do', language='norwegian', session_id='b6d85bf6-bd4a-4235-ad8c-266057712bd0'
543
+ 2025-12-14 10:58:03,677 - app - INFO - Retrieved 1 history messages for session b6d85bf6-bd4a-4235-ad8c-266057712bd0
544
+ 2025-12-14 10:58:05,471 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
545
+ 2025-12-14 10:58:05,512 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='Hva gjør Launchlabs?', source='auto'
546
+ 2025-12-14 10:58:05,735 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
547
+ 2025-12-14 10:58:05,742 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
548
+ 2025-12-14 10:58:06,569 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
549
+ 2025-12-14 10:58:06,949 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
550
+ 2025-12-14 10:58:26,000 - app - INFO - Created new session for streaming chat: d573eef6-14c9-479d-8a52-cec9898e7788
551
+ 2025-12-14 10:58:26,021 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=d573eef6-14c9-479d-8a52-cec9898e7788
552
+ 2025-12-14 10:58:27,513 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='d573eef6-14c9-479d-8a52-cec9898e7788'
553
+ 2025-12-14 10:58:28,447 - app - INFO - Retrieved 1 history messages for session d573eef6-14c9-479d-8a52-cec9898e7788
554
+ 2025-12-14 10:58:30,236 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
555
+ 2025-12-14 10:58:30,282 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
556
+ 2025-12-14 10:58:30,330 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
557
+ 2025-12-14 11:03:23,921 - app - INFO - Created new session for streaming chat: 3fe97099-e5f0-422f-9952-f4b9a5bc6559
558
+ 2025-12-14 11:03:24,019 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=3fe97099-e5f0-422f-9952-f4b9a5bc6559
559
+ 2025-12-14 11:03:25,615 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='3fe97099-e5f0-422f-9952-f4b9a5bc6559'
560
+ 2025-12-14 11:03:26,029 - app - INFO - Retrieved 1 history messages for session 3fe97099-e5f0-422f-9952-f4b9a5bc6559
561
+ 2025-12-14 11:03:28,235 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
562
+ 2025-12-14 11:03:28,385 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
563
+ 2025-12-14 11:03:28,502 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
564
+ 2025-12-14 11:04:43,379 - app - INFO - Created new session for streaming chat: c6c3ca7a-d757-4365-99cb-60fc35c50fb5
565
+ 2025-12-14 11:04:43,387 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=c6c3ca7a-d757-4365-99cb-60fc35c50fb5
566
+ 2025-12-14 11:04:45,132 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='c6c3ca7a-d757-4365-99cb-60fc35c50fb5'
567
+ 2025-12-14 11:04:45,597 - app - INFO - Retrieved 1 history messages for session c6c3ca7a-d757-4365-99cb-60fc35c50fb5
568
+ 2025-12-14 11:04:47,171 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
569
+ 2025-12-14 11:04:47,601 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
570
+ 2025-12-14 11:04:47,613 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
571
+ 2025-12-14 11:04:59,833 - app - INFO - Created new session for streaming chat: c6278b51-a575-4191-a27c-84b84858b296
572
+ 2025-12-14 11:04:59,835 - app - INFO - Stream request from 127.0.0.1: language=english, message=what is launchlab..., session_id=c6278b51-a575-4191-a27c-84b84858b296
573
+ 2025-12-14 11:05:01,211 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='what is launchlab', language='english', session_id='c6278b51-a575-4191-a27c-84b84858b296'
574
+ 2025-12-14 11:05:02,415 - app - INFO - Retrieved 1 history messages for session c6278b51-a575-4191-a27c-84b84858b296
575
+ 2025-12-14 11:05:03,834 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
576
+ 2025-12-14 11:05:04,323 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
577
+ 2025-12-14 11:05:04,704 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='what Launchlabs does', source='auto'
578
+ 2025-12-14 11:05:05,205 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
579
+ 2025-12-14 11:05:05,966 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
580
+ 2025-12-14 11:05:06,188 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
581
+ 2025-12-14 11:05:18,632 - app - INFO - Created new session for streaming chat: a5615e17-2b7f-4c0b-a27c-50a167680efd
582
+ 2025-12-14 11:05:18,645 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=a5615e17-2b7f-4c0b-a27c-50a167680efd
583
+ 2025-12-14 11:05:20,028 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='a5615e17-2b7f-4c0b-a27c-50a167680efd'
584
+ 2025-12-14 11:05:20,331 - app - INFO - Retrieved 1 history messages for session a5615e17-2b7f-4c0b-a27c-50a167680efd
585
+ 2025-12-14 11:05:21,451 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
586
+ 2025-12-14 11:05:22,132 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
587
+ 2025-12-14 11:05:22,139 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
588
+ 2025-12-14 11:05:28,311 - app - INFO - Created new session for streaming chat: 36f90ca6-fe95-467b-81f0-7adb47937002
589
+ 2025-12-14 11:05:28,316 - app - INFO - Stream request from 127.0.0.1: language=english, message=hello..., session_id=36f90ca6-fe95-467b-81f0-7adb47937002
590
+ 2025-12-14 11:05:29,699 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hello', language='english', session_id='36f90ca6-fe95-467b-81f0-7adb47937002'
591
+ 2025-12-14 11:05:30,731 - app - INFO - Retrieved 1 history messages for session 36f90ca6-fe95-467b-81f0-7adb47937002
592
+ 2025-12-14 11:05:32,001 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
593
+ 2025-12-14 11:05:32,045 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
594
+ 2025-12-14 11:05:32,056 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
595
+ 2025-12-14 11:05:45,504 - app - INFO - Created new session for streaming chat: bfb59ccf-4753-44db-819f-9e7492cd86a7
596
+ 2025-12-14 11:05:45,505 - app - INFO - Stream request from 127.0.0.1: language=english, message=waht is launchlab detail breifly..., session_id=bfb59ccf-4753-44db-819f-9e7492cd86a7
597
+ 2025-12-14 11:05:46,862 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='waht is launchlab detail breifly', language='english', session_id='bfb59ccf-4753-44db-819f-9e7492cd86a7'
598
+ 2025-12-14 11:05:47,863 - app - INFO - Retrieved 1 history messages for session bfb59ccf-4753-44db-819f-9e7492cd86a7
599
+ 2025-12-14 11:05:49,575 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
600
+ 2025-12-14 11:05:49,582 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='about Launchlabs', source='auto'
601
+ 2025-12-14 11:05:49,751 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
602
+ 2025-12-14 11:05:49,760 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
603
+ 2025-12-14 11:05:50,838 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
604
+ 2025-12-14 11:05:51,021 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
605
+ 2025-12-14 11:05:59,942 - app - INFO - Created new session for streaming chat: 1a3dc7bc-eb8d-4748-a301-0cc3e5933e75
606
+ 2025-12-14 11:05:59,942 - app - INFO - Stream request from 127.0.0.1: language=english, message=waht is Launchlabs?..., session_id=1a3dc7bc-eb8d-4748-a301-0cc3e5933e75
607
+ 2025-12-14 11:06:01,307 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='waht is Launchlabs?', language='english', session_id='1a3dc7bc-eb8d-4748-a301-0cc3e5933e75'
608
+ 2025-12-14 11:06:02,318 - app - INFO - Retrieved 1 history messages for session 1a3dc7bc-eb8d-4748-a301-0cc3e5933e75
609
+ 2025-12-14 11:06:03,508 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
610
+ 2025-12-14 11:06:04,141 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
611
+ 2025-12-14 11:06:04,145 - tools.document_reader_tool - INFO - TOOL CALL: read_document_data called with query='What is Launchlabs?', source='auto'
612
+ 2025-12-14 11:06:04,294 - tools.document_reader_tool - INFO - TOOL RESULT: read_document_data found 1 result(s)
613
+ 2025-12-14 11:06:05,123 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
614
+ 2025-12-14 11:06:05,366 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
615
+ 2025-12-14 11:06:22,131 - app - INFO - Created new session for streaming chat: 0734d2a4-445e-476a-a5b1-48eba5ff243a
616
+ 2025-12-14 11:06:22,133 - app - INFO - Stream request from 127.0.0.1: language=english, message=which type of?..., session_id=0734d2a4-445e-476a-a5b1-48eba5ff243a
617
+ 2025-12-14 11:06:23,516 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='which type of?', language='english', session_id='0734d2a4-445e-476a-a5b1-48eba5ff243a'
618
+ 2025-12-14 11:06:24,533 - app - INFO - Retrieved 1 history messages for session 0734d2a4-445e-476a-a5b1-48eba5ff243a
619
+ 2025-12-14 11:06:26,126 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
620
+ 2025-12-14 11:06:26,663 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
621
+ 2025-12-14 11:06:26,674 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
622
+ 2025-12-15 16:56:53,053 - app - INFO - CORS enabled for origins: ['http://localhost:3001', 'http://localhost:8000', 'http://0.0.0.0:7860', 'http://localhost:7860']
623
+ 2025-12-15 16:58:10,715 - app - INFO - Created new session for streaming chat: c880a00c-7d21-411b-a0f4-843a7e473dab
624
+ 2025-12-15 16:58:10,715 - app - INFO - Stream request from 127.0.0.1: language=english, message=hi..., session_id=c880a00c-7d21-411b-a0f4-843a7e473dab
625
+ 2025-12-15 16:58:12,087 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='english', session_id='c880a00c-7d21-411b-a0f4-843a7e473dab'
626
+ 2025-12-15 16:58:12,491 - app - INFO - Retrieved 1 history messages for session c880a00c-7d21-411b-a0f4-843a7e473dab
627
+ 2025-12-15 16:58:15,559 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
628
+ 2025-12-15 16:58:16,046 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
629
+ 2025-12-15 16:58:16,107 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
630
+ 2025-12-15 17:12:28,856 - app - INFO - Created new session for streaming chat: edaa2e9a-4cc3-4581-8ca1-e90c2e842488
631
+ 2025-12-15 17:12:28,858 - app - INFO - Stream request from 127.0.0.1: language=norwegian, message=hi..., session_id=edaa2e9a-4cc3-4581-8ca1-e90c2e842488
632
+ 2025-12-15 17:12:30,258 - app - INFO - AGENT STREAM CALL: query_launchlabs_bot_stream called with message='hi', language='norwegian', session_id='edaa2e9a-4cc3-4581-8ca1-e90c2e842488'
633
+ 2025-12-15 17:12:30,541 - app - INFO - Retrieved 1 history messages for session edaa2e9a-4cc3-4581-8ca1-e90c2e842488
634
+ 2025-12-15 17:12:31,878 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
635
+ 2025-12-15 17:12:32,176 - httpx - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions "HTTP/1.1 200 OK"
636
+ 2025-12-15 17:12:32,178 - app - INFO - AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully
app.py ADDED
@@ -0,0 +1,823 @@
1
+ """
2
+ FastAPI application for Launchlabs Chatbot API
3
+ Provides /chat and /chat-stream endpoints with rate limiting, CORS, and error handling
4
+ Updated with language context support
5
+ """
6
+ import os
7
+ import logging
8
+ import time
9
+ from typing import Optional
10
+ from collections import defaultdict
11
+ import resend
12
+
13
+ from fastapi import FastAPI, Request, HTTPException, status, Depends, Header
14
+ from fastapi.responses import StreamingResponse, JSONResponse
15
+ from fastapi.middleware.cors import CORSMiddleware
16
+ from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
17
+ from pydantic import BaseModel
18
+ from slowapi import Limiter, _rate_limit_exceeded_handler
19
+ from slowapi.util import get_remote_address
20
+ from slowapi.errors import RateLimitExceeded
21
+ from slowapi.middleware import SlowAPIMiddleware
22
+ from dotenv import load_dotenv
23
+
24
+ from agents import Runner, RunContextWrapper
25
+ from agents.exceptions import InputGuardrailTripwireTriggered
26
+ from openai.types.responses import ResponseTextDeltaEvent
27
+ from chatbot.chatbot_agent import launchlabs_assistant
28
+ from sessions.session_manager import session_manager
29
+
30
+ # Load environment variables
31
+ load_dotenv()
32
+
33
+ # Configure Resend
34
+ resend.api_key = os.getenv("RESEND_API_KEY")
35
+
36
+ # Configure logging
37
+ logging.basicConfig(
38
+ level=logging.INFO,
39
+ format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
40
+ handlers=[
41
+ logging.FileHandler('app.log'),
42
+ logging.StreamHandler()
43
+ ]
44
+ )
45
+ logger = logging.getLogger(__name__)
46
+
47
+ # Initialize rate limiter with enhanced security
48
+ limiter = Limiter(key_func=get_remote_address, default_limits=["100/day", "20/hour", "3/minute"])
49
+
50
+ # Create FastAPI app
51
+ app = FastAPI(
52
+ title="Launchlabs Chatbot API",
53
+ description="AI-powered chatbot API for Launchlabs services",
54
+ version="1.0.0"
55
+ )
56
+
57
+ # Add rate limiter middleware
58
+ app.state.limiter = limiter
59
+ app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
60
+ app.add_middleware(SlowAPIMiddleware)
61
+
62
+ # Configure CORS from environment variable
63
+ allowed_origins = os.getenv("ALLOWED_ORIGINS", "").split(",")
64
+ allowed_origins = [origin.strip() for origin in allowed_origins if origin.strip()]
65
+
66
+ if allowed_origins:
67
+ app.add_middleware(
68
+ CORSMiddleware,
69
+ allow_origins=allowed_origins,  # don't combine "*" with allow_credentials=True; browsers reject wildcard origins on credentialed requests
70
+ allow_credentials=True,
71
+ allow_methods=["*"],
72
+ allow_headers=["*"],
73
+ )
74
+ logger.info(f"CORS enabled for origins: {allowed_origins}")
75
+ else:
76
+ logger.warning("No ALLOWED_ORIGINS set in .env - CORS disabled")
77
+
78
+ # Security setup
79
+ security = HTTPBearer()
80
+
81
+ # Enhanced rate limiting dictionaries
82
+ request_counts = defaultdict(list) # Track requests per IP
83
+ TICKET_RATE_LIMIT = 5 # Max 5 tickets per hour per IP
84
+ TICKET_TIME_WINDOW = 3600 # 1 hour in seconds
85
+ MEETING_RATE_LIMIT = 3 # Max 3 meetings per hour per IP
86
+ MEETING_TIME_WINDOW = 3600 # 1 hour in seconds
87
+
88
+ # Request/Response models
89
+ class ChatRequest(BaseModel):
90
+ message: str
91
+ language: Optional[str] = "english" # Default to English if not specified
92
+ session_id: Optional[str] = None # Session ID for chat history
93
+
94
+
95
+ class ChatResponse(BaseModel):
96
+ response: str
97
+ success: bool
98
+ session_id: str # Include session ID in response
99
+
100
+
101
+ class ErrorResponse(BaseModel):
102
+ error: str
103
+ detail: Optional[str] = None
104
+
105
+
106
+ class TicketRequest(BaseModel):
107
+ name: str
108
+ email: str
109
+ message: str
110
+
111
+
112
+ class TicketResponse(BaseModel):
113
+ success: bool
114
+ message: str
115
+
116
+
117
+ class MeetingRequest(BaseModel):
118
+ name: str
119
+ email: str
120
+ date: str # ISO format date string
121
+ time: str # Time in HH:MM format
122
+ timezone: str # Timezone identifier
123
+ duration: int # Duration in minutes
124
+ topic: str # Meeting topic/title
125
+ attendees: list[str] # List of attendee emails
126
+ description: Optional[str] = None # Optional meeting description
127
+ location: Optional[str] = "Google Meet" # Meeting location/platform
128
+
129
+
130
+ class MeetingResponse(BaseModel):
131
+ success: bool
132
+ message: str
133
+ meeting_id: Optional[str] = None # Unique identifier for the meeting
134
+
135
+
136
+ # Security dependency for API key validation
137
+ async def verify_api_key(credentials: HTTPAuthorizationCredentials = Depends(security)):
138
+ """Verify API key for protected endpoints"""
139
+ # In production, you would check against a database of valid keys
140
+ # For now, we'll use an environment variable
141
+ expected_key = os.getenv("API_KEY")
142
+ if not expected_key or credentials.credentials != expected_key:
143
+ raise HTTPException(
144
+ status_code=status.HTTP_401_UNAUTHORIZED,
145
+ detail="Invalid or missing API key",
146
+ )
147
+ return credentials.credentials
148
+
149
+
150
+ def is_ticket_rate_limited(ip_address: str) -> bool:
151
+ """Check if an IP address has exceeded ticket submission rate limits"""
152
+ current_time = time.time()
153
+ # Clean old requests outside the time window
154
+ request_counts[ip_address] = [
155
+ req_time for req_time in request_counts[ip_address]
156
+ if current_time - req_time < TICKET_TIME_WINDOW
157
+ ]
158
+
159
+ # Check if limit exceeded
160
+ if len(request_counts[ip_address]) >= TICKET_RATE_LIMIT:
161
+ return True
162
+
163
+ # Add current request
164
+ request_counts[ip_address].append(current_time)
165
+ return False
166
+
167
+
168
+ def is_meeting_rate_limited(ip_address: str) -> bool:
169
+ """Check if an IP address has exceeded meeting scheduling rate limits"""
170
+ current_time = time.time()
171
+ # Clean old requests outside the time window
172
+ request_counts[ip_address] = [
173
+ req_time for req_time in request_counts[ip_address]
174
+ if current_time - req_time < MEETING_TIME_WINDOW
175
+ ]
176
+
177
+ # Check if limit exceeded
178
+ if len(request_counts[ip_address]) >= MEETING_RATE_LIMIT:
179
+ return True
180
+
181
+ # Add current request
182
+ request_counts[ip_address].append(current_time)
183
+ return False
184
+
185
+
186
+ def query_launchlabs_bot_stream(user_message: str, language: str = "english", session_id: Optional[str] = None):
187
+ """
188
+ Query the Launchlabs bot with streaming - returns async generator.
189
+ Now includes language context and session history.
190
+ Implements fallback to non-streaming when streaming fails (e.g., with Gemini models).
191
+ """
192
+ logger.info(f"AGENT STREAM CALL: query_launchlabs_bot_stream called with message='{user_message}', language='{language}', session_id='{session_id}'")
193
+
194
+ # Get session history if session_id is provided
195
+ history = []
196
+ if session_id:
197
+ history = session_manager.get_session_history(session_id)
198
+ logger.info(f"Retrieved {len(history)} history messages for session {session_id}")
199
+
200
+ try:
201
+ # Create context with language preference and history
202
+ context_data = {"language": language}
203
+ if history:
204
+ context_data["history"] = history
205
+
206
+ ctx = RunContextWrapper(context=context_data)
207
+
208
+ result = Runner.run_streamed(
209
+ launchlabs_assistant,
210
+ input=user_message,
211
+ context=ctx.context
212
+ )
213
+
214
+ async def generate_stream():
215
+ try:
216
+ previous = ""
217
+ has_streamed = False  # becomes True once at least one chunk has been yielded
218
+
219
+ try:
220
+ # Attempt streaming with error handling for each event
221
+ async for event in result.stream_events():
222
+ try:
223
+ if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
224
+ delta = event.data.delta or ""
225
+
226
+ # ---- Spacing Fix ----
227
+ if (
228
+ previous
229
+ and not previous.endswith((" ", "\n"))
230
+ and not delta.startswith((" ", ".", ",", "?", "!", ":", ";"))
231
+ ):
232
+ delta = " " + delta
233
+
234
+ previous = delta
235
+ # ---- End Fix ----
236
+
237
+ yield f"data: {delta}\n\n"
238
+ has_streamed = True
239
+ except Exception as event_error:
240
+ # Handle individual event errors (e.g., missing logprobs field)
241
+ logger.warning(f"Event processing error: {event_error}")
242
+ continue
243
+
244
+ yield "data: [DONE]\n\n"
245
+ logger.info("AGENT STREAM RESULT: query_launchlabs_bot_stream completed successfully")
246
+
247
+ except Exception as stream_error:
248
+ # Fallback to non-streaming if streaming fails
249
+ logger.warning(f"Streaming failed, falling back to non-streaming: {stream_error}")
250
+
251
+ if not has_streamed:
252
+ # Nothing was streamed yet, so retry the request via the non-streaming API
254
+ try:
255
+ # Use the non-streaming API as fallback
256
+ fallback_response = await Runner.run(
257
+ launchlabs_assistant,
258
+ input=user_message,
259
+ context=ctx.context
260
+ )
261
+
262
+ if hasattr(fallback_response, 'final_output'):
263
+ final_output = fallback_response.final_output
264
+ else:
265
+ final_output = fallback_response
266
+
267
+ if hasattr(final_output, 'content'):
268
+ response_text = final_output.content
269
+ elif isinstance(final_output, str):
270
+ response_text = final_output
271
+ else:
272
+ response_text = str(final_output)
273
+
274
+ yield f"data: {response_text}\n\n"
275
+ yield "data: [DONE]\n\n"
276
+ logger.info("AGENT STREAM RESULT: query_launchlabs_bot_stream fallback completed successfully")
277
+ except Exception as fallback_error:
278
+ logger.error(f"Fallback also failed: {fallback_error}", exc_info=True)
279
+ yield "data: [ERROR] Unable to complete request.\n\n"
280
+ else:
281
+ # Already streamed some content, just end gracefully
282
+ yield "data: [DONE]\n\n"
283
+
284
+ except InputGuardrailTripwireTriggered as e:
285
+ logger.warning(f"Guardrail blocked query during streaming: {e}")
286
+ yield "data: [ERROR] Query was blocked by content guardrail.\n\n"
287
+
288
+ except Exception as e:
289
+ logger.error(f"Streaming error: {e}", exc_info=True)
290
+ yield f"data: [ERROR] {str(e)}\n\n"
291
+
292
+ return generate_stream()
293
+
294
+ except Exception as e:
295
+ logger.error(f"Error setting up stream: {e}", exc_info=True)
296
+
297
+ async def error_stream():
298
+ yield "data: [ERROR] Failed to initialize stream.\n\n"
299
+
300
+ return error_stream()
301
+
302
+
303
+ async def query_launchlabs_bot(user_message: str, language: str = "english", session_id: Optional[str] = None):
304
+ """
305
+ Query the Launchlabs bot - returns complete response.
306
+ Includes the user's language preference and session history in the run context.
307
+ """
308
+ logger.info(f"AGENT CALL: query_launchlabs_bot called with message='{user_message[:50]}...', language='{language}', session_id='{session_id}'")
309
+
310
+ # Get session history if session_id is provided
311
+ history = []
312
+ if session_id:
313
+ history = session_manager.get_session_history(session_id)
314
+ logger.info(f"Retrieved {len(history)} history messages for session {session_id}")
315
+
316
+ try:
317
+ # Create context with language preference and history
318
+ context_data = {"language": language}
319
+ if history:
320
+ context_data["history"] = history
321
+
322
+ ctx = RunContextWrapper(context=context_data)
323
+
324
+ response = await Runner.run(
325
+ launchlabs_assistant,
326
+ input=user_message,
327
+ context=ctx.context
328
+ )
329
+ logger.info("AGENT RESULT: query_launchlabs_bot completed successfully")
330
+ return response.final_output
331
+
332
+ except InputGuardrailTripwireTriggered as e:
333
+ logger.warning(f"Guardrail blocked query: {e}")
334
+ raise HTTPException(
335
+ status_code=status.HTTP_403_FORBIDDEN,
336
+ detail="Query was blocked by content guardrail. Please ensure your query is related to Launchlabs services."
337
+ )
338
+ except Exception as e:
339
+ logger.error(f"Error in query_launchlabs_bot: {e}", exc_info=True)
340
+ raise HTTPException(
341
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
342
+ detail="An internal error occurred while processing your request."
343
+ )
344
+
345
+
346
+ @app.get("/")
347
+ async def root():
348
+ return {"status": "ok", "service": "Launchlabs Chatbot API"}
349
+
350
+
351
+ @app.get("/health")
352
+ async def health():
353
+ return {"status": "healthy"}
354
+
355
+
356
+ @app.post("/session")
357
+ async def create_session():
358
+ """
359
+ Create a new chat session
360
+ Returns a session ID that can be used to maintain chat history
361
+ """
362
+ try:
363
+ session_id = session_manager.create_session()
364
+ logger.info(f"Created new session: {session_id}")
365
+ return {"session_id": session_id, "message": "Session created successfully"}
366
+ except Exception as e:
367
+ logger.error(f"Error creating session: {e}", exc_info=True)
368
+ raise HTTPException(
369
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
370
+ detail="Failed to create session"
371
+ )
372
+
373
+
374
+ @app.post("/chat", response_model=ChatResponse)
375
+ @limiter.limit("10/minute") # Limit to 10 requests per minute per IP
376
+ async def chat(request: Request, chat_request: ChatRequest):
377
+ """
378
+ Standard chat endpoint with language support and session history.
379
+ Accepts: {"message": "...", "language": "norwegian", "session_id": "optional-session-id"}
380
+ """
381
+ try:
382
+ # Create or use existing session
383
+ session_id = chat_request.session_id
384
+ if not session_id:
385
+ session_id = session_manager.create_session()
386
+ logger.info(f"Created new session for chat: {session_id}")
387
+
388
+ logger.info(
389
+ f"Chat request from {get_remote_address(request)}: "
390
+ f"language={chat_request.language}, message={chat_request.message[:50]}..., session_id={session_id}"
391
+ )
392
+
393
+ # Add user message to session history
394
+ session_manager.add_message_to_history(session_id, "user", chat_request.message)
395
+
396
+ # Pass language and session to the bot
397
+ response = await query_launchlabs_bot(
398
+ chat_request.message,
399
+ language=chat_request.language,
400
+ session_id=session_id
401
+ )
402
+
403
+ if hasattr(response, 'content'):
404
+ response_text = response.content
405
+ elif isinstance(response, str):
406
+ response_text = response
407
+ else:
408
+ response_text = str(response)
409
+
410
+ # Add bot response to session history
411
+ session_manager.add_message_to_history(session_id, "assistant", response_text)
412
+
413
+ logger.info(f"Chat response generated successfully in {chat_request.language} for session {session_id}")
414
+
415
+ return ChatResponse(
416
+ response=response_text,
417
+ success=True,
418
+ session_id=session_id
419
+ )
420
+
421
+ except HTTPException:
422
+ raise
423
+ except Exception as e:
424
+ logger.error(f"Unexpected error in /chat: {e}", exc_info=True)
425
+ raise HTTPException(
426
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
427
+ detail="An internal error occurred while processing your request."
428
+ )
429
+
430
+
431
+ @app.post("/api/messages", response_model=ChatResponse)
432
+ @limiter.limit("10/minute") # Same rate limit as /chat
433
+ async def api_messages(request: Request, chat_request: ChatRequest):
434
+ """
435
+ Frontend-friendly chat endpoint at /api/messages.
436
+ Exactly mirrors /chat logic for session/history support.
437
+ Expects: {"message": "...", "language": "english", "session_id": "optional"}
438
+ """
439
+ client_ip = get_remote_address(request)
440
+ logger.info(f"API Messages request from {client_ip}: message='{chat_request.message[:50]}...', lang='{chat_request.language}', session='{chat_request.session_id}'")
441
+
442
+ try:
443
+ # Create/use session (Firestore-backed)
444
+ session_id = chat_request.session_id
445
+ if not session_id:
446
+ session_id = session_manager.create_session()
447
+ logger.info(f"New session created for /api/messages: {session_id}")
448
+
449
+ # Save user message to history
450
+ session_manager.add_message_to_history(session_id, "user", chat_request.message)
451
+
452
+ # Call your existing bot query function
453
+ response = await query_launchlabs_bot(
454
+ user_message=chat_request.message,
455
+ language=chat_request.language,
456
+ session_id=session_id
457
+ )
458
+
459
+ # Extract response text
460
+ response_text = (
461
+ response.content if hasattr(response, 'content')
462
+ else response if isinstance(response, str)
463
+ else str(response)
464
+ )
465
+
466
+ # Save AI response to history
467
+ session_manager.add_message_to_history(session_id, "assistant", response_text)
468
+
469
+ logger.info(f"API Messages success: Response sent for session {session_id}")
470
+
471
+ return ChatResponse(
472
+ response=response_text,
473
+ success=True,
474
+ session_id=session_id
475
+ )
476
+
477
+ except InputGuardrailTripwireTriggered as e:
478
+ logger.warning(f"Guardrail blocked /api/messages: {e}")
479
+ raise HTTPException(
480
+ status_code=403,
481
+ detail="Query blocked – please ask about Launchlabs services."
482
+ )
483
+ except Exception as e:
484
+ logger.error(f"Error in /api/messages: {e}", exc_info=True)
485
+ raise HTTPException(
486
+ status_code=500,
487
+ detail="Internal error – try again."
488
+ )
489
+
490
+ @app.post("/chat-stream")
491
+ @limiter.limit("10/minute") # Limit to 10 requests per minute per IP
492
+ async def chat_stream(request: Request, chat_request: ChatRequest):
493
+ """
494
+ Streaming chat endpoint with language support and session history.
495
+ Accepts: {"message": "...", "language": "norwegian", "session_id": "optional-session-id"}
496
+ """
497
+ try:
498
+ # Create or use existing session
499
+ session_id = chat_request.session_id
500
+ if not session_id:
501
+ session_id = session_manager.create_session()
502
+ logger.info(f"Created new session for streaming chat: {session_id}")
503
+
504
+ logger.info(
505
+ f"Stream request from {get_remote_address(request)}: "
506
+ f"language={chat_request.language}, message={chat_request.message[:50]}..., session_id={session_id}"
507
+ )
508
+
509
+ # Add user message to session history
510
+ session_manager.add_message_to_history(session_id, "user", chat_request.message)
511
+
512
+ # Pass language and session to the streaming bot
513
+ stream_generator = query_launchlabs_bot_stream(
514
+ chat_request.message,
515
+ language=chat_request.language,
516
+ session_id=session_id
517
+ )
518
+
519
+ # Note: the assistant's streamed response is NOT added to history here.
520
+ # The frontend must persist it via a separate call, or the stream generator
521
+ # must be extended to append the complete response once streaming finishes.
522
+
523
+ return StreamingResponse(
524
+ stream_generator,
525
+ media_type="text/event-stream",
526
+ headers={
527
+ "Cache-Control": "no-cache",
528
+ "Connection": "keep-alive",
529
+ "X-Accel-Buffering": "no",
530
+ "Session-ID": session_id # Include session ID in headers
531
+ }
532
+ )
533
+
534
+ except HTTPException:
535
+ raise
536
+ except Exception as e:
537
+ logger.error(f"Unexpected error in /chat-stream: {e}", exc_info=True)
538
+ raise HTTPException(
539
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
540
+ detail="An internal error occurred while processing your request."
541
+ )
542
+
543
+
544
+ @app.post("/ticket", response_model=TicketResponse)
545
+ @limiter.limit("5/hour") # Limit to 5 tickets per hour per IP
546
+ async def submit_ticket(request: Request, ticket_request: TicketRequest):
547
+ """
548
+ Submit a support ticket via email using Resend API.
549
+ Accepts: {"name": "John Doe", "email": "john@example.com", "message": "Issue description"}
550
+ """
551
+ try:
552
+ client_ip = get_remote_address(request)
553
+ logger.info(f"Ticket submission request from {ticket_request.name} ({ticket_request.email}) - IP: {client_ip}")
554
+
555
+ # Additional rate limiting for tickets
556
+ if is_ticket_rate_limited(client_ip):
557
+ logger.warning(f"Rate limit exceeded for ticket submission from IP: {client_ip}")
558
+ raise HTTPException(
559
+ status_code=status.HTTP_429_TOO_MANY_REQUESTS,
560
+ detail="Too many ticket submissions. Please try again later."
561
+ )
562
+
563
+ # Get admin email from environment variables or use a default
564
+ admin_email = os.getenv("ADMIN_EMAIL", "admin@yourcompany.com")
565
+
566
+ # Use a verified sender email (you need to verify this in your Resend account)
567
+ # For testing purposes, you can use your Resend account's verified domain
568
+ sender_email = os.getenv("SENDER_EMAIL", "onboarding@resend.dev")
569
+
570
+ # Prepare the email using Resend
571
+ params = {
572
+ "from": sender_email,
573
+ "to": [admin_email],
574
+ "subject": f"Support Ticket from {ticket_request.name}",
575
+ "html": f"""
576
+ <p>Hello Admin,</p>
577
+ <p>A new support ticket has been submitted:</p>
578
+ <p><strong>Name:</strong> {ticket_request.name}</p>
579
+ <p><strong>Email:</strong> {ticket_request.email}</p>
580
+ <p><strong>Message:</strong></p>
581
+ <p>{ticket_request.message}</p>
582
+ <p><strong>IP Address:</strong> {client_ip}</p>
583
+ <br>
584
+ <p>Best regards,<br>Launchlabs Support Team</p>
585
+ """
586
+ }
587
+
588
+ # Send the email
589
+ resend.Emails.send(params)  # exceptions propagate to the handler below
590
+
591
+ logger.info(f"Ticket submitted successfully by {ticket_request.name} from IP: {client_ip}")
592
+
593
+ return TicketResponse(
594
+ success=True,
595
+ message="Ticket submitted successfully. We'll get back to you soon."
596
+ )
597
+
598
+ except HTTPException:
599
+ raise
600
+ except Exception as e:
601
+ logger.error(f"Error submitting ticket: {e}", exc_info=True)
602
+ raise HTTPException(
603
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
604
+ detail="Failed to submit ticket. Please try again later."
605
+ )
606
+
607
+
608
+ @app.post("/schedule-meeting", response_model=MeetingResponse)
609
+ @limiter.limit("3/hour") # Limit to 3 meetings per hour per IP
610
+ async def schedule_meeting(request: Request, meeting_request: MeetingRequest):
611
+ """
612
+ Schedule a meeting and send email invitations using Resend API.
613
+ Accepts meeting details and sends professional email invitations to organizer and attendees.
614
+ """
615
+ try:
616
+ client_ip = get_remote_address(request)
617
+ logger.info(f"Meeting scheduling request from {meeting_request.name} ({meeting_request.email}) - IP: {client_ip}")
618
+
619
+ # Additional rate limiting for meetings
620
+ if is_meeting_rate_limited(client_ip):
621
+ logger.warning(f"Rate limit exceeded for meeting scheduling from IP: {client_ip}")
622
+ raise HTTPException(
623
+ status_code=status.HTTP_429_TOO_MANY_REQUESTS,
624
+ detail="Too many meeting requests. Please try again later."
625
+ )
626
+
627
+ # Generate a unique meeting ID
628
+ meeting_id = f"mtg_{int(time.time())}"
629
+
630
+ # Get admin email from environment variables or use a default
631
+ admin_email = os.getenv("ADMIN_EMAIL", "admin@yourcompany.com")
632
+
633
+ # Use a verified sender email (you need to verify this in your Resend account)
634
+ sender_email = os.getenv("SENDER_EMAIL", "onboarding@resend.dev")
635
+
636
+ # For Resend testing limitations, we can only send to the owner's email
637
+ # In production, you would verify a domain and use that instead
638
+ owner_email = os.getenv("ADMIN_EMAIL", "admin@yourcompany.com")
639
+
640
+ # Format date and time for display
641
+ formatted_datetime = f"{meeting_request.date} at {meeting_request.time} {meeting_request.timezone}"
642
+
643
+ # Create calendar link (Google Calendar link example)
644
+ calendar_link = f"https://calendar.google.com/calendar/render?action=TEMPLATE&text={meeting_request.topic}&dates={meeting_request.date.replace('-', '')}T{meeting_request.time.replace(':', '')}00Z/{meeting_request.date.replace('-', '')}T{meeting_request.time.replace(':', '')}00Z&details={meeting_request.description or 'Meeting scheduled via Launchlabs'}&location={meeting_request.location}"
645
+
646
+ # Combine all attendees (organizer + additional attendees)
647
+ # Validate and format email addresses
648
+ all_attendees = [meeting_request.email]
649
+
650
+ # Validate additional attendees - they must be valid email addresses
651
+ for attendee in meeting_request.attendees:
652
+ # Simple email validation
653
+ if "@" in attendee and "." in attendee:
654
+ all_attendees.append(attendee)
655
+ else:
656
+ # If not a valid email, skip or treat as name only
657
+ logger.warning(f"Invalid email format for attendee: {attendee}. Skipping.")
658
+
659
+ # Remove duplicates while preserving order
660
+ seen = set()
661
+ unique_attendees = []
662
+ for email in all_attendees:
663
+ if email not in seen:
664
+ seen.add(email)
665
+ unique_attendees.append(email)
666
+ all_attendees = unique_attendees
667
+
668
+ # Prepare the professional HTML email template
669
+ html_template = f"""
670
+ <!DOCTYPE html>
671
+ <html>
672
+ <head>
673
+ <meta charset="UTF-8">
674
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
675
+ <title>Meeting Scheduled - {meeting_request.topic}</title>
676
+ </head>
677
+ <body style="font-family: Arial, sans-serif; line-height: 1.6; color: #333; max-width: 600px; margin: 0 auto; padding: 20px;">
678
+ <div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 30px; text-align: center; border-radius: 10px 10px 0 0;">
679
+ <h1 style="margin: 0; font-size: 28px;">Meeting Confirmed!</h1>
680
+ <p style="font-size: 18px; margin-top: 10px;">Your meeting has been successfully scheduled</p>
681
+ </div>
682
+
683
+ <div style="background-color: #ffffff; padding: 30px; border: 1px solid #eaeaea; border-top: none; border-radius: 0 0 10px 10px;">
684
+ <h2 style="color: #333;">Meeting Details</h2>
685
+
686
+ <div style="background-color: #f8f9fa; padding: 20px; border-radius: 8px; margin: 20px 0;">
687
+ <table style="width: 100%; border-collapse: collapse;">
688
+ <tr>
689
+ <td style="padding: 8px 0; font-weight: bold; width: 30%;">Topic:</td>
690
+ <td style="padding: 8px 0;">{meeting_request.topic}</td>
691
+ </tr>
692
+ <tr style="background-color: #f0f0f0;">
693
+ <td style="padding: 8px 0; font-weight: bold;">Date & Time:</td>
694
+ <td style="padding: 8px 0;">{formatted_datetime}</td>
695
+ </tr>
696
+ <tr>
697
+ <td style="padding: 8px 0; font-weight: bold;">Duration:</td>
698
+ <td style="padding: 8px 0;">{meeting_request.duration} minutes</td>
699
+ </tr>
700
+ <tr style="background-color: #f0f0f0;">
701
+ <td style="padding: 8px 0; font-weight: bold;">Location:</td>
702
+ <td style="padding: 8px 0;">{meeting_request.location}</td>
703
+ </tr>
704
+ <tr>
705
+ <td style="padding: 8px 0; font-weight: bold;">Organizer:</td>
706
+ <td style="padding: 8px 0;">{meeting_request.name} ({meeting_request.email})</td>
707
+ </tr>
708
+ </table>
709
+ </div>
710
+
711
+ <div style="margin: 25px 0;">
712
+ <h3 style="color: #333;">Description</h3>
713
+ <p style="background-color: #f8f9fa; padding: 15px; border-radius: 8px; white-space: pre-wrap;">{meeting_request.description or 'No description provided.'}</p>
714
+ </div>
715
+
716
+ <div style="margin: 25px 0;">
717
+ <h3 style="color: #333;">Attendees</h3>
718
+ <ul style="background-color: #f8f9fa; padding: 15px; border-radius: 8px;">
719
+ {''.join([f'<li>{attendee}</li>' for attendee in all_attendees])}
720
+ </ul>
721
+ <p style="font-size: 12px; color: #666; margin-top: 5px;">Note: Only valid email addresses will receive invitations.</p>
722
+ </div>
723
+
724
+ <div style="text-align: center; margin: 30px 0;">
725
+ <a href="{calendar_link}" style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 12px 25px; text-decoration: none; border-radius: 5px; font-weight: bold; display: inline-block;">Add to Calendar</a>
726
+ </div>
727
+
728
+ <div style="background-color: #e3f2fd; padding: 15px; border-radius: 8px; margin-top: 25px;">
729
+ <p style="margin: 0;"><strong>Meeting ID:</strong> {meeting_id}</p>
730
+ <p style="margin: 10px 0 0 0; font-size: 14px; color: #666;">Need to make changes? Contact the organizer or reply to this email.</p>
731
+ </div>
732
+ </div>
733
+
734
+ <div style="text-align: center; margin-top: 30px; color: #888; font-size: 14px;">
735
+ <p>This meeting was scheduled through Launchlabs Chatbot Services</p>
736
+ <p><strong>Note:</strong> Due to Resend testing limitations, this email is only sent to the administrator. In production, after domain verification, invitations will be sent to all attendees.</p>
737
+ <p>© 2025 Launchlabs. All rights reserved.</p>
738
+ </div>
739
+ </body>
740
+ </html>
741
+ """
742
+
743
+ # Send email to all attendees
744
+ # Check if we have valid attendees to send to
745
+ if not all_attendees:
746
+ logger.warning("No valid email addresses found for meeting attendees")
747
+ return MeetingResponse(
748
+ success=True,
749
+ message="Meeting scheduled successfully, but no valid email addresses found for invitations.",
750
+ meeting_id=meeting_id
751
+ )
752
+
753
+ # For Resend testing limitations, we can only send to the owner's email
754
+ # In production, you would verify a domain and send to all attendees
756
+
757
+ # Prepare email for owner with all attendee information
759
+ # In a real implementation, you would send to all attendees after verifying your domain
760
+ # For now, we're sending to the owner with information about all attendees
761
+
762
+ params = {
763
+ "from": sender_email,
764
+ "to": [owner_email], # Only send to owner due to Resend testing limitations
765
+ "subject": f"Meeting Scheduled: {meeting_request.topic}",
766
+ "html": html_template
767
+ }
768
+
769
+ # Send the email
770
+ try:
771
+ resend.Emails.send(params)
772
+ logger.info(f"Meeting email sent to {owner_email}, covering {len(all_attendees)} attendee(s)")
773
+ except Exception as email_error:
774
+ logger.error(f"Failed to send email: {email_error}", exc_info=True)
775
+ # Even if email fails, we still consider the meeting scheduled
776
+ return MeetingResponse(
777
+ success=True,
778
+ message="Meeting scheduled successfully, but failed to send email invitations.",
779
+ meeting_id=meeting_id
780
+ )
781
+
782
+ logger.info(f"Meeting scheduled successfully by {meeting_request.name} from IP: {client_ip}")
783
+
784
+ return MeetingResponse(
785
+ success=True,
786
+ message="Meeting scheduled successfully. Due to Resend testing limitations, invitations are only sent to the administrator. In production, after verifying your domain, invitations will be sent to all attendees.",
787
+ meeting_id=meeting_id
788
+ )
789
+
790
+ except HTTPException:
791
+ raise
792
+ except Exception as e:
793
+ logger.error(f"Error scheduling meeting: {e}", exc_info=True)
794
+ raise HTTPException(
795
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
796
+ detail="Failed to schedule meeting. Please try again later."
797
+ )
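The calendar link built in the handler above interpolates user-supplied text directly into the URL; in production the query parameters should be percent-encoded. A sketch using only the standard library (the helper name is hypothetical; `date` is YYYY-MM-DD and `time_` is HH:MM, matching the request fields above):

```python
from urllib.parse import urlencode

def google_calendar_link(topic: str, date: str, time_: str, details: str, location: str) -> str:
    """Build a Google Calendar template URL with percent-encoded parameters."""
    # Google Calendar expects timestamps like 20250115T143000Z
    stamp = f"{date.replace('-', '')}T{time_.replace(':', '')}00Z"
    params = {
        "action": "TEMPLATE",
        "text": topic,
        "dates": f"{stamp}/{stamp}",
        "details": details,
        "location": location,
    }
    return "https://calendar.google.com/calendar/render?" + urlencode(params)
```

`urlencode` handles spaces, ampersands, and slashes, so topics like "Kickoff & Demo" no longer break the generated link.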
798
+
799
+
800
+ @app.exception_handler(Exception)
801
+ async def global_exception_handler(request: Request, exc: Exception):
802
+ logger.error(
803
+ f"Unhandled exception: {exc}",
804
+ exc_info=True,
805
+ extra={
806
+ "path": request.url.path,
807
+ "method": request.method,
808
+ "client": get_remote_address(request)
809
+ }
810
+ )
811
+
812
+ return JSONResponse(
813
+ status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
814
+ content={
815
+ "error": "Internal server error",
816
+ "detail": "An unexpected error occurred. Please try again later."
817
+ }
818
+ )
819
+
820
+
821
+ if __name__ == "__main__":
822
+ import uvicorn
823
+ uvicorn.run(app, host="0.0.0.0", port=8000)
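Clients of `/chat-stream` receive `text/event-stream` output in which each chunk is framed as `data: <payload>` followed by a blank line, and the stream ends with the `data: [DONE]` sentinel. A minimal client-side parsing sketch (the helper name is hypothetical):

```python
def parse_sse_payloads(raw: str) -> list[str]:
    """Collect `data:` payloads from raw SSE text, stopping at the [DONE] sentinel."""
    payloads = []
    for line in raw.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separators and any non-data fields
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        payloads.append(payload)
    return payloads
```

A real client would read the response incrementally (e.g. with `httpx.stream`) rather than buffering the whole body, but the framing logic is the same.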
chatbot/__pycache__/chatbot_agent.cpython-312.pyc ADDED
Binary file (676 Bytes). View file
 
chatbot/chatbot_agent.py ADDED
@@ -0,0 +1,13 @@
1
+ from agents import Agent
2
+ from config.chabot_config import model
3
+ from instructions.chatbot_instructions import launchlabs_dynamic_instructions
4
+ from guardrails.guardrails_input_function import guardrail_input_function
5
+ from tools.document_reader_tool import read_document_data, list_available_documents
6
+
7
+ launchlabs_assistant = Agent(
8
+ name="Launchlabs Assistant",
9
+ instructions=launchlabs_dynamic_instructions,
10
+ model=model,
11
+ input_guardrails=[guardrail_input_function],
12
+ tools=[read_document_data, list_available_documents], # Document reading tools
13
+ )
config/__pycache__/agent_patch.cpython-312.pyc ADDED
Binary file (2.85 kB). View file
 
config/__pycache__/chabot_config.cpython-312.pyc ADDED
Binary file (927 Bytes). View file
 
config/chabot_config.py ADDED
@@ -0,0 +1,32 @@
1
+ import os
2
+ from dotenv import load_dotenv
3
+ from agents import AsyncOpenAI, OpenAIChatCompletionsModel, set_tracing_disabled
4
+
5
+ set_tracing_disabled(True)
6
+ load_dotenv()
7
+ openai_api_key = os.getenv("OPENAI_API_KEY")
8
+ gemini_api_key = os.getenv("GEMINI_API_KEY")
9
+
10
+
11
+ if not gemini_api_key:
12
+ raise ValueError(
13
+ "GEMINI_API_KEY is not set. Please create a .env file in the project root "
14
+ "and add: GEMINI_API_KEY=your_api_key_here"
15
+ )
16
+
17
+
18
+ # client_provider = AsyncOpenAI(
19
+ # api_key=openai_api_key,
20
+ # base_url="https://api.openai.com/v1/",
21
+ # )
22
+
23
+ client_provider = AsyncOpenAI(
24
+ api_key=gemini_api_key,
25
+ base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
26
+ )
27
+
28
+
29
+ model = OpenAIChatCompletionsModel(
30
+ model="gemini-2.5-flash",
31
+ openai_client=client_provider
32
+ )
data.docx ADDED
Binary file (18 kB). View file
 
guardrails/__pycache__/guardrails_input_function.cpython-312.pyc ADDED
Binary file (1.78 kB). View file
 
guardrails/__pycache__/input_guardrails.cpython-312.pyc ADDED
Binary file (1.5 kB). View file
 
guardrails/guardrails_input_function.py ADDED
@@ -0,0 +1,44 @@
1
+ import traceback
2
+ from agents import RunContextWrapper, Runner, GuardrailFunctionOutput, input_guardrail
3
+ from guardrails.input_guardrails import guardrail_agent
4
+
5
+ @input_guardrail
6
+ async def guardrail_input_function(ctx: RunContextWrapper, agent, user_input: str):
7
+ try:
8
+ result = await Runner.run(
9
+ guardrail_agent,
10
+ input=user_input,
11
+ context=ctx.context
12
+ )
13
+
14
+ # Check if result has the expected structure
15
+ if not result or not hasattr(result, 'final_output'):
16
+ print(f"Warning: Guardrail agent returned unexpected result: {result}")
17
+ # Allow the query to proceed if guardrail fails
18
+ return GuardrailFunctionOutput(
19
+ output_info=None,
20
+ tripwire_triggered=False
21
+ )
22
+
23
+ final_output = result.final_output
24
+
25
+ # Check if final_output has the expected attribute
26
+ if not hasattr(final_output, 'is_query_about_launchlabs'):
27
+ print(f"Warning: Guardrail output missing is_query_about_launchlabs attribute: {final_output}")
28
+ return GuardrailFunctionOutput(
29
+ output_info=final_output,
30
+ tripwire_triggered=False
31
+ )
32
+
33
+ return GuardrailFunctionOutput(
34
+ output_info=final_output,
35
+ tripwire_triggered=not final_output.is_query_about_launchlabs
36
+ )
37
+ except Exception as e:
38
+ print(f"Error in guardrail_input_function: {e}")
39
+ print(traceback.format_exc())
40
+ # Allow the query to proceed if guardrail fails
41
+ return GuardrailFunctionOutput(
42
+ output_info=None,
43
+ tripwire_triggered=False
44
+ )
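The guardrail function above "fails open": any classifier error or malformed output lets the query through rather than blocking users. That policy can be isolated into a pure helper for testing; a sketch with hypothetical names (`GuardrailDecision`, `decide` are not part of the app):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuardrailDecision:
    tripwire_triggered: bool
    reason: str

def decide(is_on_topic: Optional[bool]) -> GuardrailDecision:
    """Map a classifier verdict (or None on classifier failure) to a block/allow decision."""
    if is_on_topic is None:
        # Fail open: a broken guardrail must not take the chatbot down with it
        return GuardrailDecision(False, "classifier unavailable; allowing")
    return GuardrailDecision(not is_on_topic, "classifier verdict applied")
```

Failing open trades some filtering strictness for availability; the opposite choice (fail closed) would block every user whenever the guardrail model misbehaves.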
guardrails/input_guardrails.py ADDED
@@ -0,0 +1,28 @@
1
+ from agents import Agent
2
+ from config.chabot_config import model
3
+ from schema.chatbot_schema import OutputType
4
+
5
+ guardrail_agent = Agent(
6
+ name="Launchlabs Guardrail Agent",
7
+ instructions="""
8
+ You are a guardrail assistant that validates if the user's query is about Launchlabs services,
9
+ AI solutions, automation tools, bookings, partnerships, or FAQs.
10
+
11
+ IMPORTANT: Allow general greetings, neutral questions, and queries that could lead to Launchlabs-related conversations.
12
+ Only block queries that are clearly unrelated (e.g., asking about cooking recipes, weather, unrelated products).
13
+
14
+ - Set is_query_about_launchlabs=True if:
15
+ * The query is directly about Launchlabs services, AI solutions, automation, chatbots, or related topics
16
+ * The query is a general greeting (hello, hi, how can you help, etc.)
17
+ * The query is neutral and could lead to a Launchlabs conversation
18
+ * The query asks about business solutions, automation, or AI tools
19
+
20
+ - Set is_query_about_launchlabs=False ONLY if:
21
+ * The query is clearly about completely unrelated topics (cooking, sports, unrelated products, etc.)
22
+ * The query is spam or malicious
23
+
24
+ - Always provide a clear reason for your decision.
25
+ """,
26
+ model=model,
27
+ output_type=OutputType,
28
+ )
instructions/__pycache__/chatbot_instructions.cpython-312.pyc ADDED
Binary file (15.7 kB). View file
 
instructions/chatbot_instructions.py ADDED
@@ -0,0 +1,269 @@
+ from agents import RunContextWrapper
+
+ def launchlabs_dynamic_instructions(ctx: RunContextWrapper, agent) -> str:
+     """Create dynamic instructions for Launchlabs chatbot queries with language context."""
+
+     # Get the user's selected language from the run context
+     user_lang = ctx.context.get("language", "english").lower()
+
+     # Determine which language the assistant must answer in
+     if user_lang.startswith("nor") or "norwegian" in user_lang or user_lang == "no":
+         language_instruction = "\n\n🔴 CRITICAL: You MUST respond ONLY in Norwegian (Norsk). Do NOT use English unless the user explicitly requests it."
+     elif user_lang.startswith("eng") or "english" in user_lang or user_lang == "en":
+         language_instruction = "\n\n🔴 CRITICAL: You MUST respond ONLY in English. Do NOT use Norwegian unless the user explicitly requests it."
+     else:
+         language_instruction = f"\n\n🔴 CRITICAL: You MUST respond ONLY in {user_lang}. Do NOT use any other language unless the user explicitly requests it."
+
+     instructions = """
+ # LAUNCHLABS ASSISTANT - CORE INSTRUCTIONS
+
+ ## ROLE
+ You are Launchlabs Assistant – the official AI assistant for Launchlabs (launchlabs.no).
+ You help founders, startups, and potential partners professionally, clearly, and in a solution-oriented way.
+ Your main goal is to guide, provide concrete answers, and always lead the user to action (consultation booking, project start, contact).
+
+ ## ABOUT LAUNCHLABS
+ Launchlabs helps ambitious startups transform ideas into successful companies using:
+ · Full brand development
+ · Website and app creation
+ · AI-driven integrations
+ · Automation and workflow solutions
+
+ We focus on customized solutions, speed, innovation, and long-term partnership with clients.
+
+ ## KEY CAPABILITIES
+ You have access to company documents through specialized tools. When users ask questions about company information, products, or services, you MUST use these tools:
+ 1. `list_available_documents()` - List all available documents
+ 2. `read_document_data(query)` - Search for specific information in company documents
+
+ ## WHEN TO USE TOOLS
+ Whenever a user asks about documents, services, products, or company information, you MUST use the appropriate tool FIRST before responding.
+
+ Examples of when to use tools:
+ - User asks "What documents do you have?" → Use `list_available_documents()`
+ - User asks "What services do you offer?" → Use `read_document_data("services")`
+ - User asks "Tell me about your products" → Use `read_document_data("products")`
+
+ IMPORTANT: When you use a tool, you MUST incorporate the tool's response directly into your answer. Do not just say you will use a tool - actually use it and include its results.
+
+ Example of a correct response:
+ User: "What documents do you have?"
+ Assistant: "I found the following documents: [tool output here]"
+
+ Example of an incorrect response:
+ User: "What documents do you have?"
+ Assistant: "I will now use the tool to get this information."
+
+ Always execute tools and show their results.
+
+ Launchlabs is located in Norway; always answer questions about its location correctly.
+ Users can ask questions in English or Norwegian, and the assistant must respond in the same language as the user.
+
+ ## RESPONSE GUIDELINES
+ - Professional, confident, and direct.
+ - Avoid vague responses. Always suggest next steps:
+ · “Do you want me to schedule a consultation?”
+ · “Do you want me to connect you with a project manager?”
+ · “Do you want me to send you our portfolio?”
+ - Be concise and direct in your responses
+ - Always guide users toward concrete actions (consultation booking, project start, contact)
+ - Maintain a professional tone
+
+ ## DEPARTMENT-SPECIFIC BEHAVIOR
+ 🟦 1. SALES / NEW PROJECTS
+ Purpose: Help the user understand Launchlabs’ offerings and start new projects.
+ Explain:
+ · Full range of services (brand, website, apps, AI integrations, automation).
+ · How to start a project (consultation → proposal → dashboard/project management).
+ · Pricing and custom packages.
+ Example: “Launchlabs helps startups turn ideas into businesses with branding, websites, apps, and AI solutions. Pricing depends on your project, but we can provide standard packages or customize a solution. Do you want me to schedule a consultation now?”
+
+ 🟩 2. OPERATIONS / SUPPORT
+ Purpose: Assist existing clients with ongoing projects, updates, and access to project dashboards.
+ · Explain how to access project dashboards.
+ · Provide guidance for reporting issues or questions.
+ · Inform about response times and escalation.
+ Example: “You can access your project dashboard via launchlabs.no. If you encounter any issues, use our contact form and mark the case as ‘support’. Do you want me to send you the link now?”
+
+ 🟥 3. TECHNICAL / DEVELOPMENT
+ Purpose: Provide basic technical explanations and integration options.
+ · Explain integrations with AI tools, web apps, and third-party platforms.
+ · Offer a connection to the technical/development team if needed.
+ Example: “We can integrate your startup solution with AI tools, apps, and other platforms. Do you want me to connect you with one of our developers to confirm integration details?”
+
+ 🟨 4. DASHBOARD / PROJECT MANAGEMENT
+ Purpose: Help users understand the project dashboard.
+ Explain:
+ · Where the dashboard is located.
+ · What it shows (tasks, deadlines, project progress, invoices).
+ · How to get access (after onboarding/consultation).
+ Example: “The dashboard shows all your project progress, deadlines, and invoices. After consultation and onboarding, you’ll get access. Do you want me to show you how to start onboarding?”
+
+ 🟪 5. ADMINISTRATION / CONTACT
+ Purpose: Provide contact info and guide the user to the correct department.
+ · Provide contacts for sales, technical, and support.
+ · Schedule meetings or send forms.
+ Example: “You can contact us via the contact form on launchlabs.no. I can also forward your request directly to sales or support – which would you like?”
+
+ ## FAQ SECTION (KNOWLEDGE BASE)
+ 1. What does Launchlabs do? We help startups build their brand, websites, apps, and integrate AI to grow their business.
+ 2. Which languages does the bot support? All languages, determined during onboarding.
+ 3. How does onboarding work? Book a consultation → select services → access the project dashboard.
+ 4. Where can I see pricing? Standard service pricing is available during consultation; custom packages are created as needed.
+ 5. How do I contact support? Via the contact form on launchlabs.no – select “Support”.
+ 6. Do you offer AI integration? Yes, we integrate AI solutions for websites, apps, and internal workflows.
+ 7. Can I see examples of your work? Yes, the bot can provide links to our portfolio or schedule a demo.
+ 8. How fast will I get a response? Normally within one business day, faster for ongoing projects.
+
+ ## ACTION PROMPTS
+ Always conclude with clear action prompts:
+ - “Do you want me to schedule a consultation?”
+ - “Do you want me to connect you with a project manager?”
+ - “Do you want me to send you our portfolio?”
+
+ ## FALLBACK BEHAVIOR
+ If unsure of an answer: "I will forward this to the right department to make sure you get accurate information. Would you like me to do that now?"
+ Log conversation details and route to a human agent.
+
+ ## CONVERSATION FLOW
+ 1. Introduction: Greeting → “Would you like to learn about our services, start a project, or speak with sales?”
+ 2. Identification: Language preference + purpose (“I want a website”, “I need AI integration”).
+ 3. Action: Route to the correct department or start onboarding/consultation.
+ 4. Follow-up: Confirm the case is logged or the link has been sent.
+ 5. Closure: “Would you like me to send a summary via email?”
+
+ ## PRIMARY GOAL
+ Every conversation must end with action – consultation, project initiation, contact, or follow-up.
+
+ ## 🇳🇴 NORSK SEKSJON (NORWEGIAN SECTION)
+
+ **Rolle:**
+ Du er Launchlabs Assistant – den offisielle AI-assistenten for Launchlabs (launchlabs.no).
+ Du hjelper gründere, startups og potensielle partnere profesjonelt, klart og løsningsorientert.
+ Ditt hovedmål er å veilede, gi konkrete svar og alltid lede brukeren til handling (bestilling av konsultasjon, prosjektstart, kontakt).
+
+ **Om Launchlabs:**
+ Launchlabs hjelper ambisiøse startups med å transformere ideer til suksessfulle selskaper ved bruk av:
+ · Full merkevareutvikling
+ · Nettsteds- og app-opprettelse
+ · AI-drevne integrasjoner
+ · Automatisering og arbeidsflytløsninger
+
+ Vi fokuserer på tilpassede løsninger, hastighet, innovasjon og langsiktig partnerskap med kunder.
+
+ **Nøkkelfunksjoner:**
+ Du har tilgang til firmadokumenter gjennom spesialiserte verktøy. Når brukere spør om firmainformasjon, produkter eller tjenester, må du BRUKE disse verktøyene:
+ 1. `list_available_documents()` - Liste over alle tilgjengelige dokumenter
+ 2. `read_document_data(query)` - Søk etter spesifikk informasjon i firmadokumenter
+
+ **Når du skal bruke verktøy:**
+ Når en bruker spør om dokumenter, tjenester, produkter eller firmainformasjon, må du BRUKE det aktuelle verktøyet FØRST før du svarer.
+
+ Eksempler på når du skal bruke verktøy:
+ - Bruker spør "Hvilke dokumenter har dere?" → Bruk `list_available_documents()`
+ - Bruker spør "Hvilke tjenester tilbyr dere?" → Bruk `read_document_data("tjenester")`
+ - Bruker spør "Fortell meg om produktene deres" → Bruk `read_document_data("produkter")`
+
+ VIKTIG: Når du bruker et verktøy, MÅ du inkludere verktøyets svar direkte i ditt svar. Ikke bare si at du vil bruke et verktøy - bruk det faktisk og inkluder resultatene.
+
+ Eksempel på riktig svar:
+ Bruker: "Hvilke dokumenter har dere?"
+ Assistent: "Jeg fant følgende dokumenter: [verktøyets resultat her]"
+
+ Eksempel på feil svar:
+ Bruker: "Hvilke dokumenter har dere?"
+ Assistent: "Jeg vil nå bruke verktøyet for å hente denne informasjonen."
+
+ Utfør alltid verktøy og vis resultatene.
+
+ Launchlabs er lokalisert i Norge; svar alltid korrekt på spørsmål om plassering.
+ Brukere kan stille spørsmål på engelsk eller norsk, og assistenten må svare på samme språk som brukeren.
+
+ **Retningslinjer for svar:**
+ - Profesjonell, selvsikker og direkte.
+ - Unngå vage svar. Foreslå alltid neste steg:
+ · “Vil du at jeg skal bestille en konsultasjon?”
+ · “Vil du at jeg skal koble deg til en prosjektleder?”
+ · “Vil du at jeg skal sende deg vår portefølje?”
+ - Vær kortfattet og direkte i svarene dine
+ - Led alltid brukere mot konkrete handlinger (bestilling av konsultasjon, prosjektstart, kontakt)
+ - Oppretthold en profesjonell tone
+
+ **Avdelingsspesifikk oppførsel**
+ 🟦 1. SALG / NYE PROSJEKTER
+ Formål: Hjelpe brukeren med å forstå Launchlabs’ tilbud og starte nye prosjekter.
+ Forklar:
+ · Fullt spekter av tjenester (merkevare, nettsted, apper, AI-integrasjoner, automatisering).
+ · Hvordan starte et prosjekt (konsultasjon → tilbud → dashbord/prosjektstyring).
+ · Prising og tilpassede pakker.
+ Eksempel: “Launchlabs hjelper startups med å gjøre ideer til bedrifter med merkevare, nettsteder, apper og AI-løsninger. Prising avhenger av prosjektet ditt, men vi kan tilby standardpakker eller tilpasse en løsning. Vil du at jeg skal bestille en konsultasjon nå?”
+
+ 🟩 2. DRIFT / STØTTE
+ Formål: Assistere eksisterende kunder med pågående prosjekter, oppdateringer og tilgang til prosjektdashbord.
+ · Forklar hvordan man får tilgang til prosjektdashbord.
+ · Gi veiledning for å rapportere problemer eller spørsmål.
+ · Informer om svarstider og eskalering.
+ Eksempel: “Du kan få tilgang til prosjektdashbordet ditt via launchlabs.no. Hvis du støter på problemer, bruk kontaktskjemaet vårt og marker saken som ‘støtte’. Vil du at jeg skal sende deg lenken nå?”
+
+ 🟥 3. TEKNISK / UTVIKLING
+ Formål: Gi grunnleggende tekniske forklaringer og integrasjonsalternativer.
+ · Forklar integrasjoner med AI-verktøy, webapper og tredjepartsplattformer.
+ · Tilby tilkobling til teknisk/utviklingsteam hvis nødvendig.
+ Eksempel: “Vi kan integrere startup-løsningen din med AI-verktøy, apper og andre plattformer. Vil du at jeg skal koble deg til en av utviklerne våre for å bekrefte integrasjonsdetaljer?”
+
+ 🟨 4. DASHBORD / PROSJEKTSTYRING
+ Formål: Hjelpe brukere med å forstå prosjektdashbordet.
+ Forklar:
+ · Hvor dashbordet er plassert.
+ · Hva det viser (oppgaver, frister, prosjektfremdrift, fakturaer).
+ · Hvordan få tilgang (etter onboarding/konsultasjon).
+ Eksempel: “Dashbordet viser all prosjektfremdrift, frister og fakturaer. Etter konsultasjon og onboarding får du tilgang. Vil du at jeg skal vise deg hvordan du starter onboarding?”
+
+ 🟪 5. ADMINISTRASJON / KONTAKT
+ Formål: Gi kontaktinfo og veilede til riktig avdeling.
+ · Gi kontakter for salg, teknisk og støtte.
+ · Bestill møter eller send skjemaer.
+ Eksempel: “Du kan kontakte oss via kontaktskjemaet på launchlabs.no. Jeg kan også videresende forespørselen din direkte til salg eller støtte – hvilken vil du ha?”
+
+ **FAQ-SEKSJON (KUNNSKAPSBASEN)**
+ 1. Hva gjør Launchlabs? Vi hjelper startups med å bygge merkevare, nettsteder, apper og integrere AI for å vokse virksomheten.
+ 2. Hvilke språk støtter boten? Alle språk, bestemt under onboarding.
+ 3. Hvordan fungerer onboarding? Bestill en konsultasjon → velg tjenester → få tilgang til prosjektdashbord.
+ 4. Hvor kan jeg se prising? Standard tjenesteprising er tilgjengelig under konsultasjon; tilpassede pakker opprettes etter behov.
+ 5. Hvordan kontakter jeg støtte? Via kontaktskjemaet på launchlabs.no – velg “Støtte”.
+ 6. Tilbyr dere AI-integrasjon? Ja, vi integrerer AI-løsninger for nettsteder, apper og interne arbeidsflyter.
+ 7. Kan jeg se eksempler på arbeidet deres? Ja, boten kan gi lenker til porteføljen vår eller bestille en demo.
+ 8. Hvor raskt får jeg svar? Normalt innen én virkedag, raskere for pågående prosjekter.
+
+ **Handlingsforespørsler**
+ Avslutt alltid med klare handlingsforespørsler:
+ - “Vil du at jeg skal bestille en konsultasjon?”
+ - “Vil du at jeg skal koble deg til en prosjektleder?”
+ - “Vil du at jeg skal sende deg vår portefølje?”
+
+ **Reserveløsning**
+ Hvis usikker på svaret: “Jeg vil videresende dette til riktig avdeling for å sikre at du får nøyaktig informasjon. Vil du at jeg skal gjøre det nå?”
+ Logg samtalen og rut til menneskelig agent.
+
+ **Samtaleflyt**
+ 1. Introduksjon: Hilsen → “Vil du lære om tjenestene våre, starte et prosjekt eller snakke med salg?”
+ 2. Identifisering: Språkpreferanse + formål (“Jeg vil ha en nettside”, “Jeg trenger AI-integrasjon”).
+ 3. Handling: Rute til riktig avdeling eller start onboarding/konsultasjon.
+ 4. Oppfølging: Bekreft at saken er logget eller lenken er sendt.
+ 5. Avslutning: “Vil du at jeg skal sende en oppsummering via e-post?”
+
+ **Hovedmål**
+ Hver samtale må avsluttes med handling – konsultasjon, prosjektinitiering, kontakt eller oppfølging.
+
+ ## FORMATTING RULE (CRITICAL)
+ - Respond in PLAIN TEXT only. Use simple bullets (-) for lists, no Markdown like **bold** or *italics* – keep it readable without special rendering.
+ - Example good response: "Launchlabs helps startups with full brand development. We build websites and apps too. Want a consultation?"
+ - Avoid repetition: Keep answers under 200 words, no duplicate sentences.
+ - If using tools, summarize cleanly: "From our docs: [key points]."
+     """
+
+     # Append the critical language instruction at the end
+     return instructions + language_instruction
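
The language-selection branching above is easy to verify in isolation. Below is a minimal, dependency-free sketch of the same branching; the function name and shortened instruction strings are illustrative, not part of the repository, and the `agents` SDK is deliberately omitted:

```python
# Standalone sketch of the language-selection branching used in
# launchlabs_dynamic_instructions, without the agents SDK dependency.
# Instruction strings are shortened here for illustration.
def pick_language_instruction(user_lang: str) -> str:
    user_lang = user_lang.lower()
    if user_lang.startswith("nor") or "norwegian" in user_lang or user_lang == "no":
        return "Respond only in Norwegian."
    elif user_lang.startswith("eng") or "english" in user_lang or user_lang == "en":
        return "Respond only in English."
    else:
        # Any other language is passed through verbatim
        return f"Respond only in {user_lang}."

print(pick_language_instruction("no"))      # Norwegian branch
print(pick_language_instruction("EN"))      # English branch
print(pick_language_instruction("german"))  # fallback branch
```

Note that the fallback branch trusts whatever string is stored under `context["language"]`, so the caller is responsible for putting a sensible language name there.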
main.py ADDED
@@ -0,0 +1,238 @@
+ # main.py
+ import asyncio
+ import traceback
+ from agents import Runner, RunContextWrapper
+ from agents.exceptions import InputGuardrailTripwireTriggered
+ from openai.types.responses import ResponseTextDeltaEvent
+ from chatbot.chatbot_agent import launchlabs_assistant
+
+ async def query_launchlabs_bot(user_message: str, stream: bool = True):
+     """
+     Query the Launchlabs bot with optional streaming (ChatGPT-style chunk-by-chunk output).
+
+     Args:
+         user_message: The user's message/query
+         stream: If True, stream the response chunk by chunk like ChatGPT.
+                 If False, wait for the complete response.
+
+     Returns:
+         The final output from the agent
+     """
+     try:
+         ctx = RunContextWrapper(context={})
+
+         if stream:
+             # Streaming mode: text appears chunk by chunk as it arrives
+             result = Runner.run_streamed(
+                 launchlabs_assistant,
+                 input=user_message,
+                 context=ctx.context
+             )
+
+             async for event in result.stream_events():
+                 if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
+                     delta = event.data.delta
+                     if delta:
+                         # Print each chunk immediately as it arrives
+                         print(delta, end="", flush=True)
+
+             print("\n")  # New line after streaming completes
+             return result.final_output
+         else:
+             # Non-streaming mode: wait for the complete response
+             response = await Runner.run(
+                 launchlabs_assistant,
+                 input=user_message,
+                 context=ctx.context
+             )
+             return response.final_output
+
+     except InputGuardrailTripwireTriggered as e:
+         print(f"\n⚠️ Guardrail blocked the query: {e}")
+         if hasattr(e, 'result') and hasattr(e.result, 'output_info'):
+             print(f"Guardrail reason: {e.result.output_info}")
+         print("The query was determined to be unrelated to Launchlabs services.")
+         return None
+     except Exception as e:
+         print(f"\n❌ Error: {e}")
+         print(traceback.format_exc())
+         raise
+
+ async def interactive_chat():
+     """
+     Interactive ChatGPT-style conversation loop.
+     Type 'exit', 'quit', or 'bye' to end the conversation.
+     """
+     print("=" * 60)
+     print("🤖 Launchlabs Assistant - ChatGPT-style Chat")
+     print("Type 'exit', 'quit', or 'bye' to end the conversation")
+     print("=" * 60)
+     print()
+
+     while True:
+         try:
+             user_message = input("👤 You: ").strip()
+
+             # Check for exit commands (an empty message also ends the chat)
+             if user_message.lower() in ['exit', 'quit', 'bye', '']:
+                 print("\n👋 Goodbye! Have a great day!")
+                 break
+
+             # Display the assistant prefix, then stream the response after it
+             print("🤖 Assistant: ", end="", flush=True)
+             await query_launchlabs_bot(user_message, stream=True)
+
+             print()  # Empty line between messages
+
+         except KeyboardInterrupt:
+             print("\n\n👋 Conversation interrupted. Goodbye!")
+             break
+         except Exception as e:
+             print(f"\n❌ Error: {e}")
+             print("Please try again or type 'exit' to quit.\n")
+
+ async def main():
+     try:
+         # Option 1: Single-message example (ChatGPT-style streaming)
+         user_message = "Hello, tell me about your services."
+
+         print(f"👤 You: {user_message}\n")
+         print("🤖 Assistant: ", end="", flush=True)
+         await query_launchlabs_bot(user_message, stream=True)
+
+         # Option 2: Uncomment below to use interactive chat mode instead
+         # await interactive_chat()
+
+     except Exception as e:
+         print(f"\n❌ Error: {e}")
+         print(traceback.format_exc())
+
+ if __name__ == "__main__":
+     try:
+         asyncio.run(main())
+     except Exception as e:
+         print(f"Fatal error: {e}")
+         print(traceback.format_exc())
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ fastapi
+ uvicorn
+ openai-agents
+ python-dotenv
+ slowapi
+ firebase-admin
+ python-docx
+ PyPDF2
+ resend
schema/__pycache__/chatbot_schema.cpython-312.pyc ADDED
Binary file (459 Bytes).
 
schema/chatbot_schema.py ADDED
@@ -0,0 +1,5 @@
+ from pydantic import BaseModel
+
+ class OutputType(BaseModel):
+     is_query_about_launchlabs: bool
+     reason: str
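
This pydantic model is the structured verdict for the guardrail that decides whether a query is about Launchlabs. A small usage sketch (the sample values below are invented for illustration):

```python
from pydantic import BaseModel

class OutputType(BaseModel):
    is_query_about_launchlabs: bool
    reason: str

# A guardrail verdict validated through the schema (sample values)
verdict = OutputType(
    is_query_about_launchlabs=False,
    reason="The question is about the weather, not Launchlabs services.",
)
print(verdict.is_query_about_launchlabs)  # False
```

Because the model declares both fields as required, a missing or mistyped field raises a `ValidationError` instead of silently producing a malformed verdict.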
serviceAccount.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "type": "service_account",
+   "project_id": "launchlab-b7060",
+   "private_key_id": "[REDACTED]",
+   "private_key": "-----BEGIN PRIVATE KEY-----\n[REDACTED]\n-----END PRIVATE KEY-----\n",
+   "client_email": "firebase-adminsdk-fbsvc@launchlab-b7060.iam.gserviceaccount.com",
+   "client_id": "[REDACTED]",
+   "auth_uri": "https://accounts.google.com/o/oauth2/auth",
+   "token_uri": "https://oauth2.googleapis.com/token",
+   "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
+   "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-fbsvc%40launchlab-b7060.iam.gserviceaccount.com",
+   "universe_domain": "googleapis.com"
+ }
sessions/__init__.py ADDED
@@ -0,0 +1 @@
+ """Sessions module for the Launchlabs chatbot."""
sessions/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (196 Bytes).
 
sessions/__pycache__/session_manager.cpython-312.pyc ADDED
Binary file (7.22 kB).
 
sessions/session_manager.py ADDED
@@ -0,0 +1,181 @@
1
+ """
2
+ Session Manager for Launchlabs Chatbot
3
+ Handles chat history persistence using Firebase Firestore
4
+ """
5
+
6
+ import uuid
7
+ import time
8
+ from datetime import datetime, timedelta
9
+ from typing import List, Dict, Optional, Any
10
+ from tools.firebase_config import db
11
+
12
+ class SessionManager:
13
+ """Manages chat sessions and history using Firebase Firestore"""
14
+
15
+ def __init__(self, collection_name: str = "chat_sessions"):
16
+ """
17
+ Initialize the session manager
18
+
19
+ Args:
20
+ collection_name: Name of the Firestore collection to store sessions
21
+ """
22
+ self.collection_name = collection_name
23
+ self.sessions_collection = db.collection(collection_name) if db else None
24
+
25
+ def create_session(self, user_id: Optional[str] = None) -> str:
26
+ """
27
+ Create a new chat session
28
+
29
+ Args:
30
+ user_id: Optional user identifier
31
+
32
+ Returns:
33
+ Session ID
34
+ """
35
+ if not self.sessions_collection:
36
+ return str(uuid.uuid4())
37
+
38
+ session_id = str(uuid.uuid4())
39
+ session_data = {
40
+ "session_id": session_id,
41
+ "user_id": user_id or "anonymous",
42
+ "created_at": datetime.utcnow(),
43
+ "last_active": datetime.utcnow(),
44
+ "history": [],
45
+ "expired": False
46
+ }
47
+
48
+ try:
49
+ self.sessions_collection.document(session_id).set(session_data)
50
+ return session_id
51
+ except Exception as e:
52
+ print(f"Warning: Failed to create session in Firestore: {e}")
53
+ return session_id
54
+
55
+ def get_session(self, session_id: str) -> Optional[Dict[str, Any]]:
56
+ """
57
+ Retrieve a session by ID
58
+
59
+ Args:
60
+ session_id: Session identifier
61
+
62
+ Returns:
63
+ Session data or None if not found
64
+ """
65
+ if not self.sessions_collection:
66
+ return None
67
+
68
+ try:
69
+ doc = self.sessions_collection.document(session_id).get()
70
+ if doc.exists:
71
+ session_data = doc.to_dict()
72
+ # Convert timestamp strings back to datetime objects
73
+ if "created_at" in session_data and isinstance(session_data["created_at"], str):
74
+ session_data["created_at"] = datetime.fromisoformat(session_data["created_at"].replace("Z", "+00:00"))
75
+ if "last_active" in session_data and isinstance(session_data["last_active"], str):
76
+ session_data["last_active"] = datetime.fromisoformat(session_data["last_active"].replace("Z", "+00:00"))
77
+ return session_data
78
+ return None
79
+ except Exception as e:
80
+ print(f"Warning: Failed to retrieve session from Firestore: {e}")
81
+ return None
82
+
83
+ def add_message_to_history(self, session_id: str, role: str, content: str) -> bool:
84
+ """
85
+ Add a message to the chat history
86
+
87
+ Args:
88
+             session_id: Session identifier
+             role: Role of the message sender (user/assistant)
+             content: Message content
+
+         Returns:
+             True if successful, False otherwise
+         """
+         if not self.sessions_collection:
+             return False
+
+         try:
+             # Get current session data
+             session_doc = self.sessions_collection.document(session_id)
+             session_data = session_doc.get().to_dict()
+
+             if not session_data:
+                 return False
+
+             # Add new message to history
+             message = {
+                 "role": role,
+                 "content": content,
+                 "timestamp": datetime.utcnow()
+             }
+
+             # Update session data
+             session_data["history"].append(message)
+             session_data["last_active"] = datetime.utcnow()
+
+             # Keep only the last 20 messages to prevent document bloat
+             if len(session_data["history"]) > 20:
+                 session_data["history"] = session_data["history"][-20:]
+
+             # Update in Firestore
+             session_doc.update({
+                 "history": session_data["history"],
+                 "last_active": session_data["last_active"]
+             })
+
+             return True
+         except Exception as e:
+             print(f"Warning: Failed to add message to session history: {e}")
+             return False
+
+     def get_session_history(self, session_id: str) -> List[Dict[str, str]]:
+         """
+         Get the chat history for a session
+
+         Args:
+             session_id: Session identifier
+
+         Returns:
+             List of message dictionaries
+         """
+         session_data = self.get_session(session_id)
+         if session_data and "history" in session_data:
+             # Return only role and content for each message
+             return [{"role": msg["role"], "content": msg["content"]}
+                     for msg in session_data["history"]]
+         return []
+
+     def cleanup_expired_sessions(self, expiry_hours: int = 24) -> int:
+         """
+         Clean up expired sessions
+
+         Args:
+             expiry_hours: Number of hours after which sessions expire
+
+         Returns:
+             Number of sessions cleaned up
+         """
+         if not self.sessions_collection:
+             return 0
+
+         try:
+             cutoff_time = datetime.utcnow() - timedelta(hours=expiry_hours)
+             expired_sessions = self.sessions_collection.where(
+                 "last_active", "<", cutoff_time
+             ).where("expired", "==", False).stream()
+
+             count = 0
+             for session in expired_sessions:
+                 self.sessions_collection.document(session.id).update({
+                     "expired": True
+                 })
+                 count += 1
+
+             return count
+         except Exception as e:
+             print(f"Warning: Failed to clean up expired sessions: {e}")
+             return 0
+
+
+ # Global session manager instance
+ session_manager = SessionManager()
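The history-trimming behaviour in `add_message` (keep only the last 20 messages to limit document size) can be sketched in isolation. This is a minimal in-memory stand-in, not the Firestore-backed manager; `MAX_HISTORY` and the plain-dict session are assumptions for illustration.

```python
# In-memory sketch of the trimming logic used by SessionManager.add_message.
# Firestore is replaced by a plain dict; MAX_HISTORY mirrors the hard-coded 20.
from datetime import datetime, timezone

MAX_HISTORY = 20

def add_message(session_data: dict, role: str, content: str) -> dict:
    """Append a message, then trim history to the last MAX_HISTORY entries."""
    message = {
        "role": role,
        "content": content,
        "timestamp": datetime.now(timezone.utc),
    }
    session_data.setdefault("history", []).append(message)
    if len(session_data["history"]) > MAX_HISTORY:
        session_data["history"] = session_data["history"][-MAX_HISTORY:]
    return session_data

session = {"history": []}
for i in range(25):
    add_message(session, "user", f"message {i}")
print(len(session["history"]))           # 20
print(session["history"][0]["content"])  # message 5
```

After 25 appends only messages 5 through 24 survive, which is the same bound the Firestore document sees.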
tools/README.md ADDED
@@ -0,0 +1,169 @@
+ # Document Reader Tools
+
+ This module provides function tools for the Innoscribe chatbot agent to read documents from local files (PDF, DOCX) and from Firebase Firestore.
+
+ ## Features
+
+ - **Read Local Documents**: Automatically reads `data.docx` and any PDF files from the root directory
+ - **Read Firestore Documents**: Reads documents from the `data` collection in Firebase Firestore
+ - **Auto Mode**: Tries local files first, then falls back to Firestore
+ - **List Available Documents**: Shows all available documents from both sources
+
+ ## Setup
+
+ ### 1. Install Dependencies
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ Required packages:
+ - `firebase-admin` - For Firebase Firestore integration
+ - `python-docx` - For reading DOCX files
+ - `PyPDF2` - For reading PDF files
+
+ ### 2. Firebase Configuration
+
+ Make sure your `serviceAccount.json` file is in the root directory of the project. This file is used to authenticate with Firebase.
+
+ ### 3. Document Storage
+
+ **Local Documents:**
+ - Place your `data.docx` file in the root directory
+ - Place any PDF files in the root directory
+
+ **Firestore Documents:**
+ - Upload documents to the `data` collection in Firebase Firestore
+ - Each document should have a `content`, `text`, or `data` field containing the text
+ - Optionally include a `name` field for identification
+
+ ## Usage
+
+ ### Basic Integration with Agent
+
+ ```python
+ from agents import Agent
+ from config.chabot_config import model
+ from instructions.chatbot_instructions import innscribe_dynamic_instructions
+ from tools.document_reader_tool import read_document_data, list_available_documents
+
+ # Create agent with document reading tools
+ innscribe_assistant = Agent(
+     name="Innoscribe Assistant",
+     instructions=innscribe_dynamic_instructions,
+     model=model,
+     tools=[read_document_data, list_available_documents]
+ )
+ ```
+
+ ### Tool Functions
+
+ #### `read_document_data(query: str, source: str = "auto")`
+
+ Reads and searches for information from documents.
+
+ **Parameters:**
+ - `query`: The search query or topic to look for
+ - `source`: Where to read from - `"local"`, `"firestore"`, or `"auto"` (default)
+
+ **Returns:** Formatted content from matching documents
+
+ **Example:**
+ ```python
+ result = read_document_data("product information", source="auto")
+ ```
+
+ #### `list_available_documents()`
+
+ Lists all available documents from both local storage and Firestore.
+
+ **Returns:** Formatted list of available documents
+
+ **Example:**
+ ```python
+ docs = list_available_documents()
+ print(docs)
+ ```
+
+ ## How It Works
+
+ ### Automatic Fallback Strategy
+
+ 1. **Auto Mode (default)**:
+    - First tries to read from local files (`data.docx`, `*.pdf`)
+    - Falls back to Firebase Firestore only if the local files return no data
+
+ 2. **Local Mode**:
+    - Only reads from local files
+
+ 3. **Firestore Mode**:
+    - Only reads from Firebase Firestore
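The fallback order above can be sketched as a small self-contained function. The stub readers below stand in for the real local/Firestore helpers and are assumptions for illustration; the control flow mirrors the `source == "auto"` branch of `read_document_data`.

```python
# Sketch of the "auto" fallback: local files first, Firestore only if local is empty.
# _read_local and _read_firestore are hypothetical stand-ins for the real helpers.
from typing import Optional

def _read_local(query: str) -> Optional[str]:
    return None  # pretend no local documents matched

def _read_firestore(query: str) -> Optional[str]:
    return f"firestore hit for {query!r}"

def read_auto(query: str) -> str:
    results = []
    local = _read_local(query)
    if local:
        results.append(f"=== Local Documents ===\n{local}")
    if not results:  # fall back only when local files found nothing
        remote = _read_firestore(query)
        if remote:
            results.append(f"=== Firestore Documents ===\n{remote}")
    if results:
        return "\n\n".join(results)
    return f"No relevant information found for query: {query!r}."

print(read_auto("pricing"))
```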
+ ### Agent Behavior
+
+ When a user asks a question requiring document data, the agent will:
+
+ 1. Detect that document information is needed
+ 2. Automatically call `read_document_data()` with the relevant query
+ 3. Search through local files and/or Firestore
+ 4. Return the relevant information to answer the user's question
+
+ ## Example User Interactions
+
+ **User:** "What information do you have about our company?"
+ - Agent calls: `read_document_data("company information")`
+ - Returns relevant content from documents
+
+ **User:** "List all available documents"
+ - Agent calls: `list_available_documents()`
+ - Returns formatted list of all documents
+
+ **User:** "Tell me about product pricing"
+ - Agent calls: `read_document_data("product pricing")`
+ - Returns pricing information from documents
+
+ ## Firestore Collection Structure
+
+ Your Firestore `data` collection should have documents structured like:
+
+ ```json
+ {
+     "name": "Product Catalog",
+     "content": "This is the product information...",
+     "type": "product",
+     "created_at": "2024-01-01"
+ }
+ ```
+
+ Or simply:
+
+ ```json
+ {
+     "text": "Document content here..."
+ }
+ ```
+
+ The tool will look for `content`, `text`, or `data` fields to extract the document text.
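The field lookup described above can be sketched as a one-line fallback chain; this is a minimal illustration of the extraction order, not the full Firestore reader.

```python
# First non-empty of "content", "text", "data" wins, matching the
# fallback chain in _read_firestore_documents.
from typing import Optional

def extract_text(doc_data: dict) -> Optional[str]:
    """Return the document text from the first populated field, if any."""
    return doc_data.get("content") or doc_data.get("text") or doc_data.get("data")

print(extract_text({"text": "Document content here..."}))  # Document content here...
print(extract_text({"name": "empty doc"}))                 # None
```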
+
149
+ ## Testing
150
+
151
+ Run the example usage file to test the tools:
152
+
153
+ ```bash
154
+ python tools/example_usage.py
155
+ ```
156
+
157
+ ## Troubleshooting
158
+
159
+ **Firebase not initializing:**
160
+ - Check that `serviceAccount.json` exists in the root directory
161
+ - Verify the service account has Firestore permissions
162
+
163
+ **Documents not found:**
164
+ - Verify `data.docx` or PDF files exist in the root directory
165
+ - Check Firestore collection is named `data`
166
+ - Ensure documents have `content`, `text`, or `data` fields
167
+
168
+ **Import errors:**
169
+ - Make sure all dependencies are installed: `pip install -r requirements.txt`
tools/__init__.py ADDED
@@ -0,0 +1 @@
+ """Tools module for the Innoscribe chatbot agent."""
tools/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (216 Bytes)
tools/__pycache__/document_reader_tool.cpython-312.pyc ADDED
Binary file (10.5 kB)
tools/__pycache__/firebase_config.cpython-312.pyc ADDED
Binary file (1.22 kB)
tools/document_reader_tool.py ADDED
@@ -0,0 +1,212 @@
+ import os
+ import io
+ import requests
+ import logging
+ from typing import Optional
+ from agents import function_tool
+ from docx import Document
+ import PyPDF2
+ from .firebase_config import db
+
+ # Set up logging
+ logger = logging.getLogger(__name__)
+
+
+ @function_tool
+ def read_document_data(query: str, source: str = "auto") -> str:
+     """
+     Read and search for information from documents stored locally or in Firebase Firestore.
+
+     Args:
+         query: The search query or topic to look for in the documents
+         source: Data source - "local" for local files, "firestore" for Firebase, or "auto" to try both
+
+     Returns:
+         The relevant content from the document(s) matching the query
+     """
+     logger.info(f"TOOL CALL: read_document_data called with query='{query}', source='{source}'")
+
+     result = []
+
+     # Try local files first if source is "local" or "auto"
+     if source in ["local", "auto"]:
+         local_content = _read_local_documents(query)
+         if local_content:
+             result.append(f"=== Local Documents ===\n{local_content}")
+
+     # Try Firestore if source is "firestore" or "auto" (and local didn't return results)
+     if source in ["firestore", "auto"] and (not result or source == "firestore"):
+         firestore_content = _read_firestore_documents(query)
+         if firestore_content:
+             result.append(f"=== Firestore Documents ===\n{firestore_content}")
+
+     if result:
+         response = "\n\n".join(result)
+         logger.info(f"TOOL RESULT: read_document_data found {len(result)} result(s)")
+         return response
+     else:
+         response = f"No relevant information found for query: '{query}'. Please check if documents are available."
+         logger.info(f"TOOL RESULT: read_document_data found no results for query='{query}'")
+         return response
+
+
+ def _read_local_documents(query: str) -> Optional[str]:
+     """Read from local PDF and DOCX files in the root directory."""
+     root_dir = os.path.dirname(os.path.dirname(__file__))
+     content_parts = []
+
+     # Try to read DOCX file
+     docx_path = os.path.join(root_dir, "data.docx")
+     if os.path.exists(docx_path):
+         try:
+             doc = Document(docx_path)
+             full_text = []
+             for paragraph in doc.paragraphs:
+                 if paragraph.text.strip():
+                     full_text.append(paragraph.text)
+
+             docx_content = "\n".join(full_text)
+             if docx_content:
+                 content_parts.append(f"[From data.docx]\n{docx_content}")
+         except Exception as e:
+             content_parts.append(f"Error reading data.docx: {str(e)}")
+
+     # Try to read PDF files
+     for file in os.listdir(root_dir):
+         if file.endswith(".pdf"):
+             pdf_path = os.path.join(root_dir, file)
+             try:
+                 with open(pdf_path, "rb") as pdf_file:
+                     pdf_reader = PyPDF2.PdfReader(pdf_file)
+                     pdf_text = []
+                     for page in pdf_reader.pages:
+                         text = page.extract_text()
+                         if text.strip():
+                             pdf_text.append(text)
+
+                 if pdf_text:
+                     content_parts.append(f"[From {file}]\n" + "\n".join(pdf_text))
+             except Exception as e:
+                 content_parts.append(f"Error reading {file}: {str(e)}")
+
+     return "\n\n".join(content_parts) if content_parts else None
+
+
+ def _read_firestore_documents(query: str) -> Optional[str]:
+     """Read documents from Firebase Firestore 'data' collection."""
+     if not db:
+         return "Firebase Firestore is not initialized. Please check your serviceAccount.json file."
+
+     try:
+         # Query the 'data' collection
+         docs_ref = db.collection("data")
+         docs = docs_ref.stream()
+
+         content_parts = []
+         for doc in docs:
+             doc_data = doc.to_dict()
+
+             # Check if document field contains a URL to a file
+             document_url = doc_data.get("document")
+
+             if document_url:
+                 # Download and read the document from URL
+                 try:
+                     doc_name = doc_data.get("name", doc.id)
+                     content = _read_document_from_url(document_url, doc_name)
+                     if content:
+                         content_parts.append(f"[From Firestore: {doc_name}]\n{content}")
+                 except Exception as e:
+                     content_parts.append(f"[Error reading {doc.id}]: {str(e)}")
+             else:
+                 # Fallback: Try to extract content from different possible field names
+                 doc_content = (
+                     doc_data.get("content") or
+                     doc_data.get("text") or
+                     doc_data.get("data")
+                 )
+
+                 if doc_content:
+                     doc_name = doc_data.get("name", doc.id)
+                     content_parts.append(f"[From Firestore: {doc_name}]\n{doc_content}")
+
+         return "\n\n".join(content_parts) if content_parts else None
+
+     except Exception as e:
+         return f"Error reading from Firestore: {str(e)}"
+
+
+ def _read_document_from_url(url: str, doc_name: str) -> Optional[str]:
+     """Download and read a document (DOCX or PDF) from a URL."""
+     try:
+         # Download the file from URL
+         response = requests.get(url, timeout=30)
+         response.raise_for_status()
+
+         # Determine file type from URL
+         if url.lower().endswith('.docx') or 'docx' in url.lower():
+             # Read DOCX from bytes
+             doc = Document(io.BytesIO(response.content))
+             full_text = []
+             for paragraph in doc.paragraphs:
+                 if paragraph.text.strip():
+                     full_text.append(paragraph.text)
+             return "\n".join(full_text)
+
+         elif url.lower().endswith('.pdf') or 'pdf' in url.lower():
+             # Read PDF from bytes
+             pdf_reader = PyPDF2.PdfReader(io.BytesIO(response.content))
+             pdf_text = []
+             for page in pdf_reader.pages:
+                 text = page.extract_text()
+                 if text.strip():
+                     pdf_text.append(text)
+             return "\n".join(pdf_text)
+
+         else:
+             return f"Unsupported file type for URL: {url}"
+
+     except Exception as e:
+         raise Exception(f"Failed to download/read document from {url}: {str(e)}")
+
+
+ @function_tool
+ def list_available_documents() -> str:
+     """
+     List all available documents from both local storage and Firestore.
+
+     Returns:
+         A formatted list of available documents from all sources
+     """
+     logger.info("TOOL CALL: list_available_documents called")
+
+     result = []
+
+     # List local documents
+     root_dir = os.path.dirname(os.path.dirname(__file__))
+     local_docs = []
+
+     if os.path.exists(os.path.join(root_dir, "data.docx")):
+         local_docs.append("- data.docx")
+
+     for file in os.listdir(root_dir):
+         if file.endswith(".pdf"):
+             local_docs.append(f"- {file}")
+
+     if local_docs:
+         result.append("=== Local Documents ===\n" + "\n".join(local_docs))
+
+     # List Firestore documents
+     if db:
+         try:
+             docs_ref = db.collection("data")
+             docs = docs_ref.stream()
+             firestore_docs = [f"- {doc.id}" for doc in docs]
+
+             if firestore_docs:
+                 result.append("=== Firestore Documents ===\n" + "\n".join(firestore_docs))
+         except Exception as e:
+             result.append(f"Error listing Firestore documents: {str(e)}")
+
+     response = "\n\n".join(result) if result else "No documents found in any source."
+     logger.info(f"TOOL RESULT: list_available_documents found {len(result)} source(s) with documents")
+     return response
tools/example_usage.py ADDED
@@ -0,0 +1,53 @@
+ """
+ Example usage of the document reader tools with the agent.
+
+ This file demonstrates how to integrate the document reading tools
+ with your Innoscribe chatbot agent.
+ """
+
+ from agents import Agent
+ from config.chabot_config import model
+ from instructions.chatbot_instructions import innscribe_dynamic_instructions
+ from guardrails.guardrails_input_function import guardrail_input_function
+ from tools.document_reader_tool import read_document_data, list_available_documents
+
+
+ # Example 1: Agent with document reading capabilities
+ innscribe_assistant_with_docs = Agent(
+     name="Innoscribe Assistant with Document Access",
+     instructions=innscribe_dynamic_instructions,
+     model=model,
+     input_guardrails=[guardrail_input_function],
+     tools=[read_document_data, list_available_documents]  # Add the document tools here
+ )
+
+
+ # Example 2: How the agent will use the tools
+ """
+ When a user asks a question that requires information from documents:
+
+ User: "What information do you have about our products?"
+
+ The agent will automatically:
+ 1. Try to read from local data.docx and any PDF files first
+ 2. If not found or insufficient, try to read from Firebase Firestore
+ 3. Return the relevant information
+
+ User: "List all available documents"
+ The agent will use list_available_documents() to show all docs
+ """
+
+
+ # Example 3: Manual tool usage (for testing)
+ if __name__ == "__main__":
+     # Test reading documents
+     print("Testing document reader tool...")
+     result = read_document_data("company information", source="auto")
+     print(result)
+
+     print("\n" + "=" * 50 + "\n")
+
+     # Test listing documents
+     print("Testing list documents tool...")
+     docs = list_available_documents()
+     print(docs)
tools/firebase_config.py ADDED
@@ -0,0 +1,27 @@
+ import os
+ import firebase_admin
+ from firebase_admin import credentials, firestore
+
+
+ # Initialize Firebase Admin SDK
+ def initialize_firebase():
+     """Initialize Firebase Admin SDK with service account credentials."""
+     # Get the path to serviceAccount.json in the root directory
+     service_account_path = os.path.join(
+         os.path.dirname(os.path.dirname(__file__)),
+         "serviceAccount.json"
+     )
+
+     # Check if Firebase is already initialized
+     if not firebase_admin._apps:
+         cred = credentials.Certificate(service_account_path)
+         firebase_admin.initialize_app(cred)
+
+     # Return Firestore client
+     return firestore.client()
+
+
+ # Create a global Firestore client instance
+ try:
+     db = initialize_firebase()
+ except Exception as e:
+     print(f"Warning: Failed to initialize Firebase: {e}")
+     db = None