Commit 782bbd9 by sliitguy: "updated for deployment"
.github/workflows/main.yaml ADDED
```yaml
name: Sync to Hugging Face Space

on:
  push:
    branches:
      - main
      - master
  workflow_dispatch:

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
          lfs: true

      - name: Setup Git LFS
        run: |
          git lfs install
          git lfs pull
          git lfs checkout

      - name: Configure Git
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"

      - name: Push to Hugging Face Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          HF_USERNAME: nivakaran
          HF_SPACE: sparrowagenticai
        run: |
          git branch -M main
          # A bare `if [ $? -ne 0 ]` after the push would never run: run steps
          # execute under `bash -e`, which aborts on the first failing command.
          # Catch the failure with `||` instead.
          git push "https://$HF_USERNAME:$HF_TOKEN@huggingface.co/spaces/$HF_USERNAME/$HF_SPACE" main --force || {
            echo "Push failed - check token and permissions"
            exit 1
          }

      - name: Verify Sync
        if: success()
        run: echo "Successfully synced to Hugging Face Space!"

      - name: Sync Failed
        if: failure()
        run: echo "Failed to sync to Hugging Face Space. Check logs above."
```
.gitignore ADDED
```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

.env
```
.python-version ADDED
```
3.11
```
Dockerfile ADDED
```dockerfile
# Read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
FROM python:3.11

# Create non-root user
RUN useradd -m -u 1000 user
USER user

# Set environment variables
ENV PATH="/home/user/.local/bin:$PATH"
ENV PYTHONUNBUFFERED=1

# Set working directory
WORKDIR /app

# Copy requirements first for better caching
COPY --chown=user ./requirements.txt requirements.txt

# Install dependencies
RUN pip install --no-cache-dir --upgrade -r requirements.txt

# Copy application code
COPY --chown=user . /app

# Expose port 7860 (required by Hugging Face Spaces)
EXPOSE 7860

# Run the application. app.py is a Flask (WSGI) app, so run it directly;
# uvicorn expects an ASGI app and would fail to serve it without an adapter.
CMD ["python", "app.py"]
```
README.md ADDED
# Sparrow Agent Chatbot - Parcel Consolidation And Tracking Agentic AI System

## Overview
Welcome to Sparrow, an agentic AI chatbot engineered with LangGraph and powered by Groq's high-performance LLM inference. Built for a parcel consolidation and tracking platform, Sparrow delivers seamless user assistance through natural language processing and dynamic workflow orchestration, with an emphasis on modularity, efficiency, and extensibility.

## Key Features
- **Clarify Agent**: Analyzes and refines user queries into precise, summarized inputs for downstream agents.
- **Worker Agent**: Executes tasks with a toolkit that includes parcel tracking, user information retrieval, ETA estimation, and a "think" capability for complex reasoning (designed so additional tools can be added later).
- **Master Agent**: Dynamically orchestrates parallel worker tasks and synthesizes their outputs into cohesive responses.
- **Final Graph**: Integrates the entire workflow into a streamlined, end-to-end solution.

## Technical Highlights
- **Framework & Libraries**: Built with Python, Flask, LangChain, and Groq's LLM API.
- **Graph Technology**: Leverages LangGraph for a modular, graph-based agent architecture.
- **Asynchronous Processing**: Uses async programming for high-efficiency task handling.
- **Code Quality**: Clean, documented codebase with a modular file structure (states, nodes, graph builders).
- **Scalability**: Designed to handle increasing load through dynamic worker allocation.

## System Architecture
The Sparrow system features a sophisticated workflow, visualized through the following diagrams:

![Workflow 1 - Clarify User Query Flow](./assets/Screenshot%202025-08-26%20034929.png)
![Workflow 3 - Worker Agent Subgraph Flow](./assets/Screenshot%202025-08-26%20033447.png)
![Workflow 4 - Master Agent Subgraph Flow](./assets/Screenshot%202025-08-26%20033458.png)
![Workflow 5 - Final Graph Flow](./assets/Screenshot%202025-08-26%20033424.png)

These graphs illustrate the journey from query intake to final response, showcasing agent-tool interactions and parallel processing.

## Project Structure
The repository is organized for clarity and scalability:

```
sparrow-agent/
├── src/
│   ├── graphs/
│   │   ├── __init__.py
│   │   ├── queryGraph.py        # Query clarification logic
│   │   ├── actionGraph.py       # Worker task execution with tools
│   │   ├── masterGraph.py       # Orchestration and output synthesis
│   │   ├── finalAgentGraph.py   # End-to-end workflow integration
│   ├── states/
│   │   ├── __init__.py
│   │   ├── conversationState.py # Manages conversation thread states
│   │   ├── taskState.py         # Tracks task execution states
│   ├── llms/
│   │   ├── __init__.py
│   │   ├── groqllm.py           # Groq LLM wrapper and configuration
│   ├── nodes/
│   │   ├── __init__.py
│   │   ├── actionNode.py        # Tool execution node
│   │   ├── masterNode.py        # Response compression node
│   │   ├── queryNode.py
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── actionState.py
│   │   ├── masterState.py
│   │   ├── queryState.py
├── app.py                       # Main Flask application entry point
├── requirements.txt             # Dependency list
├── templates/
│   ├── index.html               # Chat interface template
├── README.md                    # This file
```

## Setup & Installation
1. **Clone the repository**: `git clone https://github.com/Nivakaran-S/sparrow-agent.git`
2. **Install dependencies**: `pip install -r requirements.txt` (includes Flask, LangChain, LangGraph, Gradio, etc.)
3. **Configure environment**: Set `export FLASK_SECRET_KEY='your-secret-key'` and your Groq API key.
4. **Run the application**: `python app.py` (defaults to port 7860, the port Hugging Face Spaces expects; adjustable via the `PORT` env variable).

## Usage
- **Chat Interface**: Access at `http://localhost:7860`.
- **API Endpoints**:
  - `/chat` (POST): Send messages with JSON `{ "message": "your query" }`.
  - `/new_conversation` (POST): Reset to a new thread.
  - `/health` (GET): Check server status.
- **Interaction**: Real-time responses powered by the Groq LLM and agent workflows.

## Contributions
- **Enhancements**: Add new tools to the worker agent (`actionGraph.py`) or optimize graph logic.
- **Performance**: Improve async handling or worker scalability.
- **UI/UX**: Upgrade `index.html`.

## Acknowledgements
- Powered by Groq for high-speed LLM inference.
- Built with LangGraph for advanced agent orchestration.
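For a quick smoke test of the `/chat` endpoint, a minimal stdlib client can be sketched as follows. The helper name `ask_sparrow` and the default port 7860 are illustrative assumptions; point `base_url` at wherever the app is actually running.

```python
import json
from urllib import request


def ask_sparrow(message, base_url="http://localhost:7860"):
    """POST a message to the /chat endpoint and return the parsed JSON reply."""
    req = request.Request(
        f"{base_url}/chat",
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

A successful reply carries `success`, `response`, `status`, and `thread_id` fields, matching what the Flask handler returns.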
app.py ADDED
```python
from flask import Flask, request, jsonify, session
import uuid
import logging
from datetime import datetime
import os
import threading
import time
from src.graphs.finalAgentGraph import sparrowAgent
from langchain_core.messages import HumanMessage, AIMessage

app = Flask(__name__)
app.secret_key = os.environ.get('FLASK_SECRET_KEY', 'your-secret-key-here')

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

conversations = {}
conversations_lock = threading.Lock()


def ensure_langchain_message(message):
    """Ensure a message is a proper LangChain message object."""
    if isinstance(message, (HumanMessage, AIMessage)):
        return message
    elif isinstance(message, dict):
        content = message.get('content', str(message))
        message_type = message.get('type', 'ai')
        if message_type == 'human':
            return HumanMessage(content=content)
        else:
            return AIMessage(content=content)
    elif isinstance(message, str):
        return AIMessage(content=message)
    else:
        return AIMessage(content=str(message))


def clean_messages_list(messages):
    """Ensure all messages are valid LangChain message objects."""
    return [ensure_langchain_message(msg) for msg in messages]


@app.route('/')
def index():
    """Serve main chat interface."""
    return "<h1>Sparrow Agent API</h1><p>POST to <code>/chat</code> to talk with the agent.</p>"


@app.route('/chat', methods=['POST'])
def chat():
    """Handle chat messages."""
    try:
        data = request.get_json(force=True)
        user_message = data.get('message', '').strip()

        if not user_message:
            return jsonify({'success': False, 'error': 'Empty message'})

        thread_id = session.get('thread_id')
        if not thread_id:
            thread_id = str(uuid.uuid4())
            session['thread_id'] = thread_id

        with conversations_lock:
            if thread_id not in conversations:
                conversations[thread_id] = {
                    'messages': [],
                    'notes': [],
                    'query_brief': '',
                    'final_message': '',
                    'created_at': datetime.now(),
                    'last_updated': datetime.now()
                }
            conversation = conversations[thread_id]

        human_message = HumanMessage(content=user_message)
        conversation['messages'].append(human_message)
        conversation['last_updated'] = datetime.now()

        cleaned_messages = clean_messages_list(conversation['messages'])

        sparrow_input = {
            'messages': cleaned_messages,
            'notes': conversation.get('notes', []),
            'query_brief': conversation.get('query_brief', ''),
            'final_message': conversation.get('final_message', '')
        }

        logger.info(f"[{thread_id}] Processing message: {user_message[:100]}")

        result = sparrowAgent.invoke(sparrow_input)

        response_message = ""
        ai_message = None

        if result.get('final_message'):
            response_message = result['final_message']
            ai_message = AIMessage(content=response_message)
        else:
            result_messages = clean_messages_list(result.get('messages', []))
            last_user_index = max(
                (i for i, msg in enumerate(result_messages) if isinstance(msg, HumanMessage)),
                default=-1
            )
            for i in range(last_user_index + 1, len(result_messages)):
                msg = result_messages[i]
                if isinstance(msg, AIMessage) and msg.content.strip():
                    response_message = msg.content
                    ai_message = msg
                    break

        if not response_message:
            response_message = "I'm processing your request. Could you provide more details?"
            ai_message = AIMessage(content=response_message)

        status_info = ""
        if result.get('execution_jobs'):
            status_info = f"Executed: {', '.join(result['execution_jobs'])}"
        elif result.get('notes') and isinstance(result['notes'], list) and result['notes']:
            status_info = str(result['notes'][-1])

        with conversations_lock:
            if result.get('messages'):
                conversation['messages'] = clean_messages_list(result['messages'])
            else:
                conversation['messages'].append(ai_message)

            # Deduplicate by (message type, content), keeping first occurrences
            seen = set()
            unique_messages = []
            for msg in conversation['messages']:
                key = (type(msg).__name__, getattr(msg, "content", str(msg)))
                if key not in seen:
                    seen.add(key)
                    unique_messages.append(msg)
            conversation['messages'] = unique_messages

            conversation['notes'] = result.get('notes', conversation['notes'])
            conversation['query_brief'] = result.get('query_brief', conversation['query_brief'])
            conversation['final_message'] = result.get('final_message', conversation['final_message'])
            conversation['last_updated'] = datetime.now()

        logger.info(f"[{thread_id}] Response generated: {response_message[:100]}")

        return jsonify({
            'success': True,
            'response': response_message,
            'status': status_info,
            'thread_id': thread_id
        })

    except Exception as e:
        logger.exception("Error in /chat")
        return jsonify({'success': False, 'error': str(e)})


@app.route('/health')
def health():
    """Health check endpoint."""
    with conversations_lock:
        active = len(conversations)
    return jsonify({'status': 'healthy', 'active_conversations': active, 'timestamp': datetime.now().isoformat()})


def cleanup_conversations():
    """Remove conversations older than 24 hours."""
    while True:
        time.sleep(3600)
        cutoff = datetime.now().timestamp() - 24 * 3600
        with conversations_lock:
            old = [tid for tid, conv in conversations.items() if conv['last_updated'].timestamp() < cutoff]
            for tid in old:
                del conversations[tid]
        if old:
            logger.info(f"Cleaned up {len(old)} old conversations")


if __name__ == '__main__':
    cleanup_thread = threading.Thread(target=cleanup_conversations, daemon=True)
    cleanup_thread.start()

    port = int(os.environ.get("PORT", 7860))  # Hugging Face Spaces expects 7860
    logger.info(f"Starting Sparrow Agent Flask app on port {port}")
    app.run(host="0.0.0.0", port=port)
```
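The `/chat` handler deduplicates the stored history by keying each message on its type and content, keeping only the first occurrence while preserving order. A minimal standalone sketch of that pass, with `(role, text)` tuples standing in for LangChain message objects:

```python
def dedupe_messages(messages):
    """Keep the first occurrence of each (role, content) pair, preserving order."""
    seen = set()
    unique = []
    for role, content in messages:
        key = (role, content)
        if key not in seen:
            seen.add(key)
            unique.append((role, content))
    return unique


history = [("human", "hi"), ("ai", "hello"), ("ai", "hello"), ("human", "hi")]
print(dedupe_messages(history))  # [('human', 'hi'), ('ai', 'hello')]
```

One consequence of this global keying: if a user later repeats an identical message, it is dropped from the stored history.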
app2.py ADDED
```python
import gradio as gr
import uuid
import logging
from datetime import datetime
import os

from src.graphs.finalAgentGraph import sparrowAgent
from langchain_core.messages import HumanMessage, AIMessage

# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def ensure_langchain_message(message):
    """Ensure a message is a proper LangChain message object."""
    if isinstance(message, (HumanMessage, AIMessage)):
        return message
    elif isinstance(message, dict):
        content = message.get('content', str(message))
        message_type = message.get('type', 'ai')
        if message_type == 'human':
            return HumanMessage(content=content)
        else:
            return AIMessage(content=content)
    elif isinstance(message, str):
        return AIMessage(content=message)
    else:
        return AIMessage(content=str(message))


def clean_messages_list(messages):
    """Clean and ensure all messages in the list are proper LangChain message objects."""
    cleaned_messages = []
    for msg in messages:
        cleaned_msg = ensure_langchain_message(msg)
        cleaned_messages.append(cleaned_msg)
    return cleaned_messages


def initialize_conversation():
    """Initialize a new conversation state."""
    return {
        'thread_id': str(uuid.uuid4()),
        'messages': [],
        'notes': [],
        'query_brief': '',
        'final_message': '',
        'created_at': datetime.now(),
        'last_updated': datetime.now()
    }


def process_message(user_message, history, conversation_state):
    """
    Process a user message and return the response.

    Args:
        user_message: The user's input message
        history: Gradio chat history (list of [user_msg, bot_msg] pairs)
        conversation_state: Dictionary containing conversation context

    Returns:
        Tuple of (empty string, updated history, updated conversation state, status message)
    """
    try:
        if not user_message or not user_message.strip():
            return "", history, conversation_state, "Please enter a message"

        # Initialize conversation state if None
        if conversation_state is None:
            conversation_state = initialize_conversation()

        thread_id = conversation_state['thread_id']

        # Add user message to conversation
        human_message = HumanMessage(content=user_message)
        conversation_state['messages'].append(human_message)
        conversation_state['last_updated'] = datetime.now()

        # Clean messages
        cleaned_messages = clean_messages_list(conversation_state['messages'])

        # Prepare input for the sparrow agent
        sparrow_input = {
            'messages': cleaned_messages,
            'notes': conversation_state.get('notes', []),
            'query_brief': conversation_state.get('query_brief', ''),
            'final_message': conversation_state.get('final_message', '')
        }

        logger.info(f"[{thread_id}] Processing message: {user_message[:100]}")
        logger.info(f"[{thread_id}] Input messages count: {len(cleaned_messages)}")

        # Invoke the sparrow agent
        result = sparrowAgent.invoke(sparrow_input)

        # Extract response message
        response_message = ""
        ai_message = None

        if result.get('final_message'):
            response_message = result['final_message']
            ai_message = AIMessage(content=response_message)
        else:
            result_messages = clean_messages_list(result.get('messages', []))

            # Find last user message index
            last_user_index = -1
            for i, msg in enumerate(result_messages):
                if isinstance(msg, HumanMessage):
                    last_user_index = i

            # Get first AI message after the last user message
            for i in range(last_user_index + 1, len(result_messages)):
                msg = result_messages[i]
                if isinstance(msg, AIMessage) and msg.content and msg.content.strip():
                    response_message = msg.content
                    ai_message = msg
                    break

        if not response_message:
            response_message = "I'm processing your request. Could you provide more details?"
            ai_message = AIMessage(content=response_message)

        # Update conversation state
        if result.get('messages'):
            conversation_state['messages'] = clean_messages_list(result['messages'])
        else:
            conversation_state['messages'].append(ai_message)

        # Remove consecutive duplicates
        cleaned_conversation_messages = []
        prev_content = None
        prev_type = None

        for msg in conversation_state['messages']:
            current_content = msg.content if hasattr(msg, 'content') else str(msg)
            current_type = type(msg).__name__

            if current_content != prev_content or current_type != prev_type:
                cleaned_conversation_messages.append(msg)
            prev_content = current_content
            prev_type = current_type

        conversation_state['messages'] = cleaned_conversation_messages
        conversation_state['notes'] = result.get('notes', conversation_state.get('notes', []))
        conversation_state['query_brief'] = result.get('query_brief', conversation_state.get('query_brief', ''))
        conversation_state['final_message'] = result.get('final_message', conversation_state.get('final_message', ''))
        conversation_state['last_updated'] = datetime.now()

        # Update Gradio chat history
        history.append([user_message, response_message])

        # Create status message
        status_info = f"Thread: {thread_id[:8]}... | Messages: {len(conversation_state['messages'])}"
        if result.get('execution_jobs'):
            status_info += f" | Executed: {', '.join(result['execution_jobs'])}"
        elif result.get('notes') and isinstance(result['notes'], list) and result['notes']:
            status_info += f" | Note: {str(result['notes'][-1])[:50]}"

        logger.info(f"[{thread_id}] Response generated: {response_message[:100]}")
        logger.info(f"[{thread_id}] Final messages count: {len(conversation_state['messages'])}")

        return "", history, conversation_state, status_info

    except Exception as e:
        logger.error(f"Error processing message: {str(e)}", exc_info=True)
        error_msg = f"An error occurred: {str(e)}"
        history.append([user_message, error_msg])
        return "", history, conversation_state, f"Error: {str(e)}"


def clear_conversation():
    """Clear the conversation and start fresh."""
    new_state = initialize_conversation()
    logger.info(f"[{new_state['thread_id']}] New conversation started")
    return [], new_state, f"New conversation started (ID: {new_state['thread_id'][:8]}...)"


def get_conversation_info(conversation_state):
    """Get current conversation information."""
    if conversation_state is None:
        return "No active conversation"

    info_lines = [
        f"**Thread ID:** {conversation_state['thread_id']}",
        f"**Messages:** {len(conversation_state.get('messages', []))}",
        f"**Notes:** {len(conversation_state.get('notes', []))}",
        f"**Has Query Brief:** {bool(conversation_state.get('query_brief'))}",
        f"**Has Final Message:** {bool(conversation_state.get('final_message'))}",
        f"**Created:** {conversation_state.get('created_at', 'N/A')}",
        f"**Last Updated:** {conversation_state.get('last_updated', 'N/A')}"
    ]

    return "\n\n".join(info_lines)


# Create the Gradio interface
with gr.Blocks(title="Sparrow Agent Chat", theme=gr.themes.Soft()) as demo:
    gr.Markdown("# 🦜 Sparrow Agent Chat")
    gr.Markdown("Interact with the Sparrow AI Agent. Ask questions and get intelligent responses!")

    # State to store conversation context
    conversation_state = gr.State(initialize_conversation())

    with gr.Row():
        with gr.Column(scale=4):
            chatbot = gr.Chatbot(
                label="Conversation",
                height=500,
                show_copy_button=True
            )

            with gr.Row():
                msg = gr.Textbox(
                    label="Your Message",
                    placeholder="Type your message here...",
                    lines=2,
                    scale=4
                )
                submit_btn = gr.Button("Send", variant="primary", scale=1)

            with gr.Row():
                clear_btn = gr.Button("New Conversation", variant="secondary")
                status_box = gr.Textbox(
                    label="Status",
                    interactive=False,
                    lines=1
                )

        with gr.Column(scale=1):
            gr.Markdown("### Debug Info")
            info_btn = gr.Button("Show Conversation Info")
            info_display = gr.Markdown("Click button to show info")

    # Event handlers
    submit_btn.click(
        fn=process_message,
        inputs=[msg, chatbot, conversation_state],
        outputs=[msg, chatbot, conversation_state, status_box]
    )

    msg.submit(
        fn=process_message,
        inputs=[msg, chatbot, conversation_state],
        outputs=[msg, chatbot, conversation_state, status_box]
    )

    clear_btn.click(
        fn=clear_conversation,
        inputs=[],
        outputs=[chatbot, conversation_state, status_box]
    )

    info_btn.click(
        fn=get_conversation_info,
        inputs=[conversation_state],
        outputs=[info_display]
    )

    # Initialize status on load
    demo.load(
        fn=lambda state: f"Ready | Thread: {state['thread_id'][:8]}...",
        inputs=[conversation_state],
        outputs=[status_box]
    )


# Launch the app
if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=int(os.environ.get('PORT', 7860)),
        share=False
    )
```
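Unlike `app.py`, which deduplicates globally, `process_message` in `app2.py` drops only *consecutive* repeats, so a user can legitimately send the same message again later in the conversation. A standalone sketch of that filter, with plain strings standing in for message objects:

```python
def drop_consecutive_duplicates(messages):
    """Remove repeats only when they are adjacent, preserving later reoccurrences."""
    cleaned = []
    prev = None
    for msg in messages:
        if msg != prev:
            cleaned.append(msg)
        prev = msg
    return cleaned


print(drop_consecutive_duplicates(["hi", "hi", "ok", "hi"]))  # ['hi', 'ok', 'hi']
```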
assets/Screenshot 2025-08-26 033424.png ADDED
assets/Screenshot 2025-08-26 033447.png ADDED
assets/Screenshot 2025-08-26 033458.png ADDED
assets/Screenshot 2025-08-26 034929.png ADDED
langgraph.json ADDED
```json
{
  "dependencies": ["."],
  "graphs": {
    "query_graph": "./src/graphs/queryGraph.py:graph",
    "execution_agent_graph": "./src/graphs/actionGraph.py:graph",
    "master_graph": "./src/graphs/masterGraph.py:master_graph",
    "finalSparrowAgent": "./src/graphs/finalAgentGraph.py:sparrowAgent"
  },
  "env": "./env"
}
```
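Each graph entry in `langgraph.json` uses a `path/to/module.py:attribute` spec. The LangGraph CLI resolves these itself, but the mechanics can be illustrated with a small stdlib helper (`load_graph` is a hypothetical name for this sketch, not part of langgraph):

```python
import importlib.util


def load_graph(spec_str):
    """Resolve a 'path/to/file.py:attribute' spec to the named object in that module."""
    path, attr = spec_str.rsplit(":", 1)
    spec = importlib.util.spec_from_file_location("graph_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, attr)
```

For example, `load_graph("./src/graphs/masterGraph.py:master_graph")` would import the file and return its `master_graph` attribute.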
main.py ADDED
```python
def main():
    print("Hello from sparrow-agent!")


if __name__ == "__main__":
    main()
```
pyproject.toml ADDED
```toml
[project]
name = "sparrow-agent"
version = "0.1.0"
description = "Agentic AI chatbot for parcel consolidation and tracking, built with LangGraph"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "fastapi>=0.116.1",
    "flask>=3.1.2",
    "gradio>=5.49.1",
    "langchain>=0.3.27",
    "langchain-community>=0.3.27",
    "langchain-core>=0.3.74",
    "langchain-groq>=0.3.7",
    "langgraph>=0.6.6",
    "langgraph-cli[inmem]>=0.3.8",
    "streamlit>=1.48.1",
    "tavily-python>=0.7.11",
    # note: the PyPI "uuid" package was removed; uuid is part of the standard library
    "uvicorn>=0.35.0",
    "watchdog>=6.0.0",
]
```
requirements.txt ADDED
```
langchain
langgraph
langchain_community
langchain_core
langchain_groq
fastapi
uvicorn
watchdog
langgraph-cli[inmem]
streamlit
tavily-python
flask
python-dotenv
requests
pymongo
gradio
# note: uuid comes with the Python standard library; the PyPI "uuid" package is not needed
```
src/__init__.py ADDED
File without changes
src/graphs/__init__.py ADDED
File without changes
src/graphs/actionGraph.py ADDED
```python
from langgraph.graph import StateGraph, START, END
from src.states.masterState import ExecutorState, ExecutorOutputState

from src.nodes.actionNode import ExecutorNode
from src.llms.groqllm import GroqLLM
from src.utils.prompts import (
    execution_agent_prompt,
    compress_execution_human_message,
    compress_execution_system_prompt,
)

from src.utils.utils import think_tool, track_package, estimated_time_analysis

tools = [think_tool, track_package, estimated_time_analysis]


class ExecutorGraphBuilder:
    def __init__(self, llm):
        self.llm = llm
        self.graph = StateGraph(ExecutorState, output=ExecutorOutputState)
        self.tools = tools
        self.execution_agent_prompt = execution_agent_prompt
        self.compress_execution_system_prompt = compress_execution_system_prompt
        self.compress_execution_human_message = compress_execution_human_message

    def build_executor_graph(self):
        """Build the executor graph."""
        self.executor_node_obj = ExecutorNode(self.llm)

        self.graph.add_node("llm_call", self.executor_node_obj.llm_call)
        self.graph.add_node("tool_node", self.executor_node_obj.tool_node)
        self.graph.add_node("compress_execution", self.executor_node_obj.compress_execution)

        # Flow
        self.graph.add_edge(START, "llm_call")
        self.graph.add_conditional_edges(
            "llm_call",
            self.executor_node_obj.guard_llm,
            {
                "tool_node": "tool_node",
                "compress_execution": "compress_execution",
            },
        )
        self.graph.add_edge("tool_node", "llm_call")
        self.graph.add_edge("compress_execution", END)

        return self.graph

    def setup_graph(self):
        return self.graph.compile()


llm = GroqLLM().get_llm()

# Creating the graph
graph_builder = ExecutorGraphBuilder(llm)
graph = graph_builder.build_executor_graph().compile()
```
src/graphs/finalAgentGraph.py ADDED
```python
# Updated Sparrow Agent with proper routing
import asyncio
import logging
from src.graphs.masterGraph import master_graph
from src.llms.groqllm import GroqLLM
from src.states.queryState import SparrowAgentState, SparrowInputState
from langgraph.graph import StateGraph, START, END
from src.states.masterState import MasterState
from langgraph.checkpoint.memory import MemorySaver
from src.nodes.queryNode import QueryNode
from langchain_core.messages import HumanMessage, AIMessage

logger = logging.getLogger(__name__)

llm = GroqLLM().get_llm()
queryNode = QueryNode(llm)


def convert_sparrow_to_master(state: SparrowAgentState) -> dict:
    """Convert SparrowAgentState to the master graph's input format."""
    return {
        "query_brief": state.get("query_brief", ""),
        "execution_jobs": [],
        "completed_jobs": [],
        "worker_outputs": [],
        "final_output": ''
    }


def update_sparrow_from_master(sparrow_state: SparrowAgentState, master_state: dict) -> SparrowAgentState:
    """Update sparrow state with master results."""
    # Add the final result as a message and update notes
    final_output = master_state.get("final_output", "")
    if final_output:
        sparrow_state["messages"] = sparrow_state.get("messages", []) + [AIMessage(content=final_output)]
        sparrow_state["final_message"] = final_output

    # Add execution details to notes
    execution_jobs = master_state.get("execution_jobs", [])
    completed_jobs = master_state.get("completed_jobs", [])

    if execution_jobs:
        sparrow_state["notes"] = sparrow_state.get("notes", []) + [f"Execution jobs: {', '.join(execution_jobs)}"]

    if completed_jobs:
        sparrow_state["notes"] = sparrow_state.get("notes", []) + [f"Completed: {', '.join(completed_jobs)}"]

    return sparrow_state


def route_after_clarification(state: SparrowAgentState) -> str:
    """Route based on clarification status from the queryNode response."""
    # Check if clarification_complete flag is set (most reliable)
    clarification_complete = state.get("clarification_complete", False)
    needs_clarification = state.get("needs_clarification", True)

    if clarification_complete or not needs_clarification:
        print("Clarification complete, proceeding to query brief")
        return "write_query_brief"

    # Check messages for clarification status as fallback
    messages = state.get("messages", [])
    if not messages:
        return "need_clarification"

    # Prevent infinite clarification loops
    if len(messages) > 10:
        print("Too many clarification rounds, proceeding to query brief")
        return "write_query_brief"

    # Default: needs more clarification
    print("More clarification needed")
    return "need_clarification"


def route_after_query_brief(state: SparrowAgentState) -> str:
    """Route after query brief creation."""
    # Check if the query brief exists and is adequate
    query_brief = state.get("query_brief", "")

    if query_brief and len(query_brief.strip()) > 20:  # Reasonable length check
        print(f"Query brief created: {query_brief[:100]}...")
        return "master_subgraph"
    else:
        # Check how many times we've tried
        messages = state.get("messages", [])
        if len(messages) > 15:
            print("Too many attempts, ending conversation")
            return "__end__"

        print("Query brief insufficient or missing, going back to clarification")
        state["notes"] = state.get("notes", []) + ["Query brief creation failed, requesting more clarification"]
        return "clarify_with_user"


def need_clarification(state: SparrowAgentState) -> SparrowAgentState:
    """Handle the case where clarification is needed."""
    print("Additional clarification needed.")

    # Add a message indicating we need more information
    clarification_msg = AIMessage(
        content="I need a bit more information to help you effectively. Could you provide more details about your request?"
    )

    state["messages"] = state.get("messages", []) + [clarification_msg]
    state["notes"] = state.get("notes", []) + ["Requested additional clarification from user"]

    return state


def run_master_subgraph(state: SparrowAgentState) -> SparrowAgentState:
    """Run the master subgraph (sync version, to avoid async issues with Send)."""
    try:
        print("Running master subgraph...")
        master_input = convert_sparrow_to_master(state)

        # Use invoke instead of ainvoke to avoid issues with Send
        master_result = master_graph.invoke(master_input)

        return update_sparrow_from_master(state, master_result)

    except Exception as e:
        logger.error(f"Master subgraph failed: {e}")
        return {**state, "error": str(e)}


def route_after_need_clarification(state: SparrowAgentState) -> str:
    """Route after the need_clarification node: always end and wait for user input."""
    return "__end__"


# Build the graph
```
+ sparrowAgentBuilder = StateGraph(SparrowAgentState, input_schema=SparrowInputState)
132
+
133
+ sparrowAgentBuilder.add_node("clarify_with_user", queryNode.clarify_with_user)
134
+ sparrowAgentBuilder.add_node("need_clarification", need_clarification)
135
+ sparrowAgentBuilder.add_node("write_query_brief", queryNode.write_query_brief)
136
+ sparrowAgentBuilder.add_node("master_subgraph", run_master_subgraph)
137
+
138
+ # Edges
139
+ sparrowAgentBuilder.add_edge(START, "clarify_with_user")
140
+
141
+ sparrowAgentBuilder.add_conditional_edges(
142
+ "clarify_with_user",
143
+ route_after_clarification,
144
+ {
145
+ "need_clarification": "need_clarification",
146
+ "write_query_brief": "write_query_brief",
147
+ "__end__": END
148
+ }
149
+ )
150
+
151
+ # Improved clarification flow
152
+ sparrowAgentBuilder.add_conditional_edges(
153
+ "need_clarification",
154
+ route_after_need_clarification,
155
+ {
156
+ "clarify_with_user": "clarify_with_user",
157
+ "__end__": END
158
+ }
159
+ )
160
+
161
+ sparrowAgentBuilder.add_conditional_edges(
162
+ "write_query_brief",
163
+ route_after_query_brief,
164
+ {
165
+ "clarify_with_user": "clarify_with_user",
166
+ "master_subgraph": "master_subgraph",
167
+ "__end__": END
168
+ }
169
+ )
170
+
171
+ sparrowAgentBuilder.add_edge("master_subgraph", END)
172
+
173
+ sparrowAgent = sparrowAgentBuilder.compile()
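The clarification routing above reduces to a small pure function over the state dict. A minimal sketch (plain dicts, no LangGraph, illustrative only) that can be unit-tested in isolation:

```python
def route_after_clarification(state: dict) -> str:
    """Mirror of the graph router: explicit flags win, then message history."""
    # Explicit completion flags are the most reliable signal
    if state.get("clarification_complete", False) or not state.get("needs_clarification", True):
        return "write_query_brief"

    messages = state.get("messages", [])
    if not messages:
        return "need_clarification"

    # Loop guard: cap the number of clarification rounds
    if len(messages) > 10:
        return "write_query_brief"

    return "need_clarification"


print(route_after_clarification({"clarification_complete": True}))   # write_query_brief
print(route_after_clarification({"needs_clarification": True,
                                 "messages": ["hi"]}))               # need_clarification
```

Keeping the router free of side effects like this makes the loop guard trivially testable without spinning up the graph.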
src/graphs/masterGraph.py ADDED
@@ -0,0 +1,34 @@
+ from langgraph.graph import StateGraph, START, END
+ from src.nodes.masterNode import MasterOrchestrator
+ from src.states.masterState import MasterState
+ from src.llms.groqllm import GroqLLM
+
+
+ class MasterBuilder:
+     def __init__(self, llm):
+         self.llm = llm
+
+     def build_master_graph(self):
+         master_obj = MasterOrchestrator(self.llm)
+         master_graph = StateGraph(MasterState)
+
+         # Add nodes
+         master_graph.add_node("orchestrator", master_obj.orchestrator)
+         master_graph.add_node("worker_executor", master_obj.worker_executor)
+         master_graph.add_node("synthesizer", master_obj.synthesizer)
+
+         # Add edges
+         master_graph.add_edge(START, "orchestrator")
+         master_graph.add_conditional_edges("orchestrator", master_obj.assign_workers, ["worker_executor"])
+         master_graph.add_edge("worker_executor", "synthesizer")
+         master_graph.add_edge("synthesizer", END)
+
+         return master_graph.compile()
+
+
+ # Building the graph
+ llm = GroqLLM().get_llm()
+ graph_builder = MasterBuilder(llm)
+ master_graph = graph_builder.build_master_graph()
+ print("Graph created successfully")
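The orchestrator → worker → synthesizer wiring above follows the classic fan-out/fan-in pattern. A framework-free sketch of the same control flow (plain functions standing in for the LLM planner, worker graph, and synthesizer — all names here are hypothetical):

```python
def plan(query_brief: str) -> list[str]:
    # Hypothetical planner: one job per " and "-separated clause
    return [part.strip() for part in query_brief.split(" and ") if part.strip()]

def worker(job: str) -> dict:
    # Stand-in worker: returns a record shaped like worker_executor's output
    return {"completed_jobs": [f"Job: {job} - Status: Completed"],
            "worker_outputs": [{"output": job.upper()}]}

def run(query_brief: str) -> dict:
    state = {"query_brief": query_brief, "completed_jobs": [], "worker_outputs": []}
    for job in plan(query_brief):                 # assign_workers fan-out
        result = worker(job)
        # list concatenation plays the role of the operator.add reducers
        state["completed_jobs"] += result["completed_jobs"]
        state["worker_outputs"] += result["worker_outputs"]
    # fan-in: the synthesizer combines every worker output
    state["final_output"] = "; ".join(o["output"] for o in state["worker_outputs"])
    return state

print(run("track parcel and estimate delivery")["final_output"])
# TRACK PARCEL; ESTIMATE DELIVERY
```

In the real graph, `Send` runs the workers in parallel and LangGraph's reducers perform the merging; the sequential loop here only illustrates the data flow.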
src/graphs/queryGraph.py ADDED
@@ -0,0 +1,35 @@
+ from langgraph.graph import StateGraph, START, END
+ from src.states.queryState import SparrowAgentState, SparrowInputState
+
+ from src.nodes.queryNode import QueryNode
+ from src.llms.groqllm import GroqLLM
+
+
+ class QueryGraphBuilder:
+     def __init__(self, llm):
+         self.llm = llm
+         self.graph = StateGraph(SparrowAgentState, input_schema=SparrowInputState)
+
+     def build_query_graph(self):
+         """Build a graph for customer query inquiry."""
+         self.query_node_obj = QueryNode(self.llm)
+         print(self.llm)
+
+         self.graph.add_node("clarify_with_user", self.query_node_obj.clarify_with_user)
+         self.graph.add_node("write_query_brief", self.query_node_obj.write_query_brief)
+
+         self.graph.add_edge(START, "clarify_with_user")
+         self.graph.add_edge("clarify_with_user", "write_query_brief")
+         self.graph.add_edge("write_query_brief", END)
+
+         return self.graph
+
+
+ llm = GroqLLM().get_llm()
+
+ graph_builder = QueryGraphBuilder(llm)
+
+ graph = graph_builder.build_query_graph().compile()
src/llms/__init__.py ADDED
File without changes
src/llms/groqllm.py ADDED
@@ -0,0 +1,37 @@
+ from langchain_groq import ChatGroq
+ import os
+ from dotenv import load_dotenv
+
+ class GroqLLM:
+     def __init__(self):
+         load_dotenv()
+
+     def get_llm(self):
+         try:
+             # Never print the API key - security risk
+             self.groq_api_key = os.getenv("GROQ_API_KEY")
+             if not self.groq_api_key:
+                 raise ValueError("GROQ_API_KEY is not set")
+             os.environ["GROQ_API_KEY"] = self.groq_api_key
+             # Using a Llama model, which has better JSON support
+             llm = ChatGroq(
+                 api_key=self.groq_api_key,
+                 model="llama-3.3-70b-versatile",  # Better JSON support
+                 streaming=False,
+                 temperature=0.1  # Lower temperature for more consistent structured output
+             )
+             return llm
+         except Exception as e:
+             raise ValueError(f"Error occurred with exception: {e}")
+
+     def get_moon(self):
+         try:
+             # Never print the API key - security risk
+             self.groq_api_key = os.getenv("GROQ_API_KEY")
+             if not self.groq_api_key:
+                 raise ValueError("GROQ_API_KEY is not set")
+             os.environ["GROQ_API_KEY"] = self.groq_api_key
+             llm = ChatGroq(
+                 api_key=self.groq_api_key,
+                 model="moonshotai/kimi-k2-instruct",
+                 streaming=False
+             )
+             return llm
+         except Exception as e:
+             raise ValueError(f"Error occurred with exception: {e}")
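The original chained assignment `os.environ["GROQ_API_KEY"] = self.groq_api_key = os.getenv(...)` raises a confusing `TypeError` when the variable is unset, because `os.environ` values must be strings. A small fail-fast helper (illustrative sketch, stdlib only; the variable names are demo values) makes the failure mode explicit:

```python
import os

def require_env(name: str) -> str:
    # os.getenv returns None when unset; assigning None into os.environ raises
    # TypeError, so fail fast with a readable error instead.
    value = os.getenv(name)
    if not value:
        raise ValueError(f"{name} is not set")
    return value


os.environ["DEMO_API_KEY"] = "sk-demo"   # hypothetical key for demonstration
print(require_env("DEMO_API_KEY"))       # sk-demo
```

Calling `require_env("GROQ_API_KEY")` at startup would surface a missing `.env` entry immediately rather than inside the `ChatGroq` constructor.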
src/nodes/__init__.py ADDED
File without changes
src/nodes/actionNode.py ADDED
@@ -0,0 +1,214 @@
+ from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage
+ from src.utils.prompts import execution_agent_prompt, compress_execution_system_prompt, compress_execution_human_message
+ from src.utils.utils import think_tool, track_package, estimated_time_analysis, get_today_str
+
+ tools = [think_tool, track_package, estimated_time_analysis]
+ tools_by_name = {tool.name: tool for tool in tools}
+
+ class ExecutorNode:
+     """
+     Executor node for handling tasks:
+     1. LLM reasoning
+     2. Tool invocation
+     3. Final compression
+     """
+
+     def __init__(self, llm):
+         self.llm = llm
+         self.tools = tools
+         self.tools_by_name = {tool.name: tool for tool in tools}
+         self.model_with_tools = llm.bind_tools(tools)
+         self.MAX_ITERATIONS = 6  # Increased to allow more tool calls (including think_tool)
+         self.execution_agent_prompt_template = execution_agent_prompt
+         self.compress_execution_system_prompt_template = compress_execution_system_prompt
+         self.compress_execution_human_message = compress_execution_human_message
+
+         # Debug tool binding
+         print(f"Available tools: {list(self.tools_by_name.keys())}")
+
+     def llm_call(self, state: dict) -> dict:
+         """Call the LLM with the executor message history and return the updated state."""
+         try:
+             # Ensure we have the execution job in the messages
+             execution_job = state.get("execution_job", "")
+             existing_messages = state.get("executor_messages", [])
+             print("EXECUTOR MESSAGES", existing_messages)
+             print("EXECUTION JOB", execution_job)
+
+             # If there are no existing messages, add the execution job as the initial human message
+             if not existing_messages and execution_job:
+                 existing_messages = [HumanMessage(content=execution_job)]
+
+             # Format the prompt with the current date
+             formatted_prompt = self.execution_agent_prompt_template.format(date=get_today_str())
+             messages = [SystemMessage(content=formatted_prompt)] + existing_messages
+
+             print(f"Calling LLM with {len(messages)} messages")
+             print(f"Last message: {messages[-1] if messages else 'No messages'}")
+
+             response = self.model_with_tools.invoke(messages)
+
+             print(f"LLM Response type: {type(response)}")
+             print(f"LLM Response content: {response.content[:100] if response.content else 'No content'}...")
+             print(f"Tool calls in response: {getattr(response, 'tool_calls', 'No tool_calls attribute')}")
+
+             return {
+                 **state,
+                 "executor_messages": existing_messages + [response]
+             }
+
+         except Exception as e:
+             return {
+                 **state,
+                 "error": str(e),
+                 "executor_messages": state.get("executor_messages", [])
+             }
+
+     def tool_node(self, state: dict) -> dict:
+         """Execute any tools requested by the LLM and append ToolMessages."""
+         try:
+             executor_messages = state.get("executor_messages", [])
+             if not executor_messages:
+                 print("No executor messages found")
+                 return state
+
+             last_message = executor_messages[-1]
+             print(f"Last message type: {type(last_message)}")
+
+             # Get tool calls
+             tool_calls = getattr(last_message, "tool_calls", [])
+             print(f"Found {len(tool_calls)} tool calls: {tool_calls}")
+
+             if not tool_calls:
+                 print("No tool calls found in last message")
+                 return state
+
+             tool_outputs, new_data = [], []
+
+             for call in tool_calls:
+                 print(f"Processing tool call: {call}")
+
+                 tool_name = call.get("name")
+                 args = call.get("args", {})
+                 tool_id = call.get("id")
+
+                 print(f"Tool: {tool_name}, Args: {args}, ID: {tool_id}")
+
+                 if tool_name in self.tools_by_name:
+                     try:
+                         print(f"Invoking tool {tool_name} with args {args}")
+                         result = self.tools_by_name[tool_name].invoke(args)
+                         print(f"Tool {tool_name} result: {result}")
+
+                         tool_message = ToolMessage(
+                             content=str(result),
+                             name=tool_name,
+                             tool_call_id=tool_id
+                         )
+                         tool_outputs.append(tool_message)
+                         new_data.append(str(result))
+
+                     except Exception as e:
+                         error_msg = f"Tool {tool_name} failed: {e}"
+                         print(f"Tool error: {error_msg}")
+                         tool_outputs.append(
+                             ToolMessage(
+                                 content=error_msg,
+                                 name=tool_name,
+                                 tool_call_id=tool_id
+                             )
+                         )
+                         new_data.append(error_msg)
+                 else:
+                     error_msg = f"Tool {tool_name} not found. Available: {list(self.tools_by_name.keys())}"
+                     print(error_msg)
+                     tool_outputs.append(
+                         ToolMessage(
+                             content=error_msg,
+                             name=tool_name,
+                             tool_call_id=tool_id
+                         )
+                     )
+
+             print(f"Returning {len(tool_outputs)} tool outputs")
+
+             return {
+                 **state,
+                 "executor_messages": executor_messages + tool_outputs,
+                 "executor_data": state.get("executor_data", []) + new_data
+             }
+
+         except Exception as e:
+             return {
+                 **state,
+                 "error": f"Tool execution failed: {str(e)}"
+             }
+
+     def compress_execution(self, state: dict) -> dict:
+         """Summarize the execution and return the final structured output."""
+         try:
+             execution_job = state.get("execution_job", "Complete the assigned task")
+             executor_messages = state.get("executor_messages", [])
+
+             # Format the system prompt with the current date
+             formatted_system_prompt = self.compress_execution_system_prompt_template.format(date=get_today_str())
+
+             messages = [
+                 SystemMessage(content=formatted_system_prompt),
+                 *executor_messages,
+                 HumanMessage(content=self.compress_execution_human_message.format(
+                     shipment_request=execution_job
+                 ))
+             ]
+
+             response = self.llm.invoke(messages)
+
+             executor_data = [
+                 str(m.content) for m in executor_messages
+                 if hasattr(m, 'content') and m.content
+             ]
+
+             return {
+                 "output": str(response.content),
+                 "executor_data": executor_data,
+                 "executor_messages": executor_messages
+             }
+
+         except Exception as e:
+             return {
+                 "output": f"Execution completed with errors: {str(e)}",
+                 "executor_data": state.get("executor_data", []),
+                 "executor_messages": state.get("executor_messages", [])
+             }
+
+     def route_after_llm(self, state: dict) -> str:
+         """Route: decide whether to call a tool or finalize."""
+         try:
+             executor_messages = state.get("executor_messages", [])
+             if not executor_messages:
+                 return "compress_execution"
+
+             last_msg = executor_messages[-1]
+             has_tool_calls = bool(getattr(last_msg, "tool_calls", None))
+
+             print(f"Routing decision - Has tool calls: {has_tool_calls}")
+
+             return "tool_node" if has_tool_calls else "compress_execution"
+         except Exception:
+             return "compress_execution"
+
+     def guard_llm(self, state: dict) -> str:
+         """Prevent infinite loops by limiting iterations."""
+         iteration_count = state.get("iteration_count", 0) + 1
+         state["iteration_count"] = iteration_count
+
+         print(f"Iteration count: {iteration_count}/{self.MAX_ITERATIONS}")
+
+         if iteration_count > self.MAX_ITERATIONS:
+             print("Max iterations reached, finalizing...")
+             return "compress_execution"
+
+         return self.route_after_llm(state)
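The `tool_node` loop above is a name-based dispatch: look the tool up in `tools_by_name`, invoke it, and turn both results and failures into messages. A framework-free sketch of that dispatch (the two stand-in tools here are hypothetical, not the real `think_tool`/`track_package` implementations):

```python
def think_tool(note: str) -> str:
    # Hypothetical stand-in for the real reflection tool
    return f"noted: {note}"

def track_package(tracking_number: str) -> str:
    # Hypothetical stand-in for the real tracking tool
    return f"status of {tracking_number}: in transit"

tools_by_name = {"think_tool": think_tool, "track_package": track_package}

def dispatch(tool_calls: list[dict]) -> list[str]:
    """Run each requested tool; errors become output strings, never crashes."""
    outputs = []
    for call in tool_calls:
        name, args = call.get("name"), call.get("args", {})
        if name not in tools_by_name:
            outputs.append(f"Tool {name} not found. Available: {list(tools_by_name)}")
            continue
        try:
            outputs.append(str(tools_by_name[name](**args)))
        except Exception as e:  # a failing tool still yields a message for the LLM
            outputs.append(f"Tool {name} failed: {e}")
    return outputs


print(dispatch([{"name": "track_package", "args": {"tracking_number": "PKG1"}},
                {"name": "missing", "args": {}}]))
```

Converting every outcome, including unknown tools, into a message keeps the LLM loop alive so the model can recover instead of the graph raising.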
src/nodes/masterNode.py ADDED
@@ -0,0 +1,104 @@
+ from langchain_core.messages import SystemMessage, HumanMessage
+ from src.states.masterState import MasterState, PlannerOutput
+ from src.utils.prompts import master_agent_prompt
+ from langgraph.constants import Send
+ from src.graphs.actionGraph import graph
+
+ class MasterOrchestrator:
+     def __init__(self, llm):
+         self.llm = llm
+         self.master_planner = llm.with_structured_output(PlannerOutput)
+         self.compiled_worker_graph = graph
+         self.master_agent_prompt_template = master_agent_prompt
+
+     def orchestrator(self, state: MasterState):
+         """Generate a plan by breaking the query down into execution jobs."""
+
+         system_prompt = """You are a master task planner. Given a query, break it down into specific, actionable execution jobs.
+
+ Each job should be:
+ 1. Clear and specific
+ 2. Actionable by a specialized worker
+ 3. Independent or clearly sequenced
+ 4. Focused on a single objective
+
+ Return a list of execution jobs as strings."""
+
+         planner_result = self.master_planner.invoke([
+             SystemMessage(content=system_prompt),
+             HumanMessage(content=f"Here is the query brief: {state['query_brief']}")
+         ])
+
+         print("Execution Jobs Generated:", planner_result.executor_jobs)
+         return {"execution_jobs": planner_result.executor_jobs}
+
+     def worker_executor(self, worker_input: dict):
+         """Execute a single job using the worker graph."""
+
+         job_description = worker_input["execution_job"]
+
+         # Prepare the initial state for the worker.
+         # Pass the full job description as execution_job - the worker will use the available tools.
+         worker_state = {
+             "executor_messages": [HumanMessage(content=job_description)],
+             "execution_job": job_description,
+             "executor_data": []
+         }
+
+         print(f"Executing job: {job_description}")
+
+         # Execute the worker graph
+         try:
+             result = self.compiled_worker_graph.invoke(worker_state)
+
+             # Return the completed job info
+             return {
+                 "completed_jobs": [f"Job: {job_description} - Status: Completed"],
+                 "worker_outputs": [result]
+             }
+         except Exception as e:
+             error_result = {
+                 "output": f"Error executing job: {str(e)}",
+                 "executor_data": [f"Error: {str(e)}"],
+                 "executor_messages": []
+             }
+             return {
+                 "completed_jobs": [f"Job: {job_description} - Status: Failed - {str(e)}"],
+                 "worker_outputs": [error_result]
+             }
+
+     def assign_workers(self, state: MasterState):
+         """Assign a worker to each execution job using Send."""
+         return [
+             Send("worker_executor", {"execution_job": job})
+             for job in state["execution_jobs"]
+         ]
+
+     def synthesizer(self, state: MasterState):
+         """Combine all completed jobs into a final output."""
+
+         # Create a synthesis prompt (chr(10) is a newline; a backslash cannot
+         # appear inside an f-string expression before Python 3.12)
+         synthesis_prompt = f"""
+ Original Query: {state['query_brief']}
+
+ Completed Jobs Summary:
+ {chr(10).join([f"- {job}" for job in state['completed_jobs']])}
+
+ Detailed Worker Outputs:
+ {chr(10).join([f"Output {i+1}: {output.get('output', 'No output')}" for i, output in enumerate(state['worker_outputs'])])}
+
+ Please synthesize all the work into a comprehensive final response that addresses the original query.
+ """
+
+         synthesis_result = self.llm.invoke([
+             SystemMessage(content="You are a synthesis expert. Combine the worker outputs into a coherent final response."),
+             HumanMessage(content=synthesis_prompt)
+         ])
+
+         return {"final_output": synthesis_result.content}
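The `chr(10).join(...)` idiom in the synthesizer exists because `chr(10)` is the newline character, and older Python versions (before 3.12) reject a literal `\n` inside an f-string expression. A minimal demonstration:

```python
completed_jobs = ["Job: track PKG1 - Status: Completed",
                  "Job: ETA check - Status: Completed"]

# chr(10) == "\n"; joining with it builds the bulleted block embedded
# in the synthesis f-string above.
summary = chr(10).join(f"- {job}" for job in completed_jobs)
print(summary)
# - Job: track PKG1 - Status: Completed
# - Job: ETA check - Status: Completed
```

On Python 3.12+ the same line could simply use `"\n".join(...)` inside the f-string.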
src/nodes/queryNode.py ADDED
@@ -0,0 +1,151 @@
+ from langchain_core.messages import HumanMessage, SystemMessage, AIMessage, get_buffer_string
+ from src.utils.prompts import clarification_with_user_instructions, transform_messages_into_customer_query_brief_prompt
+ from src.states.queryState import SparrowAgentState, ClarifyWithUser, CustomerQuestion
+ from src.utils.utils import get_today_str
+
+ class QueryNode:
+     def __init__(self, llm):
+         self.llm = llm
+
+     def clarify_with_user(self, state: SparrowAgentState) -> SparrowAgentState:
+         """
+         Determine if the user's request contains sufficient information to proceed.
+         Returns the updated state with the clarification status.
+         """
+         try:
+             # Use structured output with method="json_mode" for better compatibility
+             structured_output_model = self.llm.with_structured_output(
+                 ClarifyWithUser,
+                 method="json_mode",
+                 include_raw=False
+             )
+
+             response = structured_output_model.invoke([
+                 SystemMessage(
+                     content="You are a helpful assistant that responds in JSON format. Decide yes or no based on whether the query needs clarification."
+                 ),
+                 HumanMessage(
+                     content=clarification_with_user_instructions.format(
+                         messages=get_buffer_string(messages=state.get("messages", [])),
+                         date=get_today_str()
+                     )
+                 )
+             ])
+
+             print("CLARIFICATION RESPONSE:", response)
+
+             # Update state based on the response
+             updated_state = {**state}
+
+             if response.need_clarification == 'yes':
+                 updated_state.update({
+                     "messages": state.get("messages", []) + [AIMessage(content=response.question)],
+                     "clarification_complete": False,
+                     "needs_clarification": True
+                 })
+             else:
+                 updated_state.update({
+                     "messages": state.get("messages", []) + [AIMessage(content=response.verification)],
+                     "clarification_complete": True,
+                     "needs_clarification": False
+                 })
+
+             return updated_state
+
+         except Exception as e:
+             print(f"Error in clarify_with_user: {e}")
+             print(f"Error type: {type(e).__name__}")
+             import traceback
+             traceback.print_exc()
+
+             # Fallback: ask for clarification if there's an error
+             return {
+                 **state,
+                 "messages": state.get("messages", []) + [
+                     AIMessage(content="I'd be happy to help! Could you please provide more details about what you need? For example, if you want to track a package, please share the tracking number.")
+                 ],
+                 "clarification_complete": False,
+                 "needs_clarification": True,
+                 "error": str(e)
+             }
+
+     def write_query_brief(self, state: SparrowAgentState) -> SparrowAgentState:
+         """
+         Transform the conversation history into a comprehensive customer query brief.
+         """
+         try:
+             # Use structured output with json_mode for better compatibility
+             structured_output_model = self.llm.with_structured_output(
+                 CustomerQuestion,
+                 method="json_mode",
+                 include_raw=False
+             )
+
+             messages = state.get("messages", [])
+             print("STATE MESSAGES:", messages)
+
+             if not messages:
+                 print("ERROR: No messages in state")
+                 return {
+                     **state,
+                     "query_brief": "",
+                     "error": "No messages available for query brief creation"
+                 }
+
+             prompt = transform_messages_into_customer_query_brief_prompt.format(
+                 messages=get_buffer_string(messages),
+                 date=get_today_str()
+             )
+             print("PROMPT:", prompt[:200], "...")  # Print the first 200 chars only
+
+             # Get the structured response
+             response = structured_output_model.invoke([
+                 SystemMessage(content="You are a helpful assistant that responds in JSON format."),
+                 HumanMessage(content=prompt)
+             ])
+             print("STRUCTURED RESPONSE:", response)
+
+             if response is None:
+                 print("ERROR: Structured response is None")
+                 return {
+                     **state,
+                     "query_brief": "",
+                     "error": "Failed to generate structured response"
+                 }
+
+             return {
+                 **state,
+                 "query_brief": response.query_brief,
+                 "master_messages": [HumanMessage(content=response.query_brief)],
+                 "query_brief_complete": True
+             }
+
+         except Exception as e:
+             print(f"Error in write_query_brief: {e}")
+             print(f"Error type: {type(e).__name__}")
+             import traceback
+             traceback.print_exc()
+
+             # Fallback: create a simple query brief from the messages
+             messages = state.get("messages", [])
+             if messages:
+                 # Use the last user message as the query brief
+                 user_messages = [msg.content for msg in messages if hasattr(msg, 'type') and msg.type == 'human']
+                 fallback_brief = user_messages[-1] if user_messages else "Help with parcel query"
+             else:
+                 fallback_brief = "Help with parcel query"
+
+             return {
+                 **state,
+                 "query_brief": fallback_brief,
+                 "master_messages": [HumanMessage(content=fallback_brief)],
+                 "query_brief_complete": True,
+                 "error": str(e)
+             }
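The fallback path in `write_query_brief` extracts the most recent human message as a last-resort brief. That extraction is a small pure function; a sketch using a minimal stand-in for LangChain messages (the `Msg` dataclass here is a hypothetical substitute):

```python
from dataclasses import dataclass

@dataclass
class Msg:
    # Minimal stand-in for a LangChain message: only the fields the
    # fallback inspects (a `type` attribute and `content`)
    type: str
    content: str

def fallback_brief(messages: list) -> str:
    """Return the last human message, or a generic brief if none exists."""
    user_messages = [m.content for m in messages
                     if getattr(m, "type", "") == "human"]
    return user_messages[-1] if user_messages else "Help with parcel query"


history = [Msg("human", "Where is my parcel?"),
           Msg("ai", "Could you share the tracking number?"),
           Msg("human", "It's PKG1")]
print(fallback_brief(history))  # It's PKG1
print(fallback_brief([]))       # Help with parcel query
```

Factoring the fallback out like this would also let it be unit-tested without touching the LLM path.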
src/states/__init__.py ADDED
File without changes
src/states/actionState.py ADDED
@@ -0,0 +1,39 @@
+ from typing_extensions import TypedDict, Annotated, List, Sequence
+ from pydantic import BaseModel, Field
+ from langchain_core.messages import BaseMessage
+ from langgraph.graph.message import add_messages
+
+
+ class ExecutorState(TypedDict):
+     """
+     State for the executor agent containing message history and execution metadata.
+
+     This state tracks the executor's conversation, the iteration count for
+     limiting tool calls, and the job being executed.
+     """
+     executor_messages: Annotated[Sequence[BaseMessage], add_messages]
+     execution_job: str
+     executor_data: List[str]
+
+ class ExecutorOutputState(TypedDict):
+     """
+     Output state for the executor agent containing the final executor results.
+
+     This represents the final output of the execution process, with executor_data,
+     executor_messages, and the output text.
+     """
+     output: str
+     executor_data: List[str]
+     executor_messages: Annotated[Sequence[BaseMessage], add_messages]
+
+
+ # Structured output schema
+ class CustomerQuestion(BaseModel):
+     """Schema for customer query brief generation"""
+     query_brief: str = Field(description="A customer question that will be used to guide the execution")
src/states/masterState.py ADDED
@@ -0,0 +1,37 @@
+ from typing_extensions import TypedDict, Annotated, Sequence, List
+ from langchain_core.messages import BaseMessage
+ from langgraph.graph.message import add_messages
+ from pydantic import BaseModel, Field
+ import operator
+
+ class ExecutorState(TypedDict):
+     """
+     State for the executor agent containing message history and execution metadata.
+     """
+     executor_messages: Annotated[Sequence[BaseMessage], add_messages]
+     execution_job: str
+     executor_data: List[str]
+
+ class ExecutorOutputState(TypedDict):
+     """
+     Output state for the executor agent containing the final executor results.
+     """
+     output: str
+     executor_data: List[str]
+     executor_messages: Annotated[Sequence[BaseMessage], add_messages]
+
+
+ class PlannerOutput(BaseModel):
+     """Simplified output for the planner that only returns execution jobs"""
+     executor_jobs: List[str] = Field(description="List of execution jobs to be completed")
+
+ class MasterState(TypedDict):
+     """Master orchestrator state"""
+     query_brief: str
+     execution_jobs: List[str]
+     completed_jobs: Annotated[List[str], operator.add]
+     worker_outputs: Annotated[List[ExecutorOutputState], operator.add]
+     final_output: str
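The `Annotated[List[str], operator.add]` fields in `MasterState` tell LangGraph how to merge partial updates from parallel workers: for lists, `operator.add` is concatenation. A stdlib-only illustration of what that reducer does (the sequential loop stands in for the framework applying each worker's update):

```python
import operator

# Two hypothetical workers each return a partial update for completed_jobs;
# the reducer folds them into the shared channel one update at a time.
completed_jobs: list[str] = []
for update in (["Job A - Status: Completed"], ["Job B - Status: Completed"]):
    completed_jobs = operator.add(completed_jobs, update)  # list concatenation

print(completed_jobs)
# ['Job A - Status: Completed', 'Job B - Status: Completed']
```

Without a reducer annotation, concurrent writes to the same key would conflict; with `operator.add`, each worker's list simply extends the accumulated value.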
src/states/queryState.py ADDED
@@ -0,0 +1,67 @@
+ import operator
+ from typing_extensions import Optional, Annotated, List, Sequence, Literal, Union
+
+ from langchain_core.messages import BaseMessage
+ from langgraph.graph import MessagesState
+ from langgraph.graph.message import add_messages
+ from pydantic import BaseModel, Field, field_validator
+
+
+ class SparrowInputState(MessagesState):
+     """Input state for the full agent - contains only the user input."""
+     pass
+
+ class SparrowAgentState(MessagesState):
+     """
+     Main state for the full multi-agent Sparrow customer service system.
+
+     Extends MessagesState with additional fields for Sparrow customer service coordination.
+     Note: some fields are duplicated across different state classes for proper
+     state management between subgraphs and the main workflow.
+     """
+     query_brief: Optional[str]
+     master_messages: Annotated[Sequence[BaseMessage], add_messages]
+     notes: Annotated[list[str], operator.add]
+     final_message: str
+     clarification_complete: Optional[bool]
+     needs_clarification: Optional[bool]
+     query_brief_complete: Optional[bool]
+     execution_jobs: Optional[List[str]]
+     error: Optional[str]
+
+ class ClarifyWithUser(BaseModel):
+     """Schema for the user clarification decision and questions"""
+
+     need_clarification: Union[Literal["yes", "no"], bool] = Field(
+         description="Whether the user needs to be asked a clarifying question. Can be 'yes'/'no' or true/false"
+     )
+     question: str = Field(
+         description="A question to ask the user to clarify the need",
+         default=""
+     )
+     verification: str = Field(
+         description="Verification message that we will start processing after the user has provided the necessary information",
+         default=""
+     )
+
+     @field_validator('need_clarification', mode='before')
+     @classmethod
+     def convert_bool_to_string(cls, v):
+         """Convert boolean values to yes/no strings"""
+         if isinstance(v, bool):
+             return "yes" if v else "no"
+         if isinstance(v, str):
+             v_lower = v.lower()
+             if v_lower in ['true', '1', 'yes']:
+                 return "yes"
+             elif v_lower in ['false', '0', 'no']:
+                 return "no"
+         return v
+
+ class CustomerQuestion(BaseModel):
+     """Schema for the structured customer query brief"""
+
+     query_brief: str = Field(
+         description="A customer question that will be used to guide the research."
+     )
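The `field_validator` on `need_clarification` normalizes whatever the model emits (booleans, "true"/"false", "yes"/"no") into a canonical "yes"/"no". The same logic as a plain function, testable without Pydantic:

```python
def normalize_need_clarification(v):
    """Mirror of convert_bool_to_string: collapse booleans and common
    string spellings to "yes"/"no"; anything else passes through."""
    if isinstance(v, bool):
        return "yes" if v else "no"
    if isinstance(v, str):
        low = v.lower()
        if low in ("true", "1", "yes"):
            return "yes"
        if low in ("false", "0", "no"):
            return "no"
    return v


print(normalize_need_clarification(True))     # yes
print(normalize_need_clarification("FALSE"))  # no
print(normalize_need_clarification("maybe"))  # maybe
```

This kind of pre-validation is what lets `clarify_with_user` compare against the single literal `'yes'` even though the LLM's JSON sometimes contains a boolean.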
src/utils/__init__.py ADDED
File without changes
src/utils/prompts.py ADDED
@@ -0,0 +1,343 @@
+ clarification_with_user_instructions = """
+ These are the messages that have been exchanged so far regarding the user's parcel request or tracking inquiry:
+ <Messages>
+ {messages}
+ </Messages>
+
+ Today's date is {date}.
+
+ You are Sparrow, a friendly and helpful parcel operations assistant. Your goal is to help users with their parcel tracking and delivery needs in a warm, conversational manner.
+
+ Assess whether you need to ask a clarifying question, or if the user has already provided enough information for you to proceed.
+ IMPORTANT: If you can see in the message history that you have already asked a clarifying question, you almost always do not need to ask another one. Only ask another question if ABSOLUTELY NECESSARY.
+
+ If you need to ask a question, follow these guidelines:
+ - Be friendly, warm, and conversational - imagine you're helping a friend
+ - Use a casual, approachable tone (e.g., "I'd be happy to help!", "Let me check that for you!")
+ - Show empathy and understanding (e.g., "I understand you're waiting for your package")
+ - Keep it brief and to the point
+ - Use emojis sparingly and appropriately to add warmth (📦, ✅, 🚚)
+ - Make the user feel comfortable and valued
+ - Use bullet points or numbered lists if appropriate for clarity
+ - Do not ask for unnecessary information or information the user has already provided
+
+ Respond in valid JSON format with these exact keys:
+ "need_clarification": "yes" or "no",
+ "question": "<question to ask the user to clarify their parcel request>",
+ "verification": "<verification message that we will start processing the parcel request>"
+
+ If you need to ask a clarifying question, return:
+ "need_clarification": "yes",
+ "question": "<your friendly clarifying question>",
+ "verification": ""
+
+ Example friendly questions:
+ - "I'd be happy to help you track your package! Could you share the tracking number with me?"
+ - "Great! Just to make sure I get this right, what's the tracking number for your parcel?"
+ - "No problem! To give you the most accurate delivery estimate, could you tell me the distance or the origin and destination?"
+
+ If you do not need to ask a clarifying question, return:
+ "need_clarification": "no",
+ "question": "",
+ "verification": "<friendly acknowledgement message>"
+
+ Example friendly verification messages:
+ - "Perfect! I've got all the details I need. Let me check that for you right away! 📦"
+ - "Great! I'll track down that package for you now. Just a moment!"
+ - "Awesome! Let me look into this and get you those delivery details. ✅"
+ - "Got it! I'll pull up the tracking information for you right now."
+
+ For the verification message when no clarification is needed:
+ - Start with a friendly acknowledgement ("Perfect!", "Great!", "Awesome!", "Got it!")
+ - Start with a friendly acknowledgement ("Perfect!", "Great!", "Awesome!", "Got it!")
52
+ - Briefly mention what you'll do ("Let me track that for you", "I'll look that up")
53
+ - Add a friendly closing ("Just a moment!", "One sec!", "Right away!")
54
+ - Keep it warm, concise, and reassuring
55
+ - Optional: Use a relevant emoji (📦, 🚚, ✅)
56
+ """
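The template above asks the model for a fixed three-key JSON object. A sketch of how a caller might branch on that response (the `raw` string is a hypothetical model output, not from the diff):

```python
import json

# Hypothetical raw model output following the JSON contract in the prompt above
raw = '{"need_clarification": "no", "question": "", "verification": "Got it! Let me track that for you."}'

reply = json.loads(raw)
if reply["need_clarification"] == "yes":
    # Ask the user the clarifying question
    print(reply["question"])
else:
    # Acknowledge and proceed with processing
    print(reply["verification"])  # Got it! Let me track that for you.
```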
+
+ transform_messages_into_customer_query_brief_prompt = """
+ You will be given a set of messages that have been exchanged so far between yourself and the user.
+ Your job is to translate these messages into a detailed and actionable parcel request brief that will be used to guide parcel processing, tracking, or consolidation.
+
+ The messages that have been exchanged so far between yourself and the user are:
+ <Messages>
+ {messages}
+ </Messages>
+
+ Today's date is {date}.
+
+ You will return a single, clear, and actionable brief that summarizes the user's parcel request.
+
+ Guidelines:
+ 1. Maximize Specificity and Detail
+ - Include all known shipment details provided by the user (e.g., parcel dimensions, weight, destination, delivery preferences, tracking numbers).
+ - Explicitly list any key attributes or instructions mentioned by the user for consolidation, delivery timing, or handling.
+
+ 2. Handle Unstated Dimensions Carefully
+ - If certain aspects are not specified but are critical for processing (e.g., preferred courier, insurance, or packaging requirements), note them as open considerations rather than making assumptions.
+ - Example: Instead of assuming “fast delivery,” say “consider delivery options unless the user specifies a preference.”
+
+ 3. Avoid Unwarranted Assumptions
+ - Never invent shipment details, preferences, or constraints that the user hasn’t stated.
+ - If certain details are missing (e.g., shipment value, fragile contents), explicitly note this lack of specification.
+
+ 4. Distinguish Between Required Actions and Optional Preferences
+ - Required actions: Steps or details necessary for successful processing or tracking of the shipment.
+ - User preferences: Specific instructions provided by the user (must only include what the user stated).
+ - Example: “Process and track parcels to the provided address, prioritizing delivery timing as specified by the user.”
+
+ 5. Use the First Person
+ - Phrase the brief from the perspective of the user.
+
+ 6. Sources / References
+ - If tracking numbers or courier platforms are mentioned, reference the official tracking pages or courier portals.
+ - If specific shipment services or rules apply, prioritize official service guidelines over general logistics advice.
+ """
+
+
+
+ ## Defining the prompts
+ compress_execution_system_prompt = """You are Sparrow, a friendly and helpful parcel operations assistant who has gathered logistics information. Your job is to present the findings in a clear, friendly, and conversational way while preserving all relevant details. For context, today's date is {date}.
+
+ <Tone Guidelines>
+ - Be warm, friendly, and conversational
+ - Use simple, clear language that anyone can understand
+ - Show empathy and understanding
+ - Be encouraging and positive
+ - Use appropriate emojis to enhance clarity and warmth (📦, 🚚, ✅, 📍, ⏰)
+ - Avoid technical jargon or overly formal language
+ - Make the information easy to scan and understand
+ </Tone Guidelines>
+
+ <Task>
+ You need to clean up information gathered from tool calls and web searches in the existing messages.
+ All relevant parcel, shipment, tracking, and user information (e.g., status, ETA, user history, reports) should be repeated and rewritten in a cleaner, structured format.
+ The purpose is to remove irrelevant or duplicate information (e.g., if multiple sources confirm the same ETA, state "Multiple sources confirm ETA is X").
+ Only these fully comprehensive cleaned findings will be returned to the user, so it is crucial that no relevant information is lost from the raw messages.
+ </Task>
+
+ <Tool Call Filtering>
+ **IMPORTANT**: When processing the research messages, focus only on substantive shipment, tracking, and delivery-related content:
+ - **Include**: Results from `track_package`, `estimated_time_analysis`, and findings from carrier websites, courier portals, customs/government sites, and Sparrow docs.
+ - **Exclude**: `think_tool` calls and responses — these are internal agent reflections for decision-making and should not be included in the final report.
+ - **Focus on**: Actual information gathered from tools (e.g., parcel status, ETA predictions, generated reports) and official sources (e.g., transit times, delivery restrictions, service updates), not the agent's internal reasoning.
+ </Tool Call Filtering>
+
+ <Guidelines>
+ 1. Your output findings must be fully comprehensive and include ALL shipment, tracking, and delivery details from tool calls and web searches. Repeat key details verbatim.
+ 2. The cleaned logistics report can be as long as necessary to include ALL information gathered.
+ 3. Include inline citations for each source (carrier site, government/customs portal, Sparrow docs, or tool outputs).
+ 4. Add a "Sources" section at the end listing all sources (including tool outputs) with corresponding citations.
+ 5. Ensure every source and tool result used in gathering parcel/tracking/delivery information is preserved.
+ 6. Critical: Do not lose any source or tool output, even if it appears repetitive — future steps will handle merging/aggregation.
+ 7. For tool outputs, treat results from `track_package` and `estimated_time_analysis` as authoritative sources and cite them as "Sparrow Tool: [Tool Name]".
+ </Guidelines>
+
+ <Output Format>
+ Present the information in a friendly, easy-to-read format:
+
+ **Opening**: Start with a friendly acknowledgment
+ - Example: "Here's what I found about your package! 📦"
+ - Example: "Good news! I've got the details you need. ✅"
+
+ **Main Information**: Present key details clearly
+ - Use friendly headers and bullet points
+ - Highlight important information (status, ETA, location)
+ - Present details from `track_package` and `estimated_time_analysis` in a user-friendly way
+ - Use emojis to make information easier to scan
+
+ **Closing**: End with a helpful offer
+ - Example: "Is there anything else you'd like to know about your delivery?"
+ - Example: "Let me know if you need any other help! 😊"
+
+ **Citations**: Include sources naturally in the text or at the end
+ - Example: "According to our tracking system..."
+ - Keep source citations brief and unobtrusive
+
+ **CRITICAL - System Error Handling**:
+ If tool results contain error messages about MongoDB, database connections, timeouts, or system failures:
+ - DO NOT include technical error details in your response
+ - Replace them with a friendly "system temporarily unavailable" message
+ - Example: "I'm sorry, but our tracking system is temporarily unavailable. 🛠️ Please try again in a few minutes!"
+ - Show empathy and provide clear next steps
+ - Never expose internal system architecture or technical issues to users
+ </Output Format>
+
+ <Citation Rules>
+ - Assign each unique source (URL or tool) a single citation number in your text.
+ - End with ### Sources listing each source/tool with corresponding numbers.
+ - **IMPORTANT**: Number sources sequentially without gaps (1,2,3,4...) in the final list, regardless of order.
+ - Example format:
+   [1] Sparrow Tool: track_package
+   [2] DHL Tracking Portal: https://www.dhl.com/track
+   [3] Sparrow Tool: generate_report
+ </Citation Rules>
+
+ Critical Reminder: It is extremely important that any information relevant to the user's parcel request (status codes, ETAs, delivery limits, customs rules, courier policies, user history, generated reports) is preserved verbatim. Do not rewrite or paraphrase this information — keep it intact.
+
+ """
+
+ compress_execution_human_message = """All above messages are about parcel/tracking research conducted by Sparrow's AI Operations Assistant for the following shipment request:
+
+ SHIPMENT REQUEST: {shipment_request}
+
+ Your task is to clean up these logistics findings while preserving ALL information that is relevant to answering this specific shipment or tracking question.
+
+ CRITICAL REQUIREMENTS:
+ - DO NOT summarize or paraphrase the information — preserve it verbatim
+ - DO NOT lose any shipment details, tracking events, carrier names, numbers, or service constraints
+ - DO NOT filter out information that seems relevant to the shipment request
+ - Organize the information in a cleaner format but keep all the substance
+ - Include ALL sources and citations found during research
+ - Remember this research was conducted to answer the specific shipment/tracking request above
+
+ The cleaned findings will be used for final report generation, so comprehensiveness is critical."""
+
+ """Prompt templates for the deep research system.
+
+ This module contains all prompt templates used across the research workflow components,
+ including user clarification, research brief generation, and report synthesis.
+ """
+
+
+
+ execution_agent_prompt = """You are Sparrow, a friendly and helpful parcel operations assistant. Today's date is {date}.
+
+ <Personality>
+ You are warm, conversational, and genuinely helpful. You communicate like a knowledgeable friend who's excited to help users with their parcels. Use:
+ - Friendly, casual language ("Let me check that for you!", "Here's what I found!")
+ - Empathy and understanding ("I know waiting for packages can be frustrating")
+ - Clear, straightforward explanations without jargon
+ - Positive, reassuring tone
+ - Appropriate emojis to add warmth (📦, 🚚, ✅, 📍)
+ </Personality>
+
+ <Task>
+ Your job is to gather and verify information needed to process, track, or consolidate parcels. Goals include: interpreting tracking events and ETAs, checking carrier service availability/alerts, comparing delivery options or confirming packaging/size limits, and retrieving user-specific shipment details.
+ </Task>
+
+ <Available Tools>
+ 1. **think_tool(reflection: str)**: Summarize findings, note gaps, and plan next steps. Must always be called after any other tool call.
+ 2. **track_package(tracking_number: str)**: Tracks parcels using a tracking number.
+ 3. **estimated_time_analysis(distance_km: float, courier_experience_yrs: float, vehicle_type: str, weather: str, time_of_day: str, traffic_level: str)**: Estimates delivery time using an ML model based on delivery parameters.
+ - distance_km: Distance in kilometers (required)
+ - courier_experience_yrs: Experience in years (default: 2.0)
+ - vehicle_type: 'Scooter', 'Pickup Truck', or 'Motorcycle' (default: 'Scooter')
+ - weather: 'Sunny', 'Rainy', 'Foggy', 'Snowy', or 'Windy' (default: 'Sunny')
+ - time_of_day: 'Morning', 'Afternoon', 'Evening', or 'Night' (default: 'Morning')
+ - traffic_level: 'Low', 'Medium', or 'High' (default: 'Medium')
+
+ **CRITICAL RULES:**
+ - Only call a tool if it is absolutely required to resolve the user’s request.
+ - Always provide all required and correct arguments when calling a tool.
+ - Never call a tool with empty, null, or placeholder arguments.
+ - Never answer a question directly with text if a relevant tool exists and required arguments are available.
+ - Always call `think_tool` immediately after any non-think_tool call, with a `reflection` explaining:
+ - What information was obtained
+ - What is missing
+ - Whether another tool call is justified
+ </Available Tools>
+
+ <Instructions>
+ 1. **Understand the request** – Determine the exact outcome needed (parcel status, ETA, delivery estimates, etc.).
+ 2. **Decide tool usage** – Only use a tool if required; otherwise, explain what information is missing to proceed.
+ 3. **Call tools properly** – Provide the correct arguments, do not assume or fabricate values. For `estimated_time_analysis`, the minimum required argument is `distance_km`.
+ 4. **Think after each tool** – Use `think_tool` to reflect on results before any further tool calls or final answers.
+ 5. **Stop when resolved** – Once sufficient data is available, provide a concise, actionable operational response.
+ 6. **Handle missing details** – If critical information is missing (tracking number, distance, etc.), explicitly state it and do not call a tool.
+ </Instructions>
+
+ <Hard Limits>
+ - Simple requests: maximum 2–3 tool calls (including `think_tool`).
+ - Complex requests: maximum 5 tool calls.
+ - Stop immediately if tool calls return duplicate or unnecessary information, or the limit is reached without full resolution.
+ </Hard Limits>
+
+ <Final Output Guidance>
+ - Be friendly and conversational in your responses
+ - Use clear, simple language that anyone can understand
+ - Include sources in a natural way (e.g., "According to our tracking system...")
+ - Provide helpful, actionable information
+ - Show empathy if there are delays or issues ("I understand this might be frustrating...")
+ - Be encouraging and positive when sharing good news ("Great news!" "Your package is on its way!")
+ - If unable to complete, politely explain what information you need ("To help you better, I'll need...")
+ - End with an offer to help further ("Is there anything else I can help you with?")
+ - Use appropriate emojis to enhance friendliness (📦 for packages, 🚚 for delivery, ✅ for success, 📍 for location)
+ - Never generate free-text answers instead of calling a tool when the tool is required and arguments are available.
+
+ **IMPORTANT - Handling System Errors:**
+ If the `track_package` tool returns error messages indicating:
+ - "MongoDB is not configured"
+ - "Failed to connect to MongoDB"
+ - "MongoDB operation failed"
+ - "Unable to connect"
+ - "Connection refused"
+ - "Timeout"
+ - Any database/system errors
+
+ You MUST respond with a friendly system unavailable message like:
+ - "I'm really sorry, but our tracking system seems to be temporarily unavailable right now. 😔 Could you please try again in a few minutes? If the issue persists, feel free to reach out to our support team!"
+ - "Oops! It looks like our system is taking a little break. 🛠️ Please try checking your package again in a few moments. Sorry for the inconvenience!"
+ - "I apologize, but I'm having trouble connecting to our tracking database at the moment. ⚠️ This is usually temporary. Please try again shortly, or contact our support team if you need immediate assistance!"
+
+ DO NOT:
+ - Share technical error details with the user
+ - Mention MongoDB, database, or internal system names
+ - Make the user feel like it's their fault
+ - Leave them without any guidance
+
+ DO:
+ - Apologize sincerely
+ - Explain the system is temporarily unavailable
+ - Ask them to try again later
+ - Offer alternative support options
+ - Use appropriate emojis (😔, 🛠️, ⚠️) to soften the message
+ """
+
+ master_agent_prompt = """
+ You are a master agent for a logistics AI system, responsible for task delegation and coordination. Your job is to analyze logistics queries, break them into subtasks, and assign them to executor agents for parallel execution. Today's date is {date}.
+
+ <Task>
+ Analyze the logistics query and split it into the minimum number of necessary subtasks for efficient parallel execution. Return subtasks as a comma-separated string. Call the "ExecuteLogisticsTask" tool for each subtask to delegate to executor agents. When satisfied with the results, call the "LogisticsComplete" tool to indicate completion.
+ </Task>
+
+ <Available Tools>
+ 1. **think_tool(reflection: str)**: Summarize findings, note gaps, and plan next steps. Must always be called after any other tool call.
+ 2. **track_package(tracking_number: str)**: Tracks parcels using a tracking number.
+ 3. **estimated_time_analysis(distance_km: float, ...)**: Estimates delivery time using an ML model based on distance and delivery parameters.
+
+ **CRITICAL**: Use think_tool before ExecuteLogisticsTask to plan subtasks and after each task to evaluate results. Assign up to {max_concurrent_logistics_units} parallel subtasks per iteration for efficiency.
+ </Available Tools>
+
+ <Instructions>
+ Act as a logistics manager with limited resources. Follow these steps:
+ 1. **Analyze the query** - Identify specific logistics needs (e.g., routing, inventory, scheduling).
+ 2. **Break into subtasks** - Decompose the query into clear, independent subtasks for parallel execution. Return as a comma-separated string.
+ 3. **Delegate subtasks** - Use ExecuteLogisticsTask for each subtask.
+ 4. **Assess progress** - After each ExecuteLogisticsTask, use think_tool to evaluate results and decide next steps.
+ </Instructions>
+
+ <Hard Limits>
+ - **Bias towards minimal subtasks** - Use a single executor unless the query clearly benefits from parallelization.
+ - **Stop when sufficient** - Call LogisticsComplete when the query is resolved adequately.
+ </Hard Limits>
+
+ <Show Your Thinking>
+ Before ExecuteLogisticsTask:
+ - Use think_tool to plan: Can the query be split into independent subtasks (e.g., check inventory, optimize route, schedule delivery)? List as a comma-separated string.
+
+ After ExecuteLogisticsTask:
+ - Use think_tool to analyze: What was achieved? What's missing? Is the query resolved? Should more subtasks be delegated, or is LogisticsComplete appropriate?
+ </Show Your Thinking>
+
+ <Scaling Rules>
+ - **Simple queries** (e.g., check single item inventory): Use 1 executor.
+ - Example: "Check stock for item X in warehouse Y" → 1 subtask.
+ - **Complex logistics queries**: Assign one executor per distinct subtask.
+ - Example: "Plan delivery from warehouse A to cities B, C, D" → 3 subtasks (one per city).
+ - **Subtask design**: Ensure subtasks are clear, non-overlapping, and standalone. Provide complete instructions for each ExecuteLogisticsTask call.
+ - **Note**: A separate agent will compile the final logistics plan; focus on gathering comprehensive data.
+ </Scaling Rules>
+
+ """
+
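The templates above carry `str.format`-style placeholders such as `{messages}`, `{date}`, and `{max_concurrent_logistics_units}`. A sketch of how they are presumably filled at call time (the trimmed stand-in template below is illustrative, not the full text from the diff):

```python
from datetime import datetime

# Trimmed stand-in for one of the prompt templates above
template = (
    "Messages so far:\n<Messages>\n{messages}\n</Messages>\n"
    "Today's date is {date}."
)

prompt = template.format(
    messages="User: Where is my parcel TRK123?",
    date=datetime(2024, 1, 5).strftime("%a %b %d, %Y"),
)
print(prompt)
```

Note that any literal braces in a template would need doubling (`{{`/`}}`) for `str.format` to pass them through unchanged.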
src/utils/utils.py ADDED
@@ -0,0 +1,364 @@
+ from pathlib import Path
+ from datetime import datetime
+ from typing_extensions import Annotated, List, Literal
+ import os
+ import logging
+ import requests
+ from typing import Optional, Dict, Any
+ from pymongo import MongoClient
+ from pymongo.errors import ConnectionFailure, OperationFailure
+
+ from langchain_core.messages import HumanMessage
+ from langchain_core.tools import tool, InjectedToolArg
+
+ from src.llms.groqllm import GroqLLM
+
+ # Configure logging
+ logger = logging.getLogger(__name__)
+
+ # Configuration for MongoDB
+ MONGODB_URI = os.getenv("MONGODB_URI", None)
+ MONGODB_DATABASE = os.getenv("MONGODB_DATABASE", "parcel_tracking")
+ MONGODB_COLLECTION = os.getenv("MONGODB_COLLECTION", "parcels")
+ MONGODB_TIMEOUT = int(os.getenv("MONGODB_TIMEOUT", "5000"))  # milliseconds
+
+ # Configuration for ETA API
+ ETA_API_BASE_URL = os.getenv("ETA_API_BASE_URL", "http://localhost:8000")
+ ETA_API_TIMEOUT = int(os.getenv("ETA_API_TIMEOUT", "10"))  # seconds
+
+ # MongoDB client singleton
+ _mongo_client = None
+
+ def get_mongo_client():
+     """Get or create the MongoDB client singleton."""
+     global _mongo_client
+
+     if _mongo_client is None:
+         if not MONGODB_URI:
+             raise ValueError("MONGODB_URI environment variable is not set")
+
+         try:
+             _mongo_client = MongoClient(
+                 MONGODB_URI,
+                 serverSelectionTimeoutMS=MONGODB_TIMEOUT,
+                 connectTimeoutMS=MONGODB_TIMEOUT
+             )
+             # Test connection
+             _mongo_client.admin.command('ping')
+             logger.info("MongoDB connection established successfully")
+         except Exception as e:
+             _mongo_client = None  # reset so the next call can retry instead of reusing a dead client
+             logger.error(f"Failed to connect to MongoDB: {e}")
+             raise
+
+     return _mongo_client
+
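The function above uses a lazy module-level singleton: the first call constructs the client, every later call returns the cached instance. A standalone sketch of that pattern with a stand-in class (`FakeClient` and `get_client` are hypothetical names for illustration):

```python
_client = None

class FakeClient:
    """Hypothetical stand-in for pymongo.MongoClient."""

def get_client():
    global _client
    if _client is None:
        _client = FakeClient()  # constructed only on the first call
    return _client

print(get_client() is get_client())  # True: later calls reuse the cached instance
```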
+ def get_today_str() -> str:
+     """Get the current date in a human-readable format."""
+     return datetime.now().strftime("%a %b %d, %Y")
+
+ groq = GroqLLM()
+
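A quick check of the `strftime` directives used by `get_today_str` (`%a` abbreviated weekday, `%b` abbreviated month, `%d` zero-padded day), shown with a fixed date so the output is deterministic:

```python
from datetime import datetime

# Same format string as get_today_str, applied to a known date
print(datetime(2024, 1, 5).strftime("%a %b %d, %Y"))  # Fri Jan 05, 2024
```

(Weekday and month abbreviations follow the active locale; the output above assumes the default C/English locale.)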
+ @tool
+ def think_tool(reflection: str) -> str:
+     """
+     Tool for strategic reflection on execution progress and decision-making.
+
+     Use this tool after each search to analyze results and plan next steps systematically.
+     This creates a deliberate pause in the customer query execution workflow for quality decision-making.
+
+     When to use:
+     - After receiving search results: What key information did I find?
+     - Before deciding next steps: Do I have enough to answer comprehensively?
+     - When assessing execution gaps: What specific information am I still missing?
+     - Before concluding execution: Can I provide a complete answer now?
+
+     Reflection should address:
+     1. Analysis of current findings - What concrete information have I gathered?
+     2. Gap assessment - What crucial execution steps or information are still missing?
+     3. Quality evaluation - Do I have sufficient evidence/examples for a good answer?
+     4. Strategic decision - Should I continue execution or provide my output?
+
+     Args:
+         reflection: Your detailed reflection on the execution progress, findings, gaps, and next steps
+
+     Returns:
+         Confirmation that the reflection was recorded for decision-making
+     """
+     return f"Reflection recorded: {reflection}"
+
+
+ @tool(description="Track parcel based on tracking number")
+ def track_package(tracking_number: str) -> str:
+     """
+     Tool for tracking customer packages/parcels from the MongoDB database.
+
+     This tool retrieves real-time parcel tracking information from the MongoDB database.
+     It fetches details such as current status, location, delivery ETA, sender/recipient info,
+     and tracking history.
+
+     Args:
+         tracking_number (str): The unique tracking number of the parcel
+
+     Returns:
+         A string describing the parcel status, location, history, and other relevant details
+     """
+     logger.info(f"Tracking parcel: {tracking_number}")
+
+     try:
+         # Check if MongoDB is configured
+         if not MONGODB_URI:
+             error_msg = "MongoDB is not configured. Please set the MONGODB_URI environment variable."
+             logger.error(error_msg)
+             return error_msg
+
+         # Get MongoDB client and collection
+         client = get_mongo_client()
+         db = client[MONGODB_DATABASE]
+         collection = db[MONGODB_COLLECTION]
+
+         # Query for the tracking number
+         logger.debug(f"Querying MongoDB for tracking_number: {tracking_number}")
+         parcel = collection.find_one({"tracking_number": tracking_number})
+
+         if not parcel:
+             # Try a case-insensitive search; escape the input so regex
+             # metacharacters in the tracking number are matched literally
+             import re
+             parcel = collection.find_one(
+                 {"tracking_number": {"$regex": f"^{re.escape(tracking_number)}$", "$options": "i"}}
+             )
+
+         if not parcel:
+             logger.warning(f"Tracking number not found: {tracking_number}")
+             return f"Tracking number '{tracking_number}' not found in the system. Please verify the tracking number and try again."
+
+         # Format the response with all available information
+         response_parts = []
+         response_parts.append(f"📦 Parcel Tracking Information for {tracking_number}")
+         response_parts.append("=" * 50)
+
+         # Basic Information
+         if parcel.get("status"):
+             response_parts.append(f"\n🔹 Status: {parcel['status']}")
+
+         if parcel.get("current_location"):
+             response_parts.append(f"🔹 Current Location: {parcel['current_location']}")
+
+         # Delivery Information
+         if parcel.get("estimated_delivery"):
+             response_parts.append(f"🔹 Estimated Delivery: {parcel['estimated_delivery']}")
+
+         if parcel.get("actual_delivery_date"):
+             response_parts.append(f"🔹 Delivered On: {parcel['actual_delivery_date']}")
+
+         # Sender and Recipient Information
+         if parcel.get("sender"):
+             sender = parcel["sender"]
+             if isinstance(sender, dict):
+                 sender_info = f"{sender.get('name', 'N/A')}"
+                 if sender.get('address'):
+                     sender_info += f" ({sender['address']})"
+                 response_parts.append(f"\n🔹 Sender: {sender_info}")
+             else:
+                 response_parts.append(f"\n🔹 Sender: {sender}")
+
+         if parcel.get("recipient"):
+             recipient = parcel["recipient"]
+             if isinstance(recipient, dict):
+                 recipient_info = f"{recipient.get('name', 'N/A')}"
+                 if recipient.get('address'):
+                     recipient_info += f" ({recipient['address']})"
+                 response_parts.append(f"🔹 Recipient: {recipient_info}")
+             else:
+                 response_parts.append(f"🔹 Recipient: {recipient}")
+
+         # Package Details
+         if parcel.get("weight"):
+             response_parts.append(f"\n🔹 Weight: {parcel['weight']}")
+
+         if parcel.get("dimensions"):
+             response_parts.append(f"🔹 Dimensions: {parcel['dimensions']}")
+
+         if parcel.get("description"):
+             response_parts.append(f"🔹 Description: {parcel['description']}")
+
+         # Courier Information
+         if parcel.get("courier_name"):
+             response_parts.append(f"\n🔹 Courier: {parcel['courier_name']}")
+
+         if parcel.get("vehicle_type"):
+             response_parts.append(f"🔹 Vehicle Type: {parcel['vehicle_type']}")
+
+         # Tracking History
+         if parcel.get("tracking_history"):
+             history = parcel["tracking_history"]
+             if isinstance(history, list) and len(history) > 0:
+                 response_parts.append("\n📍 Tracking History:")
+                 for idx, event in enumerate(history[-5:], 1):  # Show last 5 events
+                     if isinstance(event, dict):
+                         timestamp = event.get('timestamp', 'N/A')
+                         location = event.get('location', 'N/A')
+                         status = event.get('status', 'N/A')
+                         response_parts.append(f"  {idx}. [{timestamp}] {location} - {status}")
+                     else:
+                         response_parts.append(f"  {idx}. {event}")
+
+                 if len(history) > 5:
+                     response_parts.append(f"  ... and {len(history) - 5} more events")
+
+         # Additional Notes
+         if parcel.get("notes"):
+             response_parts.append(f"\n📝 Notes: {parcel['notes']}")
+
+         # Special Instructions
+         if parcel.get("special_instructions"):
+             response_parts.append(f"⚠️ Special Instructions: {parcel['special_instructions']}")
+
+         # Shipping Method
+         if parcel.get("shipping_method"):
+             response_parts.append(f"\n🔹 Shipping Method: {parcel['shipping_method']}")
+
+         # Cost Information
+         if parcel.get("shipping_cost"):
+             response_parts.append(f"🔹 Shipping Cost: {parcel['shipping_cost']}")
+
+         response_text = "\n".join(response_parts)
+         logger.info(f"Successfully retrieved tracking info for: {tracking_number}")
+         return response_text
+
+     except ConnectionFailure as e:
+         error_msg = f"Failed to connect to MongoDB: {str(e)}. Please check your connection."
+         logger.error(error_msg)
+         return error_msg
+
+     except OperationFailure as e:
+         error_msg = f"MongoDB operation failed: {str(e)}. Please check your permissions."
+         logger.error(error_msg)
+         return error_msg
+
+     except Exception as e:
+         error_msg = f"Error tracking parcel '{tracking_number}': {str(e)}"
+         logger.error(error_msg, exc_info=True)
+         return error_msg
+
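The ETA tool below converts a predicted minute count into text such as "1 hour and 15 minutes". A standalone sketch of that conversion (the `format_minutes` helper name is hypothetical; the function body mirrors the formatting logic in the tool):

```python
def format_minutes(total_minutes):
    # Split a minute count into whole hours and leftover minutes
    hours = int(total_minutes // 60)
    minutes = int(total_minutes % 60)
    parts = []
    if hours > 0:
        parts.append(f"{hours} hour{'s' if hours > 1 else ''}")
    if minutes > 0:
        parts.append(f"{minutes} minute{'s' if minutes != 1 else ''}")
    return " and ".join(parts)

print(format_minutes(75.0))  # 1 hour and 15 minutes
print(format_minutes(60.0))  # 1 hour
```

Note the original logic produces an empty string for a prediction of exactly 0 minutes; callers relying on `time_str` should keep that edge case in mind.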
+ @tool(description="Estimate delivery time for a parcel based on delivery parameters.")
+ def estimated_time_analysis(
+     distance_km: float,
+     courier_experience_yrs: float = 2.0,
+     vehicle_type: str = "Scooter",
+     weather: str = "Sunny",
+     time_of_day: str = "Morning",
+     traffic_level: str = "Medium"
+ ) -> str:
+     """
+     Estimate delivery time for a parcel based on various delivery parameters.
+
+     This tool makes an API call to the ETA prediction service which uses a trained ML model
+     to predict delivery time based on distance, courier experience, vehicle type, weather,
+     time of day, and traffic conditions.
+
+     Args:
+         distance_km: Distance in kilometers (must be positive, max 1000km)
+         courier_experience_yrs: Courier experience in years (0-50, default: 2.0)
+         vehicle_type: Type of vehicle - one of: 'Scooter', 'Pickup Truck', 'Motorcycle' (default: 'Scooter')
+         weather: Weather condition - one of: 'Sunny', 'Rainy', 'Foggy', 'Snowy', 'Windy' (default: 'Sunny')
+         time_of_day: Time of day - one of: 'Morning', 'Afternoon', 'Evening', 'Night' (default: 'Morning')
+         traffic_level: Traffic level - one of: 'Low', 'Medium', 'High' (default: 'Medium')
+
+     Returns:
+         A string describing the estimated delivery time in minutes with the input parameters used.
+
+     Example:
+         estimated_time_analysis(distance_km=15.5, vehicle_type="Motorcycle", traffic_level="High")
+     """
+     logger.info(f"Estimating delivery time: distance={distance_km}km, vehicle={vehicle_type}, traffic={traffic_level}")
+
274
+ try:
275
+ # Prepare the request payload matching the API schema
276
+ payload = {
277
+ "Distance_km": distance_km,
278
+ "Courier_Experience_yrs": courier_experience_yrs,
279
+ "Vehicle_Type": vehicle_type,
280
+ "Weather": weather,
281
+ "Time_of_Day": time_of_day,
282
+ "Traffic_Level": traffic_level
283
+ }
284
+
285
+ # Make API call to ETA prediction service
286
+ api_url = f"{ETA_API_BASE_URL}/predict"
287
+ logger.debug(f"Calling ETA API: {api_url} with payload: {payload}")
288
+
289
+ response = requests.post(
290
+ api_url,
291
+ json=payload,
292
+ timeout=ETA_API_TIMEOUT
293
+ )
294
+
295
+ # Check if request was successful
296
+ response.raise_for_status()
297
+
298
+ # Parse response
299
+ result = response.json()
300
+ predicted_time = result.get("predicted_delivery_time")
301
+
302
+ if predicted_time is None:
303
+ raise ValueError("No prediction returned from API")
304
+
305
+ # Format response
306
+ hours = int(predicted_time // 60)
307
+ minutes = int(predicted_time % 60)
308
+
309
+ time_str = ""
310
+ if hours > 0:
311
+ time_str += f"{hours} hour{'s' if hours > 1 else ''}"
312
+ if minutes > 0:
313
+ if hours > 0:
314
+ time_str += " and "
315
+ time_str += f"{minutes} minute{'s' if minutes != 1 else ''}"
316
+
317
+ response_text = f"""Estimated Delivery Time Analysis:
318
+ - Predicted Time: {predicted_time:.1f} minutes ({time_str})
319
+ - Distance: {distance_km} km
320
+ - Vehicle Type: {vehicle_type}
321
+ - Courier Experience: {courier_experience_yrs} years
322
+ - Weather: {weather}
323
+ - Time of Day: {time_of_day}
324
+ - Traffic Level: {traffic_level}
325
+
326
+ This prediction is based on a trained machine learning model considering all the above factors."""
327
+
328
+ logger.info(f"ETA prediction successful: {predicted_time:.1f} minutes")
329
+ return response_text
330
+
331
+ except requests.exceptions.ConnectionError:
332
+ error_msg = f"Unable to connect to ETA prediction service at {ETA_API_BASE_URL}. Please ensure the service is running."
333
+ logger.error(error_msg)
334
+ return error_msg
335
+
336
+ except requests.exceptions.Timeout:
337
+ error_msg = f"ETA prediction service timed out after {ETA_API_TIMEOUT} seconds."
338
+ logger.error(error_msg)
339
+ return error_msg
340
+
341
+ except requests.exceptions.HTTPError as e:
342
+ error_msg = f"ETA prediction service returned an error: {e.response.status_code} - {e.response.text}"
343
+ logger.error(error_msg)
344
+ return error_msg
345
+
346
+ except Exception as e:
347
+ error_msg = f"Error during ETA estimation: {str(e)}"
348
+ logger.error(error_msg, exc_info=True)
349
+ return error_msg
350
+
351
+ @tool
352
+ def conduct_execution(execution_jobs: str) -> str:
353
+ """
354
+ Tool for delegating an execution task to a specialized sub-agent.
355
+ """
356
+ return f"Delegated execution job: {execution_jobs}"
357
+
358
+
359
+ @tool
360
+ def execution_complete() -> str:
361
+ """
362
+ Tool for indicating the execution process is complete.
363
+ """
364
+ return "Execution complete."
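
Once the `@tool` decorator is set aside, the tools above are plain functions, so their payload construction and time formatting can be sanity-checked without MongoDB or the ETA service running. A minimal sketch of that logic as pure helpers (the names `build_eta_payload` and `format_minutes` are illustrative, not part of the committed code):

```python
# Sketch of the ETA payload schema and the hours/minutes wording used by
# estimated_time_analysis, factored out so it can run without the service.

def build_eta_payload(distance_km, courier_experience_yrs=2.0,
                      vehicle_type="Scooter", weather="Sunny",
                      time_of_day="Morning", traffic_level="Medium"):
    # Keys mirror the /predict request schema from the tool above.
    return {
        "Distance_km": distance_km,
        "Courier_Experience_yrs": courier_experience_yrs,
        "Vehicle_Type": vehicle_type,
        "Weather": weather,
        "Time_of_Day": time_of_day,
        "Traffic_Level": traffic_level,
    }

def format_minutes(predicted_time: float) -> str:
    # Same hours/minutes phrasing as the tool's response text.
    hours = int(predicted_time // 60)
    minutes = int(predicted_time % 60)
    parts = []
    if hours > 0:
        parts.append(f"{hours} hour{'s' if hours > 1 else ''}")
    if minutes > 0:
        parts.append(f"{minutes} minute{'s' if minutes != 1 else ''}")
    return " and ".join(parts)
```

For example, `format_minutes(75.0)` yields "1 hour and 15 minutes", matching the `time_str` the tool builds before calling the API.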
templates/index.html ADDED
@@ -0,0 +1,372 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>Sparrow Agent Chat</title>
+     <style>
+         * {
+             margin: 0;
+             padding: 0;
+             box-sizing: border-box;
+         }
+ 
+         body {
+             font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
+             background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+             min-height: 100vh;
+             display: flex;
+             justify-content: center;
+             align-items: center;
+         }
+ 
+         .chat-container {
+             background: white;
+             border-radius: 20px;
+             box-shadow: 0 20px 40px rgba(0,0,0,0.1);
+             width: 90%;
+             max-width: 800px;
+             height: 80vh;
+             display: flex;
+             flex-direction: column;
+             overflow: hidden;
+         }
+ 
+         .chat-header {
+             background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
+             color: white;
+             padding: 20px;
+             text-align: center;
+             position: relative;
+         }
+ 
+         .chat-header h1 {
+             font-size: 24px;
+             font-weight: 600;
+         }
+ 
+         .chat-header .subtitle {
+             opacity: 0.9;
+             font-size: 14px;
+             margin-top: 5px;
+         }
+ 
+         .chat-messages {
+             flex: 1;
+             overflow-y: auto;
+             padding: 20px;
+             background: #f8fafc;
+         }
+ 
+         .message {
+             margin-bottom: 20px;
+             display: flex;
+             animation: fadeInUp 0.3s ease;
+         }
+ 
+         @keyframes fadeInUp {
+             from {
+                 opacity: 0;
+                 transform: translateY(10px);
+             }
+             to {
+                 opacity: 1;
+                 transform: translateY(0);
+             }
+         }
+ 
+         .message.user {
+             justify-content: flex-end;
+         }
+ 
+         .message-content {
+             max-width: 70%;
+             padding: 15px 20px;
+             border-radius: 18px;
+             position: relative;
+         }
+ 
+         .message.user .message-content {
+             background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
+             color: white;
+             border-bottom-right-radius: 5px;
+         }
+ 
+         .message.ai .message-content {
+             background: white;
+             border: 1px solid #e2e8f0;
+             border-bottom-left-radius: 5px;
+             box-shadow: 0 2px 8px rgba(0,0,0,0.05);
+         }
+ 
+         .message-time {
+             font-size: 11px;
+             opacity: 0.6;
+             margin-top: 8px;
+         }
+ 
+         .chat-input-container {
+             padding: 20px;
+             background: white;
+             border-top: 1px solid #e2e8f0;
+         }
+ 
+         .chat-input-form {
+             display: flex;
+             gap: 12px;
+             align-items: flex-end;
+         }
+ 
+         .chat-input {
+             flex: 1;
+             border: 2px solid #e2e8f0;
+             border-radius: 12px;
+             padding: 15px 20px;
+             font-size: 16px;
+             resize: none;
+             min-height: 50px;
+             max-height: 120px;
+             transition: all 0.2s ease;
+         }
+ 
+         .chat-input:focus {
+             outline: none;
+             border-color: #4f46e5;
+             box-shadow: 0 0 0 3px rgba(79, 70, 229, 0.1);
+         }
+ 
+         .send-button {
+             background: linear-gradient(135deg, #4f46e5 0%, #7c3aed 100%);
+             color: white;
+             border: none;
+             border-radius: 12px;
+             padding: 15px 25px;
+             font-size: 16px;
+             font-weight: 600;
+             cursor: pointer;
+             transition: all 0.2s ease;
+             min-width: 80px;
+         }
+ 
+         .send-button:hover:not(:disabled) {
+             transform: translateY(-2px);
+             box-shadow: 0 8px 25px rgba(79, 70, 229, 0.3);
+         }
+ 
+         .send-button:disabled {
+             opacity: 0.6;
+             cursor: not-allowed;
+         }
+ 
+         .loading {
+             display: flex;
+             align-items: center;
+             gap: 10px;
+         }
+ 
+         .loading-dots {
+             display: flex;
+             gap: 4px;
+         }
+ 
+         .loading-dot {
+             width: 8px;
+             height: 8px;
+             border-radius: 50%;
+             background: #4f46e5;
+             animation: loadingPulse 1.4s infinite ease-in-out;
+         }
+ 
+         .loading-dot:nth-child(1) { animation-delay: -0.32s; }
+         .loading-dot:nth-child(2) { animation-delay: -0.16s; }
+ 
+         @keyframes loadingPulse {
+             0%, 80%, 100% {
+                 transform: scale(0.6);
+                 opacity: 0.5;
+             }
+             40% {
+                 transform: scale(1);
+                 opacity: 1;
+             }
+         }
+ 
+         .error-message {
+             background: #fee2e2;
+             border: 1px solid #fecaca;
+             color: #dc2626;
+             padding: 15px;
+             border-radius: 12px;
+             margin: 10px 0;
+         }
+ 
+         .status-info {
+             font-size: 12px;
+             color: #64748b;
+             margin-top: 5px;
+             font-style: italic;
+         }
+ 
+         /* Mobile responsive */
+         @media (max-width: 768px) {
+             .chat-container {
+                 width: 95%;
+                 height: 90vh;
+                 border-radius: 15px;
+             }
+ 
+             .message-content {
+                 max-width: 85%;
+             }
+ 
+             .chat-input-container {
+                 padding: 15px;
+             }
+         }
+     </style>
+ </head>
+ <body>
+     <div class="chat-container">
+         <div class="chat-header">
+             <h1>Sparrow Logistics Agent</h1>
+             <div class="subtitle">Your Intelligent Assistant</div>
+         </div>
+ 
+         <div class="chat-messages" id="messages">
+             <div class="message ai">
+                 <div class="message-content">
+                     <div>Hello! I'm Sparrow, your intelligent assistant. I can help you with various tasks and questions. What would you like to know or do today?</div>
+                     <div class="message-time">Just now</div>
+                 </div>
+             </div>
+         </div>
+ 
+         <div class="chat-input-container">
+             <form class="chat-input-form" id="chatForm">
+                 <textarea
+                     class="chat-input"
+                     id="messageInput"
+                     placeholder="Type your message here..."
+                     rows="1"
+                     required
+                 ></textarea>
+                 <button type="submit" class="send-button" id="sendButton">
+                     Send
+                 </button>
+             </form>
+         </div>
+     </div>
+ 
+     <script>
+         const messagesContainer = document.getElementById('messages');
+         const chatForm = document.getElementById('chatForm');
+         const messageInput = document.getElementById('messageInput');
+         const sendButton = document.getElementById('sendButton');
+ 
+         // Auto-resize textarea
+         messageInput.addEventListener('input', function() {
+             this.style.height = 'auto';
+             this.style.height = this.scrollHeight + 'px';
+         });
+ 
+         // Handle Enter key (Shift+Enter for new line)
+         messageInput.addEventListener('keydown', function(e) {
+             if (e.key === 'Enter' && !e.shiftKey) {
+                 e.preventDefault();
+                 chatForm.dispatchEvent(new Event('submit'));
+             }
+         });
+ 
+         function addMessage(content, isUser = false, showTime = true) {
+             const messageDiv = document.createElement('div');
+             messageDiv.className = `message ${isUser ? 'user' : 'ai'}`;
+ 
+             const currentTime = new Date().toLocaleTimeString([], {hour: '2-digit', minute: '2-digit'});
+ 
+             messageDiv.innerHTML = `
+                 <div class="message-content">
+                     <div>${content}</div>
+                     ${showTime ? `<div class="message-time">${currentTime}</div>` : ''}
+                 </div>
+             `;
+ 
+             messagesContainer.appendChild(messageDiv);
+             messagesContainer.scrollTop = messagesContainer.scrollHeight;
+ 
+             return messageDiv;
+         }
+ 
+         function showLoading() {
+             return addMessage(`
+                 <div class="loading">
+                     <span>Sparrow is thinking</span>
+                     <div class="loading-dots">
+                         <div class="loading-dot"></div>
+                         <div class="loading-dot"></div>
+                         <div class="loading-dot"></div>
+                     </div>
+                 </div>
+             `, false, false);
+         }
+ 
+         function setButtonState(loading) {
+             sendButton.disabled = loading;
+             sendButton.textContent = loading ? 'Sending...' : 'Send';
+             messageInput.disabled = loading;
+         }
+ 
+         chatForm.addEventListener('submit', async function(e) {
+             e.preventDefault();
+ 
+             const message = messageInput.value.trim();
+             if (!message) return;
+ 
+             // Add user message
+             addMessage(message, true);
+             messageInput.value = '';
+             messageInput.style.height = 'auto';
+ 
+             // Show loading
+             setButtonState(true);
+             const loadingMessage = showLoading();
+ 
+             try {
+                 const response = await fetch('/chat', {
+                     method: 'POST',
+                     headers: {
+                         'Content-Type': 'application/json',
+                     },
+                     body: JSON.stringify({ message: message })
+                 });
+ 
+                 const data = await response.json();
+ 
+                 // Remove loading message
+                 loadingMessage.remove();
+ 
+                 if (data.success) {
+                     // Add AI response
+                     let responseContent = data.response;
+                     if (data.status) {
+                         responseContent += `<div class="status-info">${data.status}</div>`;
+                     }
+                     addMessage(responseContent);
+                 } else {
+                     addMessage(`<div class="error-message">Sorry, I encountered an error: ${data.error}</div>`);
+                 }
+ 
+             } catch (error) {
+                 loadingMessage.remove();
+                 addMessage(`<div class="error-message">Sorry, I couldn't connect to the server. Please try again.</div>`);
+                 console.error('Error:', error);
+             } finally {
+                 setButtonState(false);
+                 messageInput.focus();
+             }
+         });
+ 
+         // Focus input on load
+         messageInput.focus();
+     </script>
+ </body>
+ </html>
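
The front-end above assumes a backend `/chat` route that accepts `{"message": ...}` and returns JSON with `success`, `response`, an optional `status`, or an `error` field. The server code is not part of this commit, so the following is only a hypothetical sketch of that response contract (the helper name `chat_result` is illustrative):

```python
# Hypothetical helper building the JSON body the template's fetch('/chat')
# handler expects; not part of the committed code.

def chat_result(agent_reply=None, error=None, status=None):
    # Error path: the page renders data.error inside an .error-message div.
    if error is not None:
        return {"success": False, "error": error}
    # Success path: data.response is rendered, data.status (if any) is
    # appended as a .status-info footer.
    body = {"success": True, "response": agent_reply}
    if status is not None:
        body["status"] = status
    return body
```

Whatever framework serves the route, keeping this shape stable is what lets the template distinguish a model answer from a server-side failure.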
uv.lock ADDED
The diff for this file is too large to render. See raw diff