import json
import os

from openai import AsyncOpenAI
from openai.types.chat.chat_completion_chunk import (
    ChoiceDeltaToolCall,
    ChoiceDeltaToolCallFunction,
)

from ._config import logger
from ._tool_call import ToolCall

AVAILABLE_FUNCTIONS = {
    # The "contact_us" tool is exposed to the model but dispatches to ToolCall.send_email.
    "contact_us": ToolCall.send_email.__name__,
    "report_missing_context": ToolCall.report_missing_context.__name__,
    "get_context_for_user_query": ToolCall.get_context_for_user_query.__name__,
}

INSTRUCTIONS = """
# Objective  
You are a Conversational AI assistant developed by **Sifars**, a web development company. Your responses **must** be **short (not more than 50 words), precise, and engaging**. You provide **clear, conversational answers** to queries about **Sifars and its services**.

# About Sifars  
Sifars is a pioneering web service provider founded in 2018 by **Jatin Sethi, Munish Kumar, and Sukhwinder Singh**. Headquartered in **Mohali, Punjab, India**, we specialize in **application development using Python, JavaScript, React, Node.js, and more**. We focus on **scalability, innovation, and work-life balance**.
Sifars specializes in building AI-powered web and mobile applications.

- **Email:** [contact@sifars.com](mailto:contact@sifars.com)  
- **Address:** D-234, Ground Floor, Phase 8B, Industrial Area, Sector 74, Sahibzada Ajit Singh Nagar, Punjab 160055, India  
- **Phone:** [+91 8106 455 950](tel:+918106455950), [+91 8008 296 463](tel:+918008296463), [+91 8896 720 000](tel:+918896720000)  
- **Careers:** [Explore opportunities](https://www.sifars.com/en/careers/)  

If a user wants to contact us, ask for their **name, email, phone number, and reason for contact**, then call the **"contact_us"** tool with these details.

# Rules  

### **Response Length & Style**  
- **All responses must be under 50 words.**  
- Responses **must** be **short, clear, and engaging** (two sentences per paragraph max).  
- **Each sentence must have fewer than 10 words.**  
- Avoid **repetition, filler words, or a salesy tone**.  
- Use **simple, varied language** in a **friendly, conversational tone**.  

### **Handling Queries**  

#### βœ… **How to Respond to Queries**  
1. **First, check if the chatbot can answer the query using the provided information.**  
   - If yes, **answer directly** without calling any tool.  
   - If not, proceed to Step 2.  
2. **If the context is not enough, and the query is about Sifars:**  
   - Call **"get_context_for_user_query"** to fetch relevant details, passing the correct query.  
   - Normalize the **query**. It may reference the chat history, so paraphrase it so it is understandable on its own, without the history.  
   - If the retrieved context is enough, **answer the question without calling another tool**.  
   - If still insufficient, proceed to Step 3.  
3. **If no sufficient information is available even after calling "get_context_for_user_query":**  
   - Collect the user’s **name, email, phone number, and reason for contact**.  
   - Call **"contact_us"** with these details.  
   - Respond: *"Your query has been forwarded to our team. They will reach out soon."*  
4. **DO NOT call "get_context_for_user_query" for general or casual conversations.**  

#### 🚨 **Handling Job Applications**  
- **If a user wants to apply for a job at Sifars:**  
  - Ask them to visit our **[Careers Page](https://www.sifars.com/en/careers/)**.  
  - Alternatively, they can **email their resume to [hr@sifars.com](mailto:hr@sifars.com)**.  
  - **DO NOT call "contact_us" or "get_context_for_user_query" for job application queries.**  

#### πŸ“Œ **Examples of Proper Handling**  
βœ… **User:** "Who are you?"  
βœ… **Response:** "I am a chatbot developed by Sifars to assist users."  

βœ… **User:** "What services does Sifars offer?"  
βœ… **Response:** "We offer web and application development using Python, JavaScript, React, and more."  

βœ… **User:** "Where is Sifars located?"  
βœ… **Response:** "Sifars is headquartered in Mohali, Punjab, India."  

βœ… **User:** "How can I apply for a job at Sifars?"  
βœ… **Response:** "You can check our openings at [Careers Page](https://www.sifars.com/en/careers/) or email your resume to [hr@sifars.com](mailto:hr@sifars.com)."  

❌ **User:** "What is Sifars' refund policy?" (No info in context)  
βœ… **Action:**  
1. **Check if existing context provides an answer.**  
2. **If not, call "get_context_for_user_query"** to fetch more details.  
3. **If still insufficient, collect the user’s details (name, email, phone number, and reason for contact).**  
4. **Call "contact_us" with these details.**  
5. **Respond:** *"Your query has been forwarded to our team. They will reach out soon."*  

### **Tool Usage**  
- **Tools should only be called if absolutely necessary.**  
- **Always check if the chatbot can answer the question first.**  
- **"get_context_for_user_query" must not be called for general or casual conversations.**
"""

CONTEXT_EXTRACTING_TOOL = {
    "type": "function",
    "function": {
        "name": "get_context_for_user_query",
        "description": "Extract context from the database relevant to the user query",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The query to answer. It may reference the chat history, so paraphrase it so it is understandable on its own, without the history."
                },
            },
            "required": ["query"]
        }
    }
}

CONTACT_TOOL = {
    "type": "function",
    "function": {
        "name": "contact_us",
        "description": "Collect the information provided by the user in the chat so the Sifars team can contact them",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "The user's name, collected in the chat"
                },
                "email": {
                    "type": "string",
                    "description": "The user's email address, collected in the chat"
                },
                "phone_number": {
                    "type": "string",
                    "description": "The user's phone number, collected in the chat"
                },
                "reason_for_contact": {
                    "type": "string",
                    "description": "A paragraph with the user's reason for contact, collected from the chat. This is separate from the subject of the email. Keep it exactly as the user gave it; do not change or paraphrase it."
                },
                "subject": {
                    "type": "string",
                    "description": "The subject of the email, which you will construct dynamically from the reason for contact. It is mandatory that you do not mention this field to the user."
                }
            },
            "required": ["name", "email", "phone_number","reason_for_contact", "subject"]
        }
    }
}

REPORT_MISSING_CONTEXT_TOOL = {
    "type": "function",
    "function": {
        "name": "report_missing_context",
        "description": "Report a query with missing context to the Sifars team",
        "parameters": {
            "type": "object",
            "properties": {
                "unresolved_query": {
                    "type": "string",
                    "description": "The query you could not answer because the available context was not sufficient."
                },
            },
            "required": ["unresolved_query"]
        }
    }
}


class ChatClient:
    def __init__(
        self,
        model: str = os.getenv("OPENAI_MODEL"),  # NOTE: resolved once, at import time
        max_tokens: int = 4096,
        stream: bool = True,
        system_message: str = INSTRUCTIONS,
    ):
        self.client = AsyncOpenAI(
            base_url=os.getenv("OPENAI_API_BASE_URL"),
            api_key=os.getenv("OPENAI_API_KEY"),
        )
        self.model = model
        self.max_tokens = max_tokens
        self.stream = stream
        self.system_message = system_message

    async def __aenter__(self):
        return self
    
    async def __aexit__(self, exc_type, exc, traceback):
        pass

    async def create_chat_completions(
        self,
        messages: list,
        model: str = os.getenv("OPENAI_MODEL"),
    ):
        logger.info("Calling chat completions API...")
        response = await self.client.chat.completions.create(
            messages=[
                {"role": "system", "content": self.system_message},
                *messages
            ],
            model=model,
            max_tokens=self.max_tokens,
            stream=self.stream,
            temperature=0.7,
            tools=[CONTACT_TOOL, REPORT_MISSING_CONTEXT_TOOL, CONTEXT_EXTRACTING_TOOL],
            tool_choice="auto",
        )
        logger.info("Chat completions API call succeeded.")

        # Streamed tool-call fragments, accumulated per tool_call.index
        tool_calls_by_index = {}

        async for chunk in response:
            delta = chunk.choices[0].delta
            if delta and delta.content:
                yield delta.content
            elif delta and delta.tool_calls:
                for tool_call in delta.tool_calls:
                    if tool_call.index not in tool_calls_by_index:
                        tool_calls_by_index[tool_call.index] = {
                            'id': tool_call.id,
                            'type': tool_call.type,
                            'function': {
                                'name': tool_call.function.name,
                                'arguments': ''
                            }
                        }
                    
                    current_call = tool_calls_by_index[tool_call.index]
                    
                    if tool_call.function.arguments:
                        current_call['function']['arguments'] += tool_call.function.arguments

        if tool_calls_by_index:
            complete_tool_calls = [
                ChoiceDeltaToolCall(
                    index=idx,
                    id=call['id'],
                    type=call['type'],
                    function=ChoiceDeltaToolCallFunction(
                        name=call['function']['name'],
                        arguments=call['function']['arguments']
                    )
                )
                for idx, call in tool_calls_by_index.items()
            ]
            
            tool_calls_output = await self._handle_required_action(
                tool_calls=complete_tool_calls,
            )
            # Serialize the tool calls into plain message dicts (the delta
            # objects carry an extra `index` field the messages API does not expect).
            messages.append({
                "role": "assistant",
                "tool_calls": [
                    {
                        "id": tc.id,
                        "type": tc.type,
                        "function": {
                            "name": tc.function.name,
                            "arguments": tc.function.arguments,
                        },
                    }
                    for tc in complete_tool_calls
                ],
            })
            messages.extend(tool_calls_output)
            async for chunk in self.create_chat_completions(messages=messages):
                yield chunk

    async def _handle_required_action(self, tool_calls: list[ChoiceDeltaToolCall]):
        tool_calls_output = []
        for tool in tool_calls:
            if tool.type == "function":
                try:
                    function_to_call = AVAILABLE_FUNCTIONS[tool.function.name]
                    function_arguments = json.loads(str(tool.function.arguments)) if tool.function.arguments else {}
                    logger.info(f"Calling tool: {function_to_call}")
                    async with ToolCall() as tool_call:
                        function_response = await getattr(tool_call, function_to_call)(
                            function_arguments
                        )
                    logger.info("Tool call completed.")
                except Exception as e:
                    logger.error(e)
                    function_response = "Unable to call the tool."
                tool_calls_output.append(
                    {
                        "role": "tool",
                        "tool_call_id": tool.id,
                        "name": tool.function.name,
                        "content": (
                            str(function_response)
                            if function_response
                            else "No results found."
                        ),
                    }
                )
        return tool_calls_output
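
# For reference, the streaming accumulation logic in create_chat_completions
# can be sketched in isolation. The snippet below uses plain dicts as
# hypothetical stand-ins for the SDK's delta objects, and a helper name
# (merge_tool_call_deltas) that does not exist in this module; it only
# illustrates how fragments sharing an `index` merge into one complete call.

```python
def merge_tool_call_deltas(deltas: list[dict]) -> dict:
    """Merge streamed tool-call fragments into complete calls, keyed by index."""
    calls = {}
    for d in deltas:
        idx = d["index"]
        if idx not in calls:
            # First fragment for this index carries the id, type, and name.
            calls[idx] = {
                "id": d.get("id"),
                "type": d.get("type"),
                "function": {"name": d["function"].get("name"), "arguments": ""},
            }
        # Later fragments append pieces of the JSON arguments string.
        if d["function"].get("arguments"):
            calls[idx]["function"]["arguments"] += d["function"]["arguments"]
    return calls

# Hypothetical stream: one tool call whose arguments arrive in two pieces.
deltas = [
    {"index": 0, "id": "call_1", "type": "function",
     "function": {"name": "contact_us", "arguments": ""}},
    {"index": 0, "function": {"arguments": '{"name": "A'}},
    {"index": 0, "function": {"arguments": 'da"}'}},
]
merged = merge_tool_call_deltas(deltas)
# merged[0]["function"]["arguments"] == '{"name": "Ada"}'
```

# Once merged, json.loads on the accumulated `arguments` string yields the
# keyword payload handed to the tool, as _handle_required_action does above.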