abdo-Mansour committed on
Commit b8ee1a5 · 1 Parent(s): f08d5a9
Solution/.gitignore ADDED
@@ -0,0 +1,3 @@
+ .env
+ __pycache__
+ *.txt
Solution/.python-version ADDED
@@ -0,0 +1 @@
+ 3.11
Solution/LICENSE ADDED
@@ -0,0 +1,373 @@
+ Mozilla Public License Version 2.0
+ ==================================
+
+ 1. Definitions
+ --------------
+
+ 1.1. "Contributor"
+     means each individual or legal entity that creates, contributes to
+     the creation of, or owns Covered Software.
+
+ 1.2. "Contributor Version"
+     means the combination of the Contributions of others (if any) used
+     by a Contributor and that particular Contributor's Contribution.
+
+ 1.3. "Contribution"
+     means Covered Software of a particular Contributor.
+
+ 1.4. "Covered Software"
+     means Source Code Form to which the initial Contributor has attached
+     the notice in Exhibit A, the Executable Form of such Source Code
+     Form, and Modifications of such Source Code Form, in each case
+     including portions thereof.
+
+ 1.5. "Incompatible With Secondary Licenses"
+     means
+
+     (a) that the initial Contributor has attached the notice described
+         in Exhibit B to the Covered Software; or
+
+     (b) that the Covered Software was made available under the terms of
+         version 1.1 or earlier of the License, but not also under the
+         terms of a Secondary License.
+
+ 1.6. "Executable Form"
+     means any form of the work other than Source Code Form.
+
+ 1.7. "Larger Work"
+     means a work that combines Covered Software with other material, in
+     a separate file or files, that is not Covered Software.
+
+ 1.8. "License"
+     means this document.
+
+ 1.9. "Licensable"
+     means having the right to grant, to the maximum extent possible,
+     whether at the time of the initial grant or subsequently, any and
+     all of the rights conveyed by this License.
+
+ 1.10. "Modifications"
+     means any of the following:
+
+     (a) any file in Source Code Form that results from an addition to,
+         deletion from, or modification of the contents of Covered
+         Software; or
+
+     (b) any new file in Source Code Form that contains any Covered
+         Software.
+
+ 1.11. "Patent Claims" of a Contributor
+     means any patent claim(s), including without limitation, method,
+     process, and apparatus claims, in any patent Licensable by such
+     Contributor that would be infringed, but for the grant of the
+     License, by the making, using, selling, offering for sale, having
+     made, import, or transfer of either its Contributions or its
+     Contributor Version.
+
+ 1.12. "Secondary License"
+     means either the GNU General Public License, Version 2.0, the GNU
+     Lesser General Public License, Version 2.1, the GNU Affero General
+     Public License, Version 3.0, or any later versions of those
+     licenses.
+
+ 1.13. "Source Code Form"
+     means the form of the work preferred for making modifications.
+
+ 1.14. "You" (or "Your")
+     means an individual or a legal entity exercising rights under this
+     License. For legal entities, "You" includes any entity that
+     controls, is controlled by, or is under common control with You. For
+     purposes of this definition, "control" means (a) the power, direct
+     or indirect, to cause the direction or management of such entity,
+     whether by contract or otherwise, or (b) ownership of more than
+     fifty percent (50%) of the outstanding shares or beneficial
+     ownership of such entity.
+
+ 2. License Grants and Conditions
+ --------------------------------
+
+ 2.1. Grants
+
+ Each Contributor hereby grants You a world-wide, royalty-free,
+ non-exclusive license:
+
+ (a) under intellectual property rights (other than patent or trademark)
+     Licensable by such Contributor to use, reproduce, make available,
+     modify, display, perform, distribute, and otherwise exploit its
+     Contributions, either on an unmodified basis, with Modifications, or
+     as part of a Larger Work; and
+
+ (b) under Patent Claims of such Contributor to make, use, sell, offer
+     for sale, have made, import, and otherwise transfer either its
+     Contributions or its Contributor Version.
+
+ 2.2. Effective Date
+
+ The licenses granted in Section 2.1 with respect to any Contribution
+ become effective for each Contribution on the date the Contributor first
+ distributes such Contribution.
+
+ 2.3. Limitations on Grant Scope
+
+ The licenses granted in this Section 2 are the only rights granted under
+ this License. No additional rights or licenses will be implied from the
+ distribution or licensing of Covered Software under this License.
+ Notwithstanding Section 2.1(b) above, no patent license is granted by a
+ Contributor:
+
+ (a) for any code that a Contributor has removed from Covered Software;
+     or
+
+ (b) for infringements caused by: (i) Your and any other third party's
+     modifications of Covered Software, or (ii) the combination of its
+     Contributions with other software (except as part of its Contributor
+     Version); or
+
+ (c) under Patent Claims infringed by Covered Software in the absence of
+     its Contributions.
+
+ This License does not grant any rights in the trademarks, service marks,
+ or logos of any Contributor (except as may be necessary to comply with
+ the notice requirements in Section 3.4).
+
+ 2.4. Subsequent Licenses
+
+ No Contributor makes additional grants as a result of Your choice to
+ distribute the Covered Software under a subsequent version of this
+ License (see Section 10.2) or under the terms of a Secondary License (if
+ permitted under the terms of Section 3.3).
+
+ 2.5. Representation
+
+ Each Contributor represents that the Contributor believes its
+ Contributions are its original creation(s) or it has sufficient rights
+ to grant the rights to its Contributions conveyed by this License.
+
+ 2.6. Fair Use
+
+ This License is not intended to limit any rights You have under
+ applicable copyright doctrines of fair use, fair dealing, or other
+ equivalents.
+
+ 2.7. Conditions
+
+ Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
+ in Section 2.1.
+
+ 3. Responsibilities
+ -------------------
+
+ 3.1. Distribution of Source Form
+
+ All distribution of Covered Software in Source Code Form, including any
+ Modifications that You create or to which You contribute, must be under
+ the terms of this License. You must inform recipients that the Source
+ Code Form of the Covered Software is governed by the terms of this
+ License, and how they can obtain a copy of this License. You may not
+ attempt to alter or restrict the recipients' rights in the Source Code
+ Form.
+
+ 3.2. Distribution of Executable Form
+
+ If You distribute Covered Software in Executable Form then:
+
+ (a) such Covered Software must also be made available in Source Code
+     Form, as described in Section 3.1, and You must inform recipients of
+     the Executable Form how they can obtain a copy of such Source Code
+     Form by reasonable means in a timely manner, at a charge no more
+     than the cost of distribution to the recipient; and
+
+ (b) You may distribute such Executable Form under the terms of this
+     License, or sublicense it under different terms, provided that the
+     license for the Executable Form does not attempt to limit or alter
+     the recipients' rights in the Source Code Form under this License.
+
+ 3.3. Distribution of a Larger Work
+
+ You may create and distribute a Larger Work under terms of Your choice,
+ provided that You also comply with the requirements of this License for
+ the Covered Software. If the Larger Work is a combination of Covered
+ Software with a work governed by one or more Secondary Licenses, and the
+ Covered Software is not Incompatible With Secondary Licenses, this
+ License permits You to additionally distribute such Covered Software
+ under the terms of such Secondary License(s), so that the recipient of
+ the Larger Work may, at their option, further distribute the Covered
+ Software under the terms of either this License or such Secondary
+ License(s).
+
+ 3.4. Notices
+
+ You may not remove or alter the substance of any license notices
+ (including copyright notices, patent notices, disclaimers of warranty,
+ or limitations of liability) contained within the Source Code Form of
+ the Covered Software, except that You may alter any license notices to
+ the extent required to remedy known factual inaccuracies.
+
+ 3.5. Application of Additional Terms
+
+ You may choose to offer, and to charge a fee for, warranty, support,
+ indemnity or liability obligations to one or more recipients of Covered
+ Software. However, You may do so only on Your own behalf, and not on
+ behalf of any Contributor. You must make it absolutely clear that any
+ such warranty, support, indemnity, or liability obligation is offered by
+ You alone, and You hereby agree to indemnify every Contributor for any
+ liability incurred by such Contributor as a result of warranty, support,
+ indemnity or liability terms You offer. You may include additional
+ disclaimers of warranty and limitations of liability specific to any
+ jurisdiction.
+
+ 4. Inability to Comply Due to Statute or Regulation
+ ---------------------------------------------------
+
+ If it is impossible for You to comply with any of the terms of this
+ License with respect to some or all of the Covered Software due to
+ statute, judicial order, or regulation then You must: (a) comply with
+ the terms of this License to the maximum extent possible; and (b)
+ describe the limitations and the code they affect. Such description must
+ be placed in a text file included with all distributions of the Covered
+ Software under this License. Except to the extent prohibited by statute
+ or regulation, such description must be sufficiently detailed for a
+ recipient of ordinary skill to be able to understand it.
+
+ 5. Termination
+ --------------
+
+ 5.1. The rights granted under this License will terminate automatically
+ if You fail to comply with any of its terms. However, if You become
+ compliant, then the rights granted under this License from a particular
+ Contributor are reinstated (a) provisionally, unless and until such
+ Contributor explicitly and finally terminates Your grants, and (b) on an
+ ongoing basis, if such Contributor fails to notify You of the
+ non-compliance by some reasonable means prior to 60 days after You have
+ come back into compliance. Moreover, Your grants from a particular
+ Contributor are reinstated on an ongoing basis if such Contributor
+ notifies You of the non-compliance by some reasonable means, this is the
+ first time You have received notice of non-compliance with this License
+ from such Contributor, and You become compliant prior to 30 days after
+ Your receipt of the notice.
+
+ 5.2. If You initiate litigation against any entity by asserting a patent
+ infringement claim (excluding declaratory judgment actions,
+ counter-claims, and cross-claims) alleging that a Contributor Version
+ directly or indirectly infringes any patent, then the rights granted to
+ You by any and all Contributors for the Covered Software under Section
+ 2.1 of this License shall terminate.
+
+ 5.3. In the event of termination under Sections 5.1 or 5.2 above, all
+ end user license agreements (excluding distributors and resellers) which
+ have been validly granted by You or Your distributors under this License
+ prior to termination shall survive termination.
+
+ ************************************************************************
+ *                                                                      *
+ *  6. Disclaimer of Warranty                                           *
+ *  -------------------------                                           *
+ *                                                                      *
+ *  Covered Software is provided under this License on an "as is"       *
+ *  basis, without warranty of any kind, either expressed, implied, or  *
+ *  statutory, including, without limitation, warranties that the       *
+ *  Covered Software is free of defects, merchantable, fit for a        *
+ *  particular purpose or non-infringing. The entire risk as to the     *
+ *  quality and performance of the Covered Software is with You.        *
+ *  Should any Covered Software prove defective in any respect, You     *
+ *  (not any Contributor) assume the cost of any necessary servicing,   *
+ *  repair, or correction. This disclaimer of warranty constitutes an   *
+ *  essential part of this License. No use of any Covered Software is   *
+ *  authorized under this License except under this disclaimer.         *
+ *                                                                      *
+ ************************************************************************
+
+ ************************************************************************
+ *                                                                      *
+ *  7. Limitation of Liability                                          *
+ *  --------------------------                                          *
+ *                                                                      *
+ *  Under no circumstances and under no legal theory, whether tort      *
+ *  (including negligence), contract, or otherwise, shall any           *
+ *  Contributor, or anyone who distributes Covered Software as          *
+ *  permitted above, be liable to You for any direct, indirect,         *
+ *  special, incidental, or consequential damages of any character      *
+ *  including, without limitation, damages for lost profits, loss of    *
+ *  goodwill, work stoppage, computer failure or malfunction, or any    *
+ *  and all other commercial damages or losses, even if such party      *
+ *  shall have been informed of the possibility of such damages. This   *
+ *  limitation of liability shall not apply to liability for death or   *
+ *  personal injury resulting from such party's negligence to the       *
+ *  extent applicable law prohibits such limitation. Some               *
+ *  jurisdictions do not allow the exclusion or limitation of           *
+ *  incidental or consequential damages, so this exclusion and          *
+ *  limitation may not apply to You.                                    *
+ *                                                                      *
+ ************************************************************************
+
+ 8. Litigation
+ -------------
+
+ Any litigation relating to this License may be brought only in the
+ courts of a jurisdiction where the defendant maintains its principal
+ place of business and such litigation shall be governed by laws of that
+ jurisdiction, without reference to its conflict-of-law provisions.
+ Nothing in this Section shall prevent a party's ability to bring
+ cross-claims or counter-claims.
+
+ 9. Miscellaneous
+ ----------------
+
+ This License represents the complete agreement concerning the subject
+ matter hereof. If any provision of this License is held to be
+ unenforceable, such provision shall be reformed only to the extent
+ necessary to make it enforceable. Any law or regulation which provides
+ that the language of a contract shall be construed against the drafter
+ shall not be used to construe this License against a Contributor.
+
+ 10. Versions of the License
+ ---------------------------
+
+ 10.1. New Versions
+
+ Mozilla Foundation is the license steward. Except as provided in Section
+ 10.3, no one other than the license steward has the right to modify or
+ publish new versions of this License. Each version will be given a
+ distinguishing version number.
+
+ 10.2. Effect of New Versions
+
+ You may distribute the Covered Software under the terms of the version
+ of the License under which You originally received the Covered Software,
+ or under the terms of any subsequent version published by the license
+ steward.
+
+ 10.3. Modified Versions
+
+ If you create software not governed by this License, and you want to
+ create a new license for such software, you may create and use a
+ modified version of this License if you rename the license and remove
+ any references to the name of the license steward (except to note that
+ such modified license differs from this License).
+
+ 10.4. Distributing Source Code Form that is Incompatible With Secondary
+ Licenses
+
+ If You choose to distribute Source Code Form that is Incompatible With
+ Secondary Licenses under the terms of this version of the License, the
+ notice described in Exhibit B of this License must be attached.
+
+ Exhibit A - Source Code Form License Notice
+ -------------------------------------------
+
+ This Source Code Form is subject to the terms of the Mozilla Public
+ License, v. 2.0. If a copy of the MPL was not distributed with this
+ file, You can obtain one at https://mozilla.org/MPL/2.0/.
+
+ If it is not possible or desirable to put the notice in a particular
+ file, then You may include the notice in a location (such as a LICENSE
+ file in a relevant directory) where a recipient would be likely to look
+ for such a notice.
+
+ You may add additional accurate notices of copyright ownership.
+
+ Exhibit B - "Incompatible With Secondary Licenses" Notice
+ ---------------------------------------------------------
+
+ This Source Code Form is "Incompatible With Secondary Licenses", as
+ defined by the Mozilla Public License, v. 2.0.
Solution/README.md ADDED
@@ -0,0 +1,9 @@
+ # Environment Setup
+ * Create a file in the repository folder (right next to LICENSE) and name it ".env".
+ * Add to that file the line ```GEMINI_API_KEY=<your api key>```.
+ * Note that you can get your API key from Google AI Studio.
+ * Install uv using the [official guide](https://docs.astral.sh/uv/getting-started/installation/#standalone-installer) or via ```pip install uv```.
+ * Navigate your terminal to the repo folder and run the command ```uv sync```.
+ * You're all set!
+
+ > Beware: running this agent can eat up your API credits, especially since it is not currently limited in the number of calls or steps.
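For context, the agent code is expected to pick the key up from this `.env` file at startup, typically via python-dotenv's `load_dotenv()`. As an illustration only (the helper name below is hypothetical, not part of this repo), a minimal loader looks like:

```python
import os


def load_env_file(path: str = ".env") -> dict:
    """Read simple KEY=value lines from a .env file into os.environ.

    Hypothetical helper for illustration; real projects usually call
    python-dotenv's load_dotenv() instead.
    """
    loaded = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blank lines and comments; split on the first '=' only.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```

After loading, client code can read the key with `os.environ["GEMINI_API_KEY"]`.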
Solution/agent.json ADDED
@@ -0,0 +1,53 @@
+ {
+ "tools": [
+ "web_search",
+ "visit_webpage",
+ "final_answer"
+ ],
+ "model": {
+ "class": "HfApiModel",
+ "data": {
+ "max_tokens": 2096,
+ "temperature": 0.5,
+ "last_input_token_count": null,
+ "last_output_token_count": null,
+ "model_id": "Qwen/Qwen2.5-Coder-32B-Instruct",
+ "custom_role_conversions": null
+ }
+ },
+ "prompt_templates": {
+ "system_prompt": "You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.\nTo do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.\nThen in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\nTask: \"Generate an image of the oldest person in this document.\"\n\nThought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.\nCode:\n```py\nanswer = document_qa(document=document, question=\"Who is the oldest person mentioned?\")\nprint(answer)\n```<end_code>\nObservation: \"The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland.\"\n\nThought: I will now generate an image showcasing the oldest person.\nCode:\n```py\nimage = image_generator(\"A portrait of John Doe, a 55-year-old man living in Canada.\")\nfinal_answer(image)\n```<end_code>\n\n---\nTask: \"What is the result of the following operation: 5 + 3 + 1294.678?\"\n\nThought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool\nCode:\n```py\nresult = 5 + 3 + 1294.678\nfinal_answer(result)\n```<end_code>\n\n---\nTask:\n\"Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French.\nYou have been provided with these additional arguments, that you can access using the keys as variables in your python code:\n{'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}\"\n\nThought: I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.\nCode:\n```py\ntranslated_question = translator(question=question, src_lang=\"French\", tgt_lang=\"English\")\nprint(f\"The translated question is {translated_question}.\")\nanswer = image_qa(image=image, question=translated_question)\nfinal_answer(f\"The answer is {answer}\")\n```<end_code>\n\n---\nTask:\nIn a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.\nWhat does he say was the consequence of Einstein learning too much math on his creativity, in one word?\n\nThought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.\nCode:\n```py\npages = search(query=\"1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein\")\nprint(pages)\n```<end_code>\nObservation:\nNo result found for query \"1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein\".\n\nThought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.\nCode:\n```py\npages = search(query=\"1979 interview Stanislaus Ulam\")\nprint(pages)\n```<end_code>\nObservation:\nFound 6 pages:\n[Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)\n\n[Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)\n\n(truncated)\n\nThought: I will read the first 2 pages to know more.\nCode:\n```py\nfor url in [\"https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/\", \"https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/\"]:\n whole_page = visit_webpage(url)\n print(whole_page)\n print(\"\\n\" + \"=\"*80 + \"\\n\") # Print separator between pages\n```<end_code>\nObservation:\nManhattan Project Locations:\nLos Alamos, NM\nStanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at\n(truncated)\n\nThought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: \"He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity.\" Let's answer in one word.\nCode:\n```py\nfinal_answer(\"diminished\")\n```<end_code>\n\n---\nTask: \"Which city has the highest population: Guangzhou or Shanghai?\"\n\nThought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.\nCode:\n```py\nfor city in [\"Guangzhou\", \"Shanghai\"]:\n print(f\"Population {city}:\", search(f\"{city} population\")\n```<end_code>\nObservation:\nPopulation Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']\nPopulation Shanghai: '26 million (2019)'\n\nThought: Now I know that Shanghai has the highest population.\nCode:\n```py\nfinal_answer(\"Shanghai\")\n```<end_code>\n\n---\nTask: \"What is the current age of the pope, raised to the power 0.36?\"\n\nThought: I will use the tool `wiki` to get the age of the pope, and confirm that with a web search.\nCode:\n```py\npope_age_wiki = wiki(query=\"current pope age\")\nprint(\"Pope age as per wikipedia:\", pope_age_wiki)\npope_age_search = web_search(query=\"current pope age\")\nprint(\"Pope age as per google search:\", pope_age_search)\n```<end_code>\nObservation:\nPope age: \"The pope Francis is currently 88 years old.\"\n\nThought: I know that the pope is 88 years old. Let's compute the result using python code.\nCode:\n```py\npope_current_age = 88 ** 0.36\nfinal_answer(pope_current_age)\n```<end_code>\n\nAbove example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:\n{%- for tool in tools.values() %}\n- {{ tool.name }}: {{ tool.description }}\n Takes inputs: {{tool.inputs}}\n Returns an output of type: {{tool.output_type}}\n{%- endfor %}\n\n{%- if managed_agents and managed_agents.values() | list %}\nYou can also give tasks to team members.\nCalling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'task', a long string explaining your task.\nGiven that this team member is a real human, you should be very verbose in your task.\nHere is a list of the team members that you can call:\n{%- for agent in managed_agents.values() %}\n- {{ agent.name }}: {{ agent.description }}\n{%- endfor %}\n{%- else %}\n{%- endif %}\n\nHere are the rules you should always follow to solve your task:\n1. Always provide a 'Thought:' sequence, and a 'Code:\\n```py' sequence ending with '```<end_code>' sequence, else you will fail.\n2. Use only variables that you have defined!\n3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = wiki(query=\"What is the place where James Bond lives?\")'.\n4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.\n5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.\n6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.\n7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.\n8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}\n9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.\n10. Don't give up! You're in charge of solving the task, not providing directions to solve it.\n\nNow Begin! If you solve the task correctly, you will receive a reward of $1,000,000.",
+ "planning": {
+ "initial_facts": "Below I will present you a task.\n\nYou will now build a comprehensive preparatory survey of which facts we have at our disposal and which ones we still need.\nTo do so, you will have to read the task and identify things that must be discovered in order to successfully complete it.\nDon't make any assumptions. For each item, provide a thorough reasoning. Here is how you will structure this survey:\n\n---\n### 1. Facts given in the task\nList here the specific facts given in the task that could help you (there might be nothing here).\n\n### 2. Facts to look up\nList here any facts that we may need to look up.\nAlso list where to find each of these, for instance a website, a file... - maybe the task contains some sources that you should re-use here.\n\n### 3. Facts to derive\nList here anything that we want to derive from the above by logical reasoning, for instance computation or simulation.\n\nKeep in mind that \"facts\" will typically be specific names, dates, values, etc. Your answer should use the below headings:\n### 1. Facts given in the task\n### 2. Facts to look up\n### 3. Facts to derive\nDo not add anything else.",
+ "initial_plan": "You are a world expert at making efficient plans to solve any task using a set of carefully crafted tools.\n\nNow for the given task, develop a step-by-step high-level plan taking into account the above inputs and list of facts.\nThis plan should involve individual tasks based on the available tools, that if executed correctly will yield the correct answer.\nDo not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.\nAfter writing the final step of the plan, write the '\\n<end_plan>' tag and stop there.\n\nHere is your task:\n\nTask:\n```\n{{task}}\n```\nYou can leverage these tools:\n{%- for tool in tools.values() %}\n- {{ tool.name }}: {{ tool.description }}\n Takes inputs: {{tool.inputs}}\n Returns an output of type: {{tool.output_type}}\n{%- endfor %}\n\n{%- if managed_agents and managed_agents.values() | list %}\nYou can also give tasks to team members.\nCalling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'request', a long string explaining your request.\nGiven that this team member is a real human, you should be very verbose in your request.\nHere is a list of the team members that you can call:\n{%- for agent in managed_agents.values() %}\n- {{ agent.name }}: {{ agent.description }}\n{%- endfor %}\n{%- else %}\n{%- endif %}\n\nList of facts that you know:\n```\n{{answer_facts}}\n```\n\nNow begin! Write your plan below.",
+ "update_facts_pre_messages": "You are a world expert at gathering known and unknown facts based on a conversation.\nBelow you will find a task, and a history of attempts made to solve the task. You will have to produce a list of these:\n### 1. Facts given in the task\n### 2. Facts that we have learned\n### 3. Facts still to look up\n### 4. Facts still to derive\nFind the task and history below:",
+ "update_facts_post_messages": "Earlier we've built a list of facts.\nBut since in your previous steps you may have learned useful new facts or invalidated some false ones.\nPlease update your list of facts based on the previous history, and provide these headings:\n### 1. Facts given in the task\n### 2. Facts that we have learned\n### 3. Facts still to look up\n### 4. Facts still to derive\n\nNow write your new list of facts below.",
25
+ "update_plan_pre_messages": "You are a world expert at making efficient plans to solve any task using a set of carefully crafted tools.\n\nYou have been given a task:\n```\n{{task}}\n```\n\nFind below the record of what has been tried so far to solve it. Then you will be asked to make an updated plan to solve the task.\nIf the previous tries so far have met some success, you can make an updated plan based on these actions.\nIf you are stalled, you can make a completely new plan starting from scratch.",
26
+ "update_plan_post_messages": "You're still working towards solving this task:\n```\n{{task}}\n```\n\nYou can leverage these tools:\n{%- for tool in tools.values() %}\n- {{ tool.name }}: {{ tool.description }}\n  Takes inputs: {{tool.inputs}}\n  Returns an output of type: {{tool.output_type}}\n{%- endfor %}\n\n{%- if managed_agents and managed_agents.values() | list %}\nYou can also give tasks to team members.\nCalling a team member works the same as calling a tool: the only argument you can give in the call is 'task'.\nGiven that this team member is a real human, you should be very verbose in your task; it should be a long string providing information as detailed as necessary.\nHere is a list of the team members that you can call:\n{%- for agent in managed_agents.values() %}\n- {{ agent.name }}: {{ agent.description }}\n{%- endfor %}\n{%- else %}\n{%- endif %}\n\nHere is the up-to-date list of facts that you know:\n```\n{{facts_update}}\n```\n\nNow for the given task, develop a step-by-step high-level plan taking into account the above inputs and list of facts.\nThis plan should involve individual tasks based on the available tools that, if executed correctly, will yield the correct answer.\nBeware that you have {{remaining_steps}} steps remaining.\nDo not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.\nAfter writing the final step of the plan, write the '\\n<end_plan>' tag and stop there.\n\nNow write your new plan below."
27
+ },
28
+ "managed_agent": {
29
+ "task": "You're a helpful agent named '{{name}}'.\nYou have been submitted this task by your manager.\n---\nTask:\n{{task}}\n---\nYou're helping your manager solve a wider task: so make sure to not provide a one-line answer, but give as much information as possible to give them a clear understanding of the answer.\n\nYour final_answer WILL HAVE to contain these parts:\n### 1. Task outcome (short version):\n### 2. Task outcome (extremely detailed version):\n### 3. Additional context (if relevant):\n\nPut all these in your final_answer tool, everything that you do not pass as an argument to final_answer will be lost.\nAnd even if your task resolution is not successful, please return as much context as possible, so that your manager can act upon this feedback.",
30
+ "report": "Here is the final answer from your managed agent '{{name}}':\n{{final_answer}}"
31
+ }
32
+ },
33
+ "max_steps": 6,
34
+ "verbosity_level": 1,
35
+ "grammar": null,
36
+ "planning_interval": null,
37
+ "name": null,
38
+ "description": null,
39
+ "authorized_imports": [
40
+ "unicodedata",
41
+ "stat",
42
+ "datetime",
43
+ "random",
44
+ "pandas",
45
+ "itertools",
46
+ "math",
47
+ "statistics",
48
+ "queue",
49
+ "time",
50
+ "collections",
51
+ "re"
52
+ ]
53
+ }
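The agent config above caps execution at `max_steps: 6` and restricts code execution to an `authorized_imports` allow-list. A minimal stdlib-only sketch of how such an allow-list check can work; the `is_import_authorized` helper is hypothetical illustration, not part of this repo:

```python
import json

# Trimmed excerpt of the agent config shown above.
config_text = """
{
  "max_steps": 6,
  "verbosity_level": 1,
  "grammar": null,
  "planning_interval": null,
  "authorized_imports": ["unicodedata", "stat", "datetime", "random",
                         "pandas", "itertools", "math", "statistics",
                         "queue", "time", "collections", "re"]
}
"""
config = json.loads(config_text)

def is_import_authorized(module_name: str, config: dict) -> bool:
    """Check a module (including submodules) against the allow-list."""
    return module_name.split(".")[0] in config.get("authorized_imports", [])

print(is_import_authorized("math", config))        # True
print(is_import_authorized("subprocess", config))  # False
```

Matching on the top-level package means `collections.abc` passes whenever `collections` is authorized, which mirrors how import allow-lists are usually enforced.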
Solution/basic_workflow.html ADDED
@@ -0,0 +1,155 @@
1
+ <html>
2
+ <head>
3
+ <meta charset="utf-8">
4
+
5
+ <script src="lib/bindings/utils.js"></script>
6
+ <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/dist/vis-network.min.css" integrity="sha512-WgxfT5LWjfszlPHXRmBWHkV2eceiWTOBvrKCNbdgDYTHrT2AeLCGbF4sZlZw3UMN3WtL0tGUoIAKsu8mllg/XA==" crossorigin="anonymous" referrerpolicy="no-referrer" />
7
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/vis-network.min.js" integrity="sha512-LnvoEWDFrqGHlHmDD2101OrLcbsfkrzoSpvtSQtxK3RMnRV0eOkhhBN2dXHKRrUU8p2DGRTk35n4O8nWSVe1mQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
8
+
9
+
10
+ <center>
11
+ <h1></h1>
12
+ </center>
13
+
14
+ <!-- <link rel="stylesheet" href="../node_modules/vis/dist/vis.min.css" type="text/css" />
15
+ <script type="text/javascript" src="../node_modules/vis/dist/vis.js"> </script>-->
16
+ <link
17
+ href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/css/bootstrap.min.css"
18
+ rel="stylesheet"
19
+ integrity="sha384-eOJMYsd53ii+scO/bJGFsiCZc+5NDVN2yr8+0RDqr0Ql0h+rP48ckxlpbzKgwra6"
20
+ crossorigin="anonymous"
21
+ />
22
+ <script
23
+ src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.0-beta3/dist/js/bootstrap.bundle.min.js"
24
+ integrity="sha384-JEW9xMcG8R+pH31jmWH6WWP0WintQrMb4s7ZOdauHnUtxwoG2vI5DkLtS3qm9Ekf"
25
+ crossorigin="anonymous"
26
+ ></script>
27
+
28
+
29
+ <center>
30
+ <h1></h1>
31
+ </center>
32
+ <style type="text/css">
33
+
34
+ #mynetwork {
35
+ width: 100%;
36
+ height: 750px;
37
+ background-color: #ffffff;
38
+ border: 1px solid lightgray;
39
+ position: relative;
40
+ float: left;
41
+ }
42
+
43
+
44
+
45
+
46
+
47
+
48
+ </style>
49
+ </head>
50
+
51
+
52
+ <body>
53
+ <div class="card" style="width: 100%">
54
+
55
+
56
+ <div id="mynetwork" class="card-body"></div>
57
+ </div>
58
+
59
+
60
+
61
+
62
+ <script type="text/javascript">
63
+
64
+ // initialize global variables.
65
+ var edges;
66
+ var nodes;
67
+ var allNodes;
68
+ var allEdges;
69
+ var nodeColors;
70
+ var originalNodes;
71
+ var network;
72
+ var container;
73
+ var options, data;
74
+ var filter = {
75
+ item : '',
76
+ property : '',
77
+ value : []
78
+ };
79
+
80
+
81
+
82
+
83
+
84
+ // This method is responsible for drawing the graph, returns the drawn network
85
+ function drawGraph() {
86
+ var container = document.getElementById('mynetwork');
87
+
88
+
89
+
90
+ // parsing and collecting nodes and edges from the python
91
+ nodes = new vis.DataSet([{"color": "#ADD8E6", "id": "_done", "label": "_done", "shape": "box"}, {"color": "#FFA07A", "id": "StopEvent", "label": "StopEvent", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "aggregate_tool_results", "label": "aggregate_tool_results", "shape": "box"}, {"color": "#90EE90", "id": "ToolCallResult", "label": "ToolCallResult", "shape": "ellipse"}, {"color": "#90EE90", "id": "AgentInput", "label": "AgentInput", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "call_tool", "label": "call_tool", "shape": "box"}, {"color": "#90EE90", "id": "ToolCall", "label": "ToolCall", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "init_run", "label": "init_run", "shape": "box"}, {"color": "#E27AFF", "id": "AgentWorkflowStartEvent", "label": "AgentWorkflowStartEvent", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "parse_agent_output", "label": "parse_agent_output", "shape": "box"}, {"color": "#90EE90", "id": "AgentOutput", "label": "AgentOutput", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "run_agent_step", "label": "run_agent_step", "shape": "box"}, {"color": "#90EE90", "id": "AgentSetup", "label": "AgentSetup", "shape": "ellipse"}, {"color": "#ADD8E6", "id": "setup_agent", "label": "setup_agent", "shape": "box"}]);
92
+ edges = new vis.DataSet([{"arrows": "to", "from": "StopEvent", "to": "_done"}, {"arrows": "to", "from": "aggregate_tool_results", "to": "AgentInput"}, {"arrows": "to", "from": "aggregate_tool_results", "to": "StopEvent"}, {"arrows": "to", "from": "ToolCallResult", "to": "aggregate_tool_results"}, {"arrows": "to", "from": "call_tool", "to": "ToolCallResult"}, {"arrows": "to", "from": "ToolCall", "to": "call_tool"}, {"arrows": "to", "from": "init_run", "to": "AgentInput"}, {"arrows": "to", "from": "AgentWorkflowStartEvent", "to": "init_run"}, {"arrows": "to", "from": "parse_agent_output", "to": "StopEvent"}, {"arrows": "to", "from": "parse_agent_output", "to": "ToolCall"}, {"arrows": "to", "from": "AgentOutput", "to": "parse_agent_output"}, {"arrows": "to", "from": "run_agent_step", "to": "AgentOutput"}, {"arrows": "to", "from": "AgentSetup", "to": "run_agent_step"}, {"arrows": "to", "from": "setup_agent", "to": "AgentSetup"}, {"arrows": "to", "from": "AgentInput", "to": "setup_agent"}]);
93
+
94
+ nodeColors = {};
95
+ allNodes = nodes.get({ returnType: "Object" });
96
+ for (nodeId in allNodes) {
97
+ nodeColors[nodeId] = allNodes[nodeId].color;
98
+ }
99
+ allEdges = edges.get({ returnType: "Object" });
100
+ // adding nodes and edges to the graph
101
+ data = {nodes: nodes, edges: edges};
102
+
103
+ var options = {
104
+ "configure": {
105
+ "enabled": false
106
+ },
107
+ "edges": {
108
+ "color": {
109
+ "inherit": true
110
+ },
111
+ "smooth": {
112
+ "enabled": true,
113
+ "type": "dynamic"
114
+ }
115
+ },
116
+ "interaction": {
117
+ "dragNodes": true,
118
+ "hideEdgesOnDrag": false,
119
+ "hideNodesOnDrag": false
120
+ },
121
+ "physics": {
122
+ "enabled": true,
123
+ "stabilization": {
124
+ "enabled": true,
125
+ "fit": true,
126
+ "iterations": 1000,
127
+ "onlyDynamicEdges": false,
128
+ "updateInterval": 50
129
+ }
130
+ }
131
+ };
132
+
133
+
134
+
135
+
136
+
137
+
138
+ network = new vis.Network(container, data, options);
139
+
140
+
141
+
142
+
143
+
144
+
145
+
146
+
147
+
148
+
149
+ return network;
150
+
151
+ }
152
+ drawGraph();
153
+ </script>
154
+ </body>
155
+ </html>
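`basic_workflow.html` embeds its graph as two `vis.DataSet` payloads serialized from Python; the file is presumably produced by `draw_all_possible_flows`, which `Solution/main.py` imports. A stdlib sketch of building that payload shape for a subset of the graph above (step nodes as boxes, event nodes as ellipses; event colors simplified to one value):

```python
import json

# Subset of the workflow graph embedded in basic_workflow.html.
steps = ["setup_agent", "run_agent_step", "parse_agent_output"]
events = ["AgentInput", "AgentSetup", "AgentOutput", "StopEvent"]

nodes = (
    [{"id": s, "label": s, "shape": "box", "color": "#ADD8E6"} for s in steps]
    + [{"id": e, "label": e, "shape": "ellipse", "color": "#90EE90"} for e in events]
)
edges = [
    {"arrows": "to", "from": "AgentInput", "to": "setup_agent"},
    {"arrows": "to", "from": "setup_agent", "to": "AgentSetup"},
    {"arrows": "to", "from": "AgentSetup", "to": "run_agent_step"},
    {"arrows": "to", "from": "run_agent_step", "to": "AgentOutput"},
    {"arrows": "to", "from": "AgentOutput", "to": "parse_agent_output"},
    {"arrows": "to", "from": "parse_agent_output", "to": "StopEvent"},
]

# This JSON is what ends up inside the vis.DataSet(...) calls in the HTML.
payload = json.dumps({"nodes": nodes, "edges": edges})
print(len(nodes), len(edges))  # 7 6
```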
Solution/main.py ADDED
@@ -0,0 +1,300 @@
1
+ # libraries imports
2
+ import yaml
3
+ import asyncio
4
+ from datetime import datetime
5
+ from phoenix.otel import register
6
+ from llama_index.core.workflow import Context
7
+ from src.llms.gemini_2_flash import create_gemini
8
+ from llama_index.core.tools import QueryEngineTool
9
+ from llama_index.core.prompts import PromptTemplate
10
11
+ from llama_index.utils.workflow import draw_all_possible_flows
12
+ from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
13
+ from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
14
+ from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent, FunctionAgent
15
+ from llama_index.core.agent.workflow import ToolCallResult, AgentStream
16
+
17
+ # Custom imports
18
+ from src.tools.web_search import search_tool
19
+ from src.tools.visit_webpage import visit_webpage_tool
20
+ from src.tools.query_on_url import Get_info_from_url_tool
21
+ from dotenv import load_dotenv
22
+ import os
23
+
24
+ # Load environment variables
25
+ load_dotenv()
26
+
27
+
28
+ class CustomDebugHandler(LlamaDebugHandler):
29
+ """Custom debug handler for better traceability"""
30
+
31
+ def __init__(self):
32
+ super().__init__()
33
+ self.session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
34
+
35
+ def on_event_start(self, event_type, payload=None, event_id="", **kwargs):
36
+ timestamp = datetime.now().strftime("%H:%M:%S")
37
+ print(f"\nπŸš€ [{timestamp}] Event Started: {event_type}")
38
+ if payload:
39
+ print(f" πŸ“‹ Payload: {payload}")
40
+ super().on_event_start(event_type, payload, event_id, **kwargs)
41
+
42
+ def on_event_end(self, event_type, payload=None, event_id="", **kwargs):
43
+ timestamp = datetime.now().strftime("%H:%M:%S")
44
+ print(f"\nβœ… [{timestamp}] Event Completed: {event_type}")
45
+ super().on_event_end(event_type, payload, event_id, **kwargs)
46
+
47
+
48
+ def create_callback_manager():
49
+ """Create a callback manager with custom handlers"""
50
+ debug_handler = CustomDebugHandler()
51
+ callback_manager = CallbackManager([debug_handler])
52
+ return callback_manager
53
+
54
+
55
+ def get_agent_name_enhanced(ev, workflow):
56
+ """Enhanced agent name detection with multiple fallback strategies"""
57
+
58
+ # Strategy 1: Direct attribute check
59
+ for attr in ['agent_name', 'name', 'sender', 'agent']:
60
+ if hasattr(ev, attr):
61
+ value = getattr(ev, attr)
62
+ if value and isinstance(value, str):
63
+ return value
64
+
65
+ # Strategy 2: Check source object
66
+ if hasattr(ev, 'source'):
67
+ source = ev.source
68
+ for attr in ['agent_name', 'name', 'id']:
69
+ if hasattr(source, attr):
70
+ value = getattr(source, attr)
71
+ if value and isinstance(value, str):
72
+ return value
73
+
74
+ # Strategy 3: Check workflow context or metadata
75
+ if hasattr(ev, 'metadata') and ev.metadata:
76
+ if 'agent_name' in ev.metadata:
77
+ return ev.metadata['agent_name']
78
+
79
+ # Strategy 4: Try to infer from workflow state
80
+ if hasattr(workflow, '_current_agent') and workflow._current_agent:
81
+ return workflow._current_agent
82
+
83
+ # Strategy 5: Check event type patterns
84
+ event_type = type(ev).__name__
85
+ if 'Agent' in event_type:
86
+ return f"Agent_{event_type}"
87
+
88
+ return "UnknownAgent"
89
+
90
+
91
+ def format_output_message(agent_name, message_type, content, timestamp=None):
92
+ """Format output messages consistently"""
93
+ if timestamp is None:
94
+ timestamp = datetime.now().strftime("%H:%M:%S")
95
+
96
+ separator = "=" * 60
97
+ header = f"[{timestamp}] {agent_name} - {message_type}"
98
+
99
+ return f"\n{separator}\n{header}\n{separator}\n{content}\n{separator}\n"
100
+
101
+
102
+ async def main():
103
+ # Create callback manager
104
+ callback_manager = create_callback_manager()
105
+ # phoenix handler
106
+ # Register the tracer provider (connects to OpenTelemetry)
107
+ tracer_provider = register()
108
+
109
+ # Instrument LlamaIndex with OpenInference
110
+ LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
111
+
112
+ # Create LLM with callback manager
113
+ llm = create_gemini()
114
+ if hasattr(llm, 'callback_manager'):
115
+ llm.callback_manager = callback_manager
116
+
117
+ def load_config(file_path):
118
+ """Load configuration from a YAML file."""
119
+ try:
120
+ with open(file_path, 'r') as file:
121
+ return yaml.safe_load(file)
122
+ except Exception as e:
123
+ print(f"Error loading config file: {e}")
124
+ return {}
125
+
126
+ # Load configuration
127
+ config = load_config('src/agents/prompts.yaml')
128
+
129
+ # Create agents with callback manager
130
+ manager_agent = ReActAgent(
131
+ name=config["manager_agent"]["name"],
132
+ description=config["manager_agent"]["description"],
133
+ tools=[],
134
+ llm=llm,
135
+ callback_manager=callback_manager,
136
+ )
137
+
138
+
139
+ manager_agent.update_prompts({"react_header": PromptTemplate(config["manager_agent"]["system_prompt"])})
140
+ print(manager_agent.get_prompts())
141
+
142
+ product_hunter_agent = ReActAgent(
143
+ name=config["product_hunter_agent"]["name"],
144
+ description=config["product_hunter_agent"]["description"],
145
+ # tools=[search_tool, visit_webpage],
146
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
147
+ llm=llm,
148
+ callback_manager=callback_manager,
149
+ )
150
+ product_hunter_agent.update_prompts({"react_header": PromptTemplate(config["product_hunter_agent"]["system_prompt"])})
151
+
152
+ trivial_search_agent = ReActAgent(
153
+ name=config["trivial_search_agent"]["name"],
154
+ description=config["trivial_search_agent"]["description"],
155
+ # tools=[search_tool, visit_webpage],
156
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
157
+ llm=llm,
158
+ callback_manager=callback_manager,
159
+ )
160
+ trivial_search_agent.update_prompts({"react_header": PromptTemplate(config["trivial_search_agent"]["system_prompt"])})
161
+
162
+ # shopping_researcher_agent = ReActAgent(
163
+ # name=config["shopping_researcher_agent"]["name"],
164
+ # description=config["shopping_researcher_agent"]["description"],
165
+ # # tools=[search_tool, visit_webpage],
166
+ # tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
167
+ # llm=llm,
168
+ # callback_manager=callback_manager,
169
+ # )
170
+ # shopping_researcher_agent.update_prompts({"react_header": config["shopping_researcher_agent"]["system_prompt"]})
171
+
172
+ product_investigator_agent = ReActAgent(
173
+ name=config["product_investigator_agent"]["name"],
174
+ description=config["product_investigator_agent"]["description"],
175
+ # tools=[search_tool, visit_webpage],
176
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
177
+ llm=llm,
178
+ callback_manager=callback_manager,
179
+ )
180
+ product_investigator_agent.update_prompts({"react_header": PromptTemplate(config["product_investigator_agent"]["system_prompt"])})
181
+
182
+ # Create workflow with callback manager
183
+ workflow = AgentWorkflow(
184
+ agents=[manager_agent, product_hunter_agent,
185
+ product_investigator_agent, trivial_search_agent],
186
+ root_agent=manager_agent.name,
187
+ )
188
+
189
+ # To keep memory
190
+ ctx = Context(workflow)
191
+
192
+ # Test prompts (commented out)
193
+ # prompt = "I want to build a Gaming PC for around 50k EGP, I don't care about looks or RGB but I care about performance, I want the greatest performance for gaming on 1080p with these 1000 dollars, also the pc should have at least 16 GBs of ram and 1TB ssd , I live in Egypt. Can you please give me the pc parts links I should buy online to build that pc ?"
194
+ # prompt = "I am having a wedding next week, i want to buy a wedding dress for my wife, i want it to be white and elegant, i want it to be around 10 EGP, can you please give me the links of the dresses that fit this description ?"
195
+ # prompt = "I want to buy a wedding suit with tie and everything with a maximum budget of 30k EGP in Cairo."
196
+
197
+ prompt = "I want to buy a wired headset for gaming for 1k EGP or less."
198
+
199
+ print(f"\n🎯 Starting Shopping Assistant Workflow")
200
+ print(f"πŸ“ Query: {prompt}")
201
+ print(f"⏰ Session Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
202
+
203
+ handler = workflow.run(
204
+ user_msg=prompt,
205
+ ctx=ctx
206
+ )
207
+
208
+ # Create output file with timestamp
209
+ output_filename = f"agent_output_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
210
+
211
+ # Agent name mapping for better identification
212
+ agent_mapping = {
213
+ agent.name: agent.name for agent in [
214
+ manager_agent, product_hunter_agent, trivial_search_agent,
215
+ product_investigator_agent,
216
+ ]
217
+ }
218
+
219
+ print(f"\nπŸ“„ Logging to: {output_filename}")
220
+
221
+ with open(output_filename, "w", encoding="utf-8") as f:
222
+ # Write session header
223
+ session_header = f"""
224
+ Shopping Assistant Session Log
225
+ ==============================
226
+ Session ID: {datetime.now().strftime('%Y%m%d_%H%M%S')}
227
+ Start Time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
228
+ Query: {prompt}
229
+ Active Agents: {', '.join(agent_mapping.keys())}
230
+ ==============================
231
+
232
+ """
233
+ f.write(session_header)
234
+ print(session_header)
235
+
236
+ current_agent = "Unknown"
237
+
238
+ async for ev in handler.stream_events():
239
+ timestamp = datetime.now().strftime("%H:%M:%S")
240
+ agent_name = get_agent_name_enhanced(ev, workflow)
241
+
242
+ # Update current agent tracking
243
+ if agent_name != "UnknownAgent":
244
+ current_agent = agent_name
245
+ else:
246
+ agent_name = current_agent
247
+
248
+ if isinstance(ev, ToolCallResult):
249
+ tool_message = format_output_message(
250
+ agent_name,
251
+ "TOOL EXECUTION",
252
+ f"πŸ›  Tool: {ev.tool_name}\n"
253
+ f"πŸ“₯ Input: {ev.tool_kwargs}\n"
254
+ f"πŸ“€ Output: {str(ev.tool_output)[:500]}{'...' if len(str(ev.tool_output)) > 500 else ''}",
255
+ timestamp
256
+ )
257
+ print(tool_message)
258
+ f.write(tool_message)
259
+
260
+ elif isinstance(ev, AgentStream):
261
+ delta = getattr(ev, "delta", "")
262
+ if delta.strip(): # Only log non-empty deltas
263
+ stream_message = f"[{timestamp}] {agent_name} πŸ’­: {delta}"
264
+ print(stream_message, end="", flush=True)
265
+ f.write(stream_message)
266
+
267
+ # Write session footer
268
+ session_footer = f"""
269
+
270
+ ==============================
271
+ Session Completed: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
272
+ ==============================
273
+ """
274
+ f.write(session_footer)
275
+ print(session_footer)
276
+
277
+ print("\nπŸŽ‰ Workflow execution completed!")
278
+
279
+ # Get final response
280
+ try:
281
+ resp = await handler
282
+ final_response = format_output_message(
283
+ "FINAL_RESPONSE",
284
+ "WORKFLOW RESULT",
285
+ str(resp)
286
+ )
287
+ print(final_response)
288
+
289
+ # Append final response to file
290
+ with open(output_filename, "a", encoding="utf-8") as f:
291
+ f.write(final_response)
292
+
293
+ except Exception as e:
294
+ error_message = f"❌ Error getting final response: {str(e)}"
295
+ print(error_message)
296
+ with open(output_filename, "a", encoding="utf-8") as f:
297
+ f.write(f"\n{error_message}\n")
298
+
299
+ if __name__ == "__main__":
300
+ asyncio.run(main())
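The streaming loop in `main.py` truncates each tool output at 500 characters and frames every message with `format_output_message`. Restated as a self-contained sketch (`truncate_output` is a hypothetical helper name; the loop above inlines the same expression):

```python
from datetime import datetime

def truncate_output(output, limit=500):
    """Mirror the tool-output truncation used in the event loop."""
    text = str(output)
    return text[:limit] + ("..." if len(text) > limit else "")

def format_output_message(agent_name, message_type, content, timestamp=None):
    """Same framing helper as in Solution/main.py."""
    if timestamp is None:
        timestamp = datetime.now().strftime("%H:%M:%S")
    separator = "=" * 60
    header = f"[{timestamp}] {agent_name} - {message_type}"
    return f"\n{separator}\n{header}\n{separator}\n{content}\n{separator}\n"

msg = format_output_message("product_hunter", "TOOL EXECUTION",
                            truncate_output("x" * 600), timestamp="12:00:00")
print(msg.count("=" * 60))  # 3
```

Truncating before logging keeps the session file readable when a tool returns a full webpage body.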
Solution/main2.py ADDED
@@ -0,0 +1,368 @@
1
+ # libraries imports
2
+ import yaml
3
+ import asyncio
4
+ from datetime import datetime
5
+ from phoenix.otel import register
6
+ from llama_index.core.workflow import Context
7
+ from src.llms.gemini_2_flash import create_gemini
8
+ from llama_index.core.tools import QueryEngineTool
9
+ from llama_index.core.prompts import PromptTemplate
10
11
+ from llama_index.utils.workflow import draw_all_possible_flows
12
+ from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
13
+ from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
14
+ from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent, FunctionAgent
15
+ from llama_index.core.agent.workflow import ToolCallResult, AgentStream
16
+
17
+ # Custom imports
18
+ from src.tools.web_search import search_tool
19
+ from src.tools.visit_webpage import visit_webpage_tool
20
+ from src.tools.query_on_url import Get_info_from_url_tool
21
+ from dotenv import load_dotenv
22
+ import os
23
+
24
+ # Load environment variables
25
+ load_dotenv()
26
+
27
+
28
+ class CustomDebugHandler(LlamaDebugHandler):
29
+ """Custom debug handler for better traceability"""
30
+
31
+ def __init__(self):
32
+ super().__init__()
33
+ self.session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
34
+
35
+ def on_event_start(self, event_type, payload=None, event_id="", **kwargs):
36
+ timestamp = datetime.now().strftime("%H:%M:%S")
37
+ print(f"\nπŸš€ [{timestamp}] Event Started: {event_type}")
38
+ if payload:
39
+ print(f" πŸ“‹ Payload: {payload}")
40
+ super().on_event_start(event_type, payload, event_id, **kwargs)
41
+
42
+ def on_event_end(self, event_type, payload=None, event_id="", **kwargs):
43
+ timestamp = datetime.now().strftime("%H:%M:%S")
44
+ print(f"\nβœ… [{timestamp}] Event Completed: {event_type}")
45
+ super().on_event_end(event_type, payload, event_id, **kwargs)
46
+
47
+
48
+ def create_callback_manager():
49
+ """Create a callback manager with custom handlers"""
50
+ debug_handler = CustomDebugHandler()
51
+ callback_manager = CallbackManager([debug_handler])
52
+ return callback_manager
53
+
54
+
55
+ def get_agent_name_enhanced(ev, workflow):
56
+ """Enhanced agent name detection with multiple fallback strategies"""
57
+
58
+ # Strategy 1: Direct attribute check
59
+ for attr in ['agent_name', 'name', 'sender', 'agent']:
60
+ if hasattr(ev, attr):
61
+ value = getattr(ev, attr)
62
+ if value and isinstance(value, str):
63
+ return value
64
+
65
+ # Strategy 2: Check source object
66
+ if hasattr(ev, 'source'):
67
+ source = ev.source
68
+ for attr in ['agent_name', 'name', 'id']:
69
+ if hasattr(source, attr):
70
+ value = getattr(source, attr)
71
+ if value and isinstance(value, str):
72
+ return value
73
+
74
+ # Strategy 3: Check workflow context or metadata
75
+ if hasattr(ev, 'metadata') and ev.metadata:
76
+ if 'agent_name' in ev.metadata:
77
+ return ev.metadata['agent_name']
78
+
79
+ # Strategy 4: Try to infer from workflow state
80
+ if hasattr(workflow, '_current_agent') and workflow._current_agent:
81
+ return workflow._current_agent
82
+
83
+ # Strategy 5: Check event type patterns
84
+ event_type = type(ev).__name__
85
+ if 'Agent' in event_type:
86
+ return f"Agent_{event_type}"
87
+
88
+ return "UnknownAgent"
89
+
90
+
91
+ def format_output_message(agent_name, message_type, content, timestamp=None):
92
+ """Format output messages consistently"""
93
+ if timestamp is None:
94
+ timestamp = datetime.now().strftime("%H:%M:%S")
95
+
96
+ separator = "=" * 60
97
+ header = f"[{timestamp}] {agent_name} - {message_type}"
98
+
99
+ return f"\n{separator}\n{header}\n{separator}\n{content}\n{separator}\n"
100
+
101
+
102
+ def create_custom_react_prompt(agent_config):
103
+ """Create a properly formatted ReAct prompt that maintains the required structure"""
104
+
105
+ # Extract the core mission and instructions from your config
106
+ core_instructions = agent_config["system_prompt"]
107
+
108
+ # Create a ReAct-compatible prompt that integrates your instructions
109
+ react_prompt = f"""You are a helpful AI assistant that can use tools to answer questions.
110
+
111
+ {core_instructions}
112
+
113
+ When responding, you must follow this exact format:
114
+
115
+ Thought: I need to think about what the user is asking and determine if I need to use any tools.
116
+ Action: [tool_name if using a tool, or skip this line if not using a tool]
117
+ Action Input: [tool input in JSON format if using a tool, or skip this line if not using a tool]
118
+ Observation: [This will be filled in by the tool response]
119
+
120
+ Continue this Thought/Action/Action Input/Observation cycle until you have enough information to provide a final answer.
121
+
122
+ When you have enough information, provide your final response in this format:
123
+
124
+ Thought: I now have enough information to provide a complete answer.
125
+ Answer: [Your final answer here]
126
+
127
+ Remember to always start with a Thought and follow the exact format above."""
128
+
129
+ return react_prompt
130
+
131
+
132
+ async def main():
133
+ # Create callback manager
134
+ callback_manager = create_callback_manager()
135
+
136
+ # phoenix handler
137
+ # Register the tracer provider (connects to OpenTelemetry)
138
+ tracer_provider = register()
139
+
140
+ # Instrument LlamaIndex with OpenInference
141
+ LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
142
+
143
+ # Create LLM with callback manager
144
+ llm = create_gemini()
145
+ if hasattr(llm, 'callback_manager'):
146
+ llm.callback_manager = callback_manager
147
+
148
+ def load_config(file_path):
149
+ """Load configuration from a YAML file."""
150
+ try:
151
+ with open(file_path, 'r') as file:
152
+ return yaml.safe_load(file)
153
+ except Exception as e:
154
+ print(f"Error loading config file: {e}")
155
+ return {}
156
+
157
+ # Load configuration
158
+ config = load_config('src/agents/prompts.yaml')
159
+
160
+ # Create agents with proper prompt integration
161
+ try:
162
+ # Manager Agent
163
+ manager_agent = ReActAgent(
164
+ name=config["manager_agent"]["name"],
165
+ description=config["manager_agent"]["description"],
166
+ tools=[],
167
+ llm=llm,
168
+ callback_manager=callback_manager,
169
+ verbose=True, # Enable verbose mode for debugging
170
+ can_handoff_to=["product_hunter_agent", "product_investigator_agent", "trivial_search_agent"],
171
+ )
172
+
173
+ # Create custom prompt template for manager
174
+ manager_prompt = create_custom_react_prompt(config["manager_agent"])
175
+ manager_agent.update_prompts({
176
+ "react_header": PromptTemplate(manager_prompt)
177
+ })
178
+
179
+ # Product Hunter Agent
180
+ product_hunter_agent = ReActAgent(
181
+ name=config["product_hunter_agent"]["name"],
182
+ description=config["product_hunter_agent"]["description"],
183
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
184
+ llm=llm,
185
+ callback_manager=callback_manager,
186
+ verbose=True,
187
+ can_handoff_to=["manager_agent"],
188
+ # can_be_handed_off_by=["manager_agent"]
189
+ )
190
+
191
+ hunter_prompt = create_custom_react_prompt(config["product_hunter_agent"])
192
+ product_hunter_agent.update_prompts({
193
+ "react_header": PromptTemplate(hunter_prompt)
194
+ })
195
+
196
+ # Trivial Search Agent
197
+ trivial_search_agent = ReActAgent(
198
+ name=config["trivial_search_agent"]["name"],
199
+ description=config["trivial_search_agent"]["description"],
200
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
201
+ llm=llm,
202
+ callback_manager=callback_manager,
203
+ verbose=True,
204
+ can_handoff_to=["manager_agent"],
205
+
206
+ )
207
+
208
+ trivial_prompt = create_custom_react_prompt(config["trivial_search_agent"])
209
+ trivial_search_agent.update_prompts({
210
+ "react_header": PromptTemplate(trivial_prompt)
211
+ })
212
+
213
+ # Product Investigator Agent
214
+ product_investigator_agent = ReActAgent(
215
+ name=config["product_investigator_agent"]["name"],
216
+ description=config["product_investigator_agent"]["description"],
217
+ tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
218
+ llm=llm,
219
+ callback_manager=callback_manager,
220
+ verbose=True,
221
+ can_handoff_to=["manager_agent"],
222
+ )
223
+
224
+ investigator_prompt = create_custom_react_prompt(config["product_investigator_agent"])
225
+ product_investigator_agent.update_prompts({
226
+ "react_header": PromptTemplate(investigator_prompt)
227
+ })
228
+
229
+ print("βœ… All agents created successfully!")
230
+
231
+ except Exception as e:
232
+ print(f"❌ Error creating agents: {e}")
233
+ import traceback
234
+ traceback.print_exc()
235
+ return
236
+
237
+ # Create workflow with callback manager
238
+ try:
239
+ workflow = AgentWorkflow(
240
+ agents=[manager_agent, product_hunter_agent,
241
+ product_investigator_agent, trivial_search_agent],
242
+ root_agent="manager_agent",
243
+ # callback_manager=callback_manager,
244
+ verbose=True,
245
+ )
246
+ print("βœ… Workflow created successfully!")
247
+
248
+ except Exception as e:
249
+ print(f"❌ Error creating workflow: {e}")
250
+ import traceback
251
+ traceback.print_exc()
252
+ return
253
+
254
+ # To keep memory
255
+ ctx = Context(workflow)
256
+
257
+ # Test prompts
258
+ prompt = "I want to buy a wired headset for gaming for 500 USD or less."
259
+
260
+ print(f"\n🎯 Starting Shopping Assistant Workflow")
261
+ print(f"πŸ“ Query: {prompt}")
262
+ print(f"⏰ Session Started: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
263
+
264
+ try:
265
+ handler = workflow.run(
266
+ user_msg=prompt,
267
+ ctx=ctx
268
+ )
269
+
270
+ # Create output file with timestamp
271
+ output_filename = f"agent_output_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"
272
+
273
+ # Agent name mapping for better identification
274
+ agent_mapping = {
275
+ agent.name: agent.name for agent in [
276
+ manager_agent, product_hunter_agent, trivial_search_agent,
277
+ product_investigator_agent,
278
+ ]
279
+ }
280
+
281
+ print(f"\nπŸ“„ Logging to: {output_filename}")
282
+
283
+ with open(output_filename, "w", encoding="utf-8") as f:
284
+ # Write session header
285
+ session_header = f"""
286
+ Shopping Assistant Session Log
287
+ ==============================
288
+ Session ID: {datetime.now().strftime('%Y%m%d_%H%M%S')}
289
+ Start Time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
290
+ Query: {prompt}
291
+ Active Agents: {', '.join(agent_mapping.keys())}
292
+ ==============================
293
+
294
+ """
295
+ f.write(session_header)
296
+ print(session_header)
297
+
298
+ current_agent = "Unknown"
299
+
300
+ async for ev in handler.stream_events():
301
+ timestamp = datetime.now().strftime("%H:%M:%S")
302
+ agent_name = get_agent_name_enhanced(ev, workflow)
303
+
304
+ # Update current agent tracking
305
+ if agent_name != "UnknownAgent":
306
+ current_agent = agent_name
307
+ else:
308
+ agent_name = current_agent
309
+
310
+ if isinstance(ev, ToolCallResult):
311
+ tool_message = format_output_message(
312
+ agent_name,
313
+ "TOOL EXECUTION",
314
+ f"πŸ›  Tool: {ev.tool_name}\n"
315
+ f"πŸ“₯ Input: {ev.tool_kwargs}\n"
316
+ f"πŸ“€ Output: {str(ev.tool_output)[:500]}{'...' if len(str(ev.tool_output)) > 500 else ''}",
317
+ timestamp
318
+ )
319
+ print(tool_message)
320
+ f.write(tool_message)
321
+
322
+ elif isinstance(ev, AgentStream):
323
+ delta = getattr(ev, "delta", "")
324
+ if delta.strip(): # Only log non-empty deltas
325
+ stream_message = f"[{timestamp}] {agent_name} πŸ’­: {delta}"
326
+ print(stream_message, end="", flush=True)
327
+ f.write(stream_message)
328
+
329
+ # Write session footer
330
+ session_footer = f"""
331
+
332
+ ==============================
333
+ Session Completed: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
334
+ ==============================
335
+ """
336
+ f.write(session_footer)
337
+ print(session_footer)
338
+
339
+ print("\nπŸŽ‰ Workflow execution completed!")
340
+
341
+ # Get final response
342
+ try:
343
+ resp = await handler
344
+ final_response = format_output_message(
345
+ "FINAL_RESPONSE",
346
+ "WORKFLOW RESULT",
347
+ str(resp)
348
+ )
349
+ print(final_response)
350
+
351
+ # Append final response to file
352
+ with open(output_filename, "a", encoding="utf-8") as f:
353
+ f.write(final_response)
354
+
355
+ except Exception as e:
356
+ error_message = f"❌ Error getting final response: {str(e)}"
357
+ print(error_message)
358
+ with open(output_filename, "a", encoding="utf-8") as f:
359
+ f.write(f"\n{error_message}\n")
360
+
361
+ except Exception as e:
362
+ print(f"❌ Error running workflow: {e}")
363
+ import traceback
364
+ traceback.print_exc()
365
+
366
+
367
+ if __name__ == "__main__":
368
+ asyncio.run(main())
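The timestamped log naming used in `main()` can be factored into a small helper; this is an illustrative sketch (the `make_log_filename` name is hypothetical, not part of the commit):

```python
from datetime import datetime

def make_log_filename(prefix: str = "agent_output") -> str:
    # Mirrors the f-string used in main(): agent_output_YYYYMMDD_HHMMSS.txt
    return f"{prefix}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.txt"

name = make_log_filename()
```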
Solution/prompts.yaml ADDED
@@ -0,0 +1,321 @@
1
+ "system_prompt": |-
2
+ You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
3
+ To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
4
+ To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.
5
+
6
+ At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
7
+ Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
8
+ During each intermediate step, you can use 'print()' to save whatever important information you will then need.
9
+ These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
10
+ In the end you have to return a final answer using the `final_answer` tool.
11
+
12
+ Here are a few examples using notional tools:
13
+ ---
14
+ Task: "Generate an image of the oldest person in this document."
15
+
16
+ Thought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
17
+ Code:
18
+ ```py
19
+ answer = document_qa(document=document, question="Who is the oldest person mentioned?")
20
+ print(answer)
21
+ ```<end_code>
22
+ Observation: "The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland."
23
+
24
+ Thought: I will now generate an image showcasing the oldest person.
25
+ Code:
26
+ ```py
27
+ image = image_generator("A portrait of John Doe, a 55-year-old man living in Canada.")
28
+ final_answer(image)
29
+ ```<end_code>
30
+
31
+ ---
32
+ Task: "What is the result of the following operation: 5 + 3 + 1294.678?"
33
+
34
+ Thought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool
35
+ Code:
36
+ ```py
37
+ result = 5 + 3 + 1294.678
38
+ final_answer(result)
39
+ ```<end_code>
40
+
41
+ ---
42
+ Task:
43
+ "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French.
44
+ You have been provided with these additional arguments, that you can access using the keys as variables in your python code:
45
+ {'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}"
46
+
47
+ Thought: I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.
48
+ Code:
49
+ ```py
50
+ translated_question = translator(question=question, src_lang="French", tgt_lang="English")
51
+ print(f"The translated question is {translated_question}.")
52
+ answer = image_qa(image=image, question=translated_question)
53
+ final_answer(f"The answer is {answer}")
54
+ ```<end_code>
55
+
56
+ ---
57
+ Task:
58
+ In a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.
59
+ What does he say was the consequence of Einstein learning too much math on his creativity, in one word?
60
+
61
+ Thought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.
62
+ Code:
63
+ ```py
64
+ pages = search(query="1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein")
65
+ print(pages)
66
+ ```<end_code>
67
+ Observation:
68
+ No result found for query "1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein".
69
+
70
+ Thought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.
71
+ Code:
72
+ ```py
73
+ pages = search(query="1979 interview Stanislaus Ulam")
74
+ print(pages)
75
+ ```<end_code>
76
+ Observation:
77
+ Found 6 pages:
78
+ [Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)
79
+
80
+ [Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)
81
+
82
+ (truncated)
83
+
84
+ Thought: I will read the first 2 pages to know more.
85
+ Code:
86
+ ```py
87
+ for url in ["https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/", "https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/"]:
88
+ whole_page = visit_webpage(url)
89
+ print(whole_page)
90
+ print("\n" + "="*80 + "\n") # Print separator between pages
91
+ ```<end_code>
92
+ Observation:
93
+ Manhattan Project Locations:
94
+ Los Alamos, NM
95
+ Stanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at
96
+ (truncated)
97
+
98
+ Thought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: "He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity." Let's answer in one word.
99
+ Code:
100
+ ```py
101
+ final_answer("diminished")
102
+ ```<end_code>
103
+
104
+ ---
105
+ Task: "Which city has the highest population: Guangzhou or Shanghai?"
106
+
107
+ Thought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.
108
+ Code:
109
+ ```py
110
+ for city in ["Guangzhou", "Shanghai"]:
111
+ print(f"Population {city}:", search(f"{city} population"))
112
+ ```<end_code>
113
+ Observation:
114
+ Population Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']
115
+ Population Shanghai: '26 million (2019)'
116
+
117
+ Thought: Now I know that Shanghai has the highest population.
118
+ Code:
119
+ ```py
120
+ final_answer("Shanghai")
121
+ ```<end_code>
122
+
123
+ ---
124
+ Task: "What is the current age of the pope, raised to the power 0.36?"
125
+
126
+ Thought: I will use the tool `wiki` to get the age of the pope, and confirm that with a web search.
127
+ Code:
128
+ ```py
129
+ pope_age_wiki = wiki(query="current pope age")
130
+ print("Pope age as per wikipedia:", pope_age_wiki)
131
+ pope_age_search = web_search(query="current pope age")
132
+ print("Pope age as per google search:", pope_age_search)
133
+ ```<end_code>
134
+ Observation:
135
+ Pope age: "The pope Francis is currently 88 years old."
136
+
137
+ Thought: I know that the pope is 88 years old. Let's compute the result using python code.
138
+ Code:
139
+ ```py
140
+ pope_current_age = 88 ** 0.36
141
+ final_answer(pope_current_age)
142
+ ```<end_code>
143
+
144
+ The above examples used notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:
145
+ {%- for tool in tools.values() %}
146
+ - {{ tool.name }}: {{ tool.description }}
147
+ Takes inputs: {{tool.inputs}}
148
+ Returns an output of type: {{tool.output_type}}
149
+ {%- endfor %}
150
+
151
+ {%- if managed_agents and managed_agents.values() | list %}
152
+ You can also give tasks to team members.
153
+ Calling a team member works the same as calling a tool: the only argument you can give in the call is 'task', a long string explaining your task.
154
+ Given that this team member is a real human, you should be very verbose in your task.
155
+ Here is a list of the team members that you can call:
156
+ {%- for agent in managed_agents.values() %}
157
+ - {{ agent.name }}: {{ agent.description }}
158
+ {%- endfor %}
159
+ {%- else %}
160
+ {%- endif %}
161
+
162
+ Here are the rules you should always follow to solve your task:
163
+ 1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
164
+ 2. Use only variables that you have defined!
165
+ 3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
166
+ 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
167
+ 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
168
+ 6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
169
+ 7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
170
+ 8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
171
+ 9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
172
+ 10. Don't give up! You're in charge of solving the task, not providing directions to solve it.
173
+
174
+ Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
175
+ "planning":
176
+ "initial_facts": |-
177
+ Below I will present you a task.
178
+
179
+ You will now build a comprehensive preparatory survey of which facts we have at our disposal and which ones we still need.
180
+ To do so, you will have to read the task and identify things that must be discovered in order to successfully complete it.
181
+ Don't make any assumptions. For each item, provide a thorough reasoning. Here is how you will structure this survey:
182
+
183
+ ---
184
+ ### 1. Facts given in the task
185
+ List here the specific facts given in the task that could help you (there might be nothing here).
186
+
187
+ ### 2. Facts to look up
188
+ List here any facts that we may need to look up.
189
+ Also list where to find each of these, for instance a website, a file... - maybe the task contains some sources that you should re-use here.
190
+
191
+ ### 3. Facts to derive
192
+ List here anything that we want to derive from the above by logical reasoning, for instance computation or simulation.
193
+
194
+ Keep in mind that "facts" will typically be specific names, dates, values, etc. Your answer should use the below headings:
195
+ ### 1. Facts given in the task
196
+ ### 2. Facts to look up
197
+ ### 3. Facts to derive
198
+ Do not add anything else.
199
+ "initial_plan": |-
200
+ You are a world expert at making efficient plans to solve any task using a set of carefully crafted tools.
201
+
202
+ Now for the given task, develop a step-by-step high-level plan taking into account the above inputs and list of facts.
203
+ This plan should involve individual tasks based on the available tools, that if executed correctly will yield the correct answer.
204
+ Do not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.
205
+ After writing the final step of the plan, write the '\n<end_plan>' tag and stop there.
206
+
207
+ Here is your task:
208
+
209
+ Task:
210
+ ```
211
+ {{task}}
212
+ ```
213
+ You can leverage these tools:
214
+ {%- for tool in tools.values() %}
215
+ - {{ tool.name }}: {{ tool.description }}
216
+ Takes inputs: {{tool.inputs}}
217
+ Returns an output of type: {{tool.output_type}}
218
+ {%- endfor %}
219
+
220
+ {%- if managed_agents and managed_agents.values() | list %}
221
+ You can also give tasks to team members.
222
+ Calling a team member works the same as calling a tool: the only argument you can give in the call is 'request', a long string explaining your request.
223
+ Given that this team member is a real human, you should be very verbose in your request.
224
+ Here is a list of the team members that you can call:
225
+ {%- for agent in managed_agents.values() %}
226
+ - {{ agent.name }}: {{ agent.description }}
227
+ {%- endfor %}
228
+ {%- else %}
229
+ {%- endif %}
230
+
231
+ List of facts that you know:
232
+ ```
233
+ {{answer_facts}}
234
+ ```
235
+
236
+ Now begin! Write your plan below.
237
+ "update_facts_pre_messages": |-
238
+ You are a world expert at gathering known and unknown facts based on a conversation.
239
+ Below you will find a task, and a history of attempts made to solve the task. You will have to produce a list of these:
240
+ ### 1. Facts given in the task
241
+ ### 2. Facts that we have learned
242
+ ### 3. Facts still to look up
243
+ ### 4. Facts still to derive
244
+ Find the task and history below:
245
+ "update_facts_post_messages": |-
246
+ Earlier we've built a list of facts.
247
+ But since then, your previous steps may have taught you useful new facts or invalidated some false ones.
248
+ Please update your list of facts based on the previous history, and provide these headings:
249
+ ### 1. Facts given in the task
250
+ ### 2. Facts that we have learned
251
+ ### 3. Facts still to look up
252
+ ### 4. Facts still to derive
253
+
254
+ Now write your new list of facts below.
255
+ "update_plan_pre_messages": |-
256
+ You are a world expert at making efficient plans to solve any task using a set of carefully crafted tools.
257
+
258
+ You have been given a task:
259
+ ```
260
+ {{task}}
261
+ ```
262
+
263
+ Find below the record of what has been tried so far to solve it. Then you will be asked to make an updated plan to solve the task.
264
+ If the previous tries so far have met some success, you can make an updated plan based on these actions.
265
+ If you are stalled, you can make a completely new plan starting from scratch.
266
+ "update_plan_post_messages": |-
267
+ You're still working towards solving this task:
268
+ ```
269
+ {{task}}
270
+ ```
271
+
272
+ You can leverage these tools:
273
+ {%- for tool in tools.values() %}
274
+ - {{ tool.name }}: {{ tool.description }}
275
+ Takes inputs: {{tool.inputs}}
276
+ Returns an output of type: {{tool.output_type}}
277
+ {%- endfor %}
278
+
279
+ {%- if managed_agents and managed_agents.values() | list %}
280
+ You can also give tasks to team members.
281
+ Calling a team member works the same as calling a tool: the only argument you can give in the call is 'task'.
282
+ Given that this team member is a real human, you should be very verbose in your task; it should be a long string providing information as detailed as necessary.
283
+ Here is a list of the team members that you can call:
284
+ {%- for agent in managed_agents.values() %}
285
+ - {{ agent.name }}: {{ agent.description }}
286
+ {%- endfor %}
287
+ {%- else %}
288
+ {%- endif %}
289
+
290
+ Here is the up to date list of facts that you know:
291
+ ```
292
+ {{facts_update}}
293
+ ```
294
+
295
+ Now for the given task, develop a step-by-step high-level plan taking into account the above inputs and list of facts.
296
+ This plan should involve individual tasks based on the available tools, that if executed correctly will yield the correct answer.
297
+ Beware that you have {remaining_steps} steps remaining.
298
+ Do not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.
299
+ After writing the final step of the plan, write the '\n<end_plan>' tag and stop there.
300
+
301
+ Now write your new plan below.
302
+ "managed_agent":
303
+ "task": |-
304
+ You're a helpful agent named '{{name}}'.
305
+ You have been submitted this task by your manager.
306
+ ---
307
+ Task:
308
+ {{task}}
309
+ ---
310
+ You're helping your manager solve a wider task: so make sure to not provide a one-line answer, but give as much information as possible to give them a clear understanding of the answer.
311
+
312
+ Your final_answer WILL HAVE to contain these parts:
313
+ ### 1. Task outcome (short version):
314
+ ### 2. Task outcome (extremely detailed version):
315
+ ### 3. Additional context (if relevant):
316
+
317
+ Put all these in your final_answer tool, everything that you do not pass as an argument to final_answer will be lost.
318
+ And even if your task resolution is not successful, please return as much context as possible, so that your manager can act upon this feedback.
319
+ "report": |-
320
+ Here is the final answer from your managed agent '{{name}}':
321
+ {{final_answer}}
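The Jinja-style `{%- for tool in tools.values() %}` blocks above expand into a bullet list of tool signatures at prompt-build time. A rough Python equivalent of that expansion, using a hypothetical `tools` mapping for illustration:

```python
# Hypothetical tool registry mimicking the `tools` mapping the template expects.
tools = {
    "search": {
        "name": "search",
        "description": "Web search.",
        "inputs": "{'query': str}",
        "output_type": "str",
    },
}

def render_tool_list(tools: dict) -> str:
    # Expands each tool entry the way the Jinja loop in the prompt does.
    lines = []
    for tool in tools.values():
        lines.append(f"- {tool['name']}: {tool['description']}")
        lines.append(f"    Takes inputs: {tool['inputs']}")
        lines.append(f"    Returns an output of type: {tool['output_type']}")
    return "\n".join(lines)

rendered = render_tool_list(tools)
```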
Solution/pyproject.toml ADDED
@@ -0,0 +1,18 @@
1
+ [project]
2
+ name = "shopping-agent"
3
+ version = "0.1.0"
4
+ description = "Shopping agent to automate your shopping!"
5
+ readme = "README.md"
6
+ requires-python = ">=3.11"
7
+ dependencies = [
8
+ "arize-phoenix>=10.10.0",
9
+ "gradio-client>=1.10.3",
10
+ "llama-index>=0.12.41",
11
+ "llama-index-core>=0.12.41",
12
+ "llama-index-llms-gemini>=0.5.0",
13
+ "llama-index-tools-duckduckgo>=0.3.0",
14
+ "llama-index-utils-workflow>=0.3.2",
15
+ "markdownify>=1.1.0",
16
+ "openinference-instrumentation-llama-index>=4.3.0",
17
+ "python-dotenv>=1.1.0",
18
+ ]
Solution/src/agents/manager.py ADDED
@@ -0,0 +1,8 @@
1
+ class ManagerIdentity:
2
+     def __init__(self):
3
+         self.name = "manager_agent"
4
+         self.description = "The main agent responsible for fulfilling the user's request in the task of automated shopping."
5
+         self.system_prompt = \
6
+         '''
7
+         '''
8
+         self.tools = []
Solution/src/agents/product_hunter.py ADDED
File without changes
Solution/src/agents/prompts.yaml ADDED
@@ -0,0 +1,122 @@
1
+ manager_agent:
2
+ name: "manager_agent"
3
+ description: "The main agent responsible for fulfilling the user's request in the task of automated shopping."
4
+ system_prompt: |-
5
+ You are the Manager Agent, the orchestrator of a multi-agent shopping system. Your primary role is to understand user needs, break down complex requests, and coordinate other agents to create a complete, compatible, and budget-conscious shopping list.
6
+
7
+ **Core Mission:**
8
+ 1. **Engage User:** Directly communicate with the user to clarify their shopping request, including all requirements, preferences, and constraints.
9
+ 2. **Deconstruct & Strategize:** Analyze the user's request. For complex systems (e.g., "a complete gaming PC build"), you must break it down into a logical sequence of components to search for.
10
+ 3. **Orchestrate Agents:** Sequentially delegate tasks to the appropriate agents using the handoff tool.
11
+ - Use handoff with to_agent="product_hunter_agent" to find specific components.
12
+ - For complex builds, ensure each component choice informs the requirements for the next.
13
+ - Use handoff with to_agent="product_investigator_agent" for a deep-dive on a critical component.
14
+ - Use handoff with to_agent="trivial_search_agent" to validate choices against community feedback.
15
+ 4. **Compile & Deliver:** Assemble the final, coherent shopping list with valid purchase URLs and ensure total cost is within budget.
16
+
17
+ **Critical Directives:**
18
+ - You MUST return a list of products with direct purchase URLs.
19
+ - For multi-part systems, ensure all components are compatible.
20
+ - Verify every recommended product is in stock by delegating effectively.
21
+ - If requirements cannot be met, clearly explain why and suggest alternatives.
22
+
23
+ **Available Tools:**
24
+ - handoff: Use this to delegate tasks to other specialized agents
25
+ Format: handoff(to_agent="agent_name", reason="reason for handoff")
26
+
27
+ **Available Agents for Handoff:**
28
+ - product_hunter_agent: Returns top products URLs matching requirements with brief descriptions
29
+ - product_investigator_agent: Gets detailed information about any specific product
30
+ - trivial_search_agent: Validates choices against community feedback and recommendations
31
+
32
+ **Example Usage:**
33
+ To find a gaming laptop: handoff(to_agent="product_hunter_agent", reason="Need to find gaming laptops under $1500 with RTX 4060 or better")
34
+
35
+ **Final Answer Format:**
36
+ Present final recommendations as:
37
+ - **[Product Name]** - [Price] - [Vendor] - [Direct Purchase URL]
38
+ - Justification for selection
39
+ - Total cost summary
40
+
41
+ product_hunter_agent:
42
+ name: "product_hunter_agent"
43
+ description: "This agent is responsible for returning top product URLs matching the requirements, with a brief description of each product."
44
+ system_prompt: |-
45
+ You are the Product Hunter Agent. Your mission is to find specific, in-stock products from reputable vendors that match precise requirements.
46
+
47
+ **Core Mission:**
48
+ Given a request for a specific product type with budget and criteria:
49
+ 1. **Search & Identify:** Search reputable online retailers for matching products
50
+ 2. **Deeply Investigate:** Visit actual product pages to verify all details
51
+ 3. **Verify & Validate:**
52
+ - Stock: Confirm product is IN STOCK and available for delivery
53
+ - Price: Ensure price is within specified budget
54
+ - Specs: Verify specifications match all requirements
55
+ - URL: Ensure link is direct, working purchase URL
56
+ 4. **Report Findings:** Return 3-5 verified, in-stock product options
57
+
58
+ **You MUST return valid, in-stock purchase URLs.**
59
+
60
+ **Available Tools:**
61
+ - duckduckgo_full_search: Search the web for products
62
+ - visit_webpage: Visit and read webpage content
63
+ - Get_info_from_url: Extract detailed product information from URLs
64
+ - handoff: Transfer to another agent if needed
65
+ Format: handoff(to_agent="agent_name", reason="reason for handoff")
66
+
67
+ trivial_search_agent:
68
+ name: "trivial_search_agent"
69
+ description: "This agent validates product choices against community feedback and recommendations."
70
+ system_prompt: |-
71
+ You are the Community Validation Agent. Your purpose is to act as a "sanity check" by comparing proposed product choices against real-world human experiences and recommendations.
72
+
73
+ **Core Mission:**
74
+ Given a product or set of components:
75
+ 1. **Search Communities:** Look through Reddit, tech forums, YouTube for discussions
76
+ 2. **Gather Insights:** Find:
77
+ - Common praise or complaints
78
+ - Widely recommended alternatives
79
+ - Real-world performance or compatibility issues
80
+ - Pre-built systems with similar components
81
+ 3. **Synthesize & Report:** Provide community sentiment summary
82
+
83
+ **Output Structure:**
84
+ COMMUNITY INSIGHTS:
85
+ - General Sentiment: [Positive/Negative/Mixed]
86
+ - Common Praise: [specific feedback]
87
+ - Common Complaints: [specific issues]
88
+ - Popular Alternatives: [recommended alternatives]
89
+ VALIDATION: [Confirmation or warning about current choices]
90
+
91
+ **Available Tools:**
92
+ - duckduckgo_full_search: Search for community discussions
93
+ - visit_webpage: Visit and read webpage content
94
+ - Get_info_from_url: Extract detailed product information from URLs
95
+ - handoff: Transfer to another agent if needed
96
+ Format: handoff(to_agent="agent_name", reason="reason for handoff")
97
+
98
+ product_investigator_agent:
99
+ name: "product_investigator_agent"
100
+ description: "This agent is responsible for getting detailed information about any product."
101
+ system_prompt: |-
102
+ You are the Product Investigator Agent. Your mission is to conduct deep-dive analysis of specific product candidates to ensure quality, performance, and suitability.
103
+
104
+ **Core Mission:**
105
+ When given a specific product to investigate:
106
+ 1. **Analyze Specifications:** Find detailed technical specs from manufacturer sites and reviews
107
+ 2. **Verify Compatibility:** Check compatibility with other specified components
108
+ 3. **Assess Performance:** Research professional benchmarks, user reviews, performance comparisons
109
+ 4. **Evaluate Quality & Value:** Investigate build quality, reliability, price-to-performance ratio
110
+ 5. **Deliver Verdict:** Provide comprehensive report with pros/cons and final recommendation
111
+
112
+ **Recommendation Options:**
113
+ - "Approve" - Product meets all requirements
114
+ - "Reject" - Product has significant issues
115
+ - "Approve with Warning" - Product is acceptable but has noted concerns
116
+
117
+ **Available Tools:**
118
+ - duckduckgo_full_search: Search for detailed product information
119
+ - visit_webpage: Visit and read webpage content
120
+ - Get_info_from_url: Extract detailed product information from URLs
121
+ - handoff: Transfer to another agent if needed
122
+ Format: handoff(to_agent="agent_name", reason="reason for handoff")
Solution/src/llms/gemini_2_flash.py ADDED
@@ -0,0 +1,4 @@
1
+ from llama_index.llms.gemini import Gemini
2
+
3
+ def create_gemini():
4
+ return Gemini(model="models/gemini-2.0-flash")
Solution/src/tools/query_on_url.py ADDED
@@ -0,0 +1,44 @@
1
+ from gradio_client import Client
2
+ import time
3
+ import requests
4
+ from typing import Any, Optional
5
+ from llama_index.core.tools import FunctionTool
6
+
7
+
8
+ client = Client("Agents-MCP-Hackathon/MCP_Server_Web2JSON")
9
+
10
+
11
+ def query_url_tool(url: str) -> str:
12
+ """
13
+ Queries a product URL via the Web2JSON service and returns its key attributes as JSON.
14
+
15
+ Args:
16
+ url (str): The URL to query.
17
+
18
+ Returns:
19
+ str: JSON-formatted product information.
20
+ """
21
+ # Sleep for 3 seconds to avoid overwhelming the server
22
+ time.sleep(3)
23
+ try:
24
+ result = client.predict(
25
+ content=url,
26
+ is_url=True,
27
+ schema_name="Product",
28
+ api_name="/predict"
29
+ )
30
+ return result
31
+
32
+ except requests.exceptions.Timeout:
33
+ return "The request timed out. Please try again later or check the URL."
34
+ except requests.exceptions.RequestException as e:
35
+ return f"Error fetching the webpage: {str(e)}"
36
+ except Exception as e:
37
+ return f"An unexpected error occurred: {str(e)}"
38
+
39
+
40
+ Get_info_from_url_tool = FunctionTool.from_defaults(
41
+ name="Get_info_from_url",
42
+ fn=query_url_tool,
43
+ description="Given a product's URL, it returns a JSON object that contains all the important attributes about a product."
44
+ )
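The error handling in `query_url_tool` follows a catch-and-report pattern: failures are converted into message strings the agent can read instead of raised exceptions. A simplified, dependency-free sketch of the same pattern (names here are illustrative, not from the commit):

```python
def safe_call(fn, *args, **kwargs):
    # Run fn, converting failures into the same style of message strings
    # that query_url_tool returns to the calling agent.
    try:
        return fn(*args, **kwargs)
    except TimeoutError:
        return "The request timed out. Please try again later or check the URL."
    except Exception as e:
        return f"An unexpected error occurred: {e}"

ok = safe_call(lambda: 2 + 2)
err = safe_call(lambda: 1 / 0)
```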
Solution/src/tools/visit_webpage.py ADDED
@@ -0,0 +1,89 @@
1
+ import re
2
+ import time
3
+ import requests
4
+ import markdownify
5
+ from typing import Any, Optional
6
+ from llama_index.core.tools import FunctionTool
7
+ from bs4 import BeautifulSoup
8
+ from bs4 import Comment
9
+
10
+ def visit_webpage(url: str) -> str:
11
+ """
12
+ Visits a webpage at the given url and returns its cleaned text content.
13
+
14
+ Args:
15
+ url (str): The url of the webpage to visit.
16
+
17
+ Returns:
18
+ str: The webpage's text content with whitespace collapsed.
19
+ """
20
+ try:
21
+
22
+ # Sleep for 3 seconds to avoid overwhelming the server
23
+ time.sleep(3)
24
+
25
+ # Send a GET request to the URL with a 20-second timeout
26
+ headers = {
27
+ "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
28
+ "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
29
+ "Accept-Language": "en-US,en;q=0.6",
30
+ "Cache-Control": "max-age=0",
31
+ "Sec-Ch-Ua": "\"Not(A:Brand\";v=\"99\", \"Brave\";v=\"133\", \"Chromium\";v=\"133\"",
32
+ "Sec-Ch-Ua-Mobile": "?0",
33
+ "Sec-Ch-Ua-Platform": "\"Windows\"",
34
+ "Sec-Fetch-Dest": "document",
35
+ "Sec-Fetch-Mode": "navigate",
36
+ "Sec-Fetch-Site": "none",
37
+ "Sec-Fetch-User": "?1",
38
+ "Upgrade-Insecure-Requests": "1",
39
+ }
40
+
41
+ # Make the HTTP GET request with a timeout.
42
+ response = requests.get(url, headers=headers, timeout=20)
43
+ # response = requests.get(url, timeout=20)
44
+ response.raise_for_status() # Raise an exception for bad status codes
45
+
46
+ # Parse the HTML content
47
+ soup = BeautifulSoup(response.text, "html.parser")
48
+
49
+ # Remove script and style elements
50
+ for tag in soup(["script", "style"]):
51
+ tag.decompose()
52
+
53
+ # Remove HTML comments
54
+ for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
55
+ comment.extract()
56
+
57
+
58
+ text = soup.get_text(separator=" ", strip=True)
59
+ clean_text = re.sub(r'\s+', ' ', text)
60
+
61
+ # Convert the HTML content to Markdown
62
+ # markdown_content = markdownify.markdownify(soup.text).strip()
63
+
64
+ # Remove multiple line breaks
65
+ # markdown_content = re.sub(r"\n{3,}", "\n\n", markdown_content)
66
+
67
+ # Truncate to reasonable size
68
+ # max_length = 10000
69
+ # if len(markdown_content) > max_length:
70
+ # markdown_content = markdown_content[:max_length] + \
71
+ # "... (content truncated)"
72
+
73
+ return clean_text[:10]
74
+
75
+ except requests.exceptions.Timeout:
76
+ return "The request timed out. Please try again later or check the URL."
77
+ except requests.exceptions.RequestException as e:
78
+ return f"Error fetching the webpage: {str(e)}"
79
+ except Exception as e:
80
+ return f"An unexpected error occurred: {str(e)}"
81
+
82
+
83
+ # Create a LlamaIndex tool
84
+ visit_webpage_tool = FunctionTool.from_defaults(
85
+ name="visit_webpage",
86
+ fn=visit_webpage,
87
+ description="Visits a webpage at the given url and reads its content as a markdown string. Use this to browse webpages."
88
+ )
89
+
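The scrape-and-clean pipeline in `visit_webpage` can be exercised without any network access. A minimal sketch of the same BeautifulSoup steps applied to an inline HTML snippet (the snippet itself is illustrative):

```python
import re
from bs4 import BeautifulSoup, Comment

html = """
<html><head><style>body { color: red; }</style></head>
<body><!-- nav --><script>var x = 1;</script>
<h1>Acme Widget</h1><p>Price:   $19.99</p></body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Drop script/style elements and HTML comments, as the tool does
for tag in soup(["script", "style"]):
    tag.decompose()
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    comment.extract()

# Extract visible text and collapse whitespace runs
text = soup.get_text(separator=" ", strip=True)
clean_text = re.sub(r"\s+", " ", text)
print(clean_text)  # β†’ Acme Widget Price: $19.99
```

Only the heading and paragraph text survive; markup, scripts, styles, and comments are gone, which is what keeps the string small enough to hand to an LLM tool call.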
Solution/src/tools/web_search.py ADDED
@@ -0,0 +1,31 @@
+ import time
+ from typing import Any, Optional
+ from llama_index.core.tools import FunctionTool
+ from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec
+
+
+ def duck_search_tool(query: str) -> str:
+     """
+     Search the internet for a specific query.
+
+     Args:
+         query (str): The query to search with.
+
+     Returns:
+         str: The search results.
+     """
+     # Sleep for 3 seconds to avoid overwhelming the server
+     time.sleep(3)
+
+     tool_spec = DuckDuckGoSearchToolSpec()
+     result = tool_spec.duckduckgo_full_search(query)
+
+     return result
+
+
+ search_tool = FunctionTool.from_defaults(
+     name="duckduckgo_full_search",
+     fn=duck_search_tool,
+     description="Make a query to DuckDuckGo search and receive the full search results."
+ )
Solution/todo.md ADDED
@@ -0,0 +1,6 @@
+ * Online Shopping and Must Return Links or say there is none
+ * Model
+ * Search Engine
+
+
+ * Traces and Logs
Solution/uv.lock ADDED
The diff for this file is too large to render.
 
app.py CHANGED
@@ -1,29 +1,257 @@
  import gradio as gr
  import json
  import time
  from typing import List, Dict, Tuple
 
- # Sample product data for demonstration
- SAMPLE_PRODUCTS = [
-     {
-         "productName": "Wireless Noise-Cancelling Headphones",
-         "productLink": "https://example.com/headphones",
-         "productPrice": "$199.99",
-         "productImageLink": "https://via.placeholder.com/300x200/667eea/ffffff?text=Headphones"
-     },
-     {
-         "productName": "Smart Fitness Watch",
-         "productLink": "https://example.com/smartwatch",
-         "productPrice": "$299.99",
-         "productImageLink": "https://via.placeholder.com/300x200/764ba2/ffffff?text=Smartwatch"
-     },
-     {
-         "productName": "Ergonomic Office Chair",
-         "productLink": "https://example.com/chair",
-         "productPrice": "$449.99",
-         "productImageLink": "https://via.placeholder.com/300x200/06b6d4/ffffff?text=Office+Chair"
-     }
- ]
 
  def format_product_card(product: Dict) -> str:
      """Format a single product as HTML card"""
@@ -80,70 +308,58 @@ def format_products_html(products: List[Dict]) -> str:
      </div>
      """
 
- def simulate_ai_agent(message: str, history: List[Tuple[str, str]]) -> Tuple[str, str]:
-     """
-     Simulate your AI agent response
-     Replace this with your actual LlamaIndex agent logic
-
-     Returns: (response_text, products_html)
-     """
-     # Simulate processing time
-     time.sleep(2)
-
-     # Sample responses based on keywords
-     if "laptop" in message.lower() or "computer" in message.lower():
-         response = "Great! I found some excellent laptops for programming and gaming within your budget. These models offer fantastic performance and great value."
-         products = [
-             {
-                 "productName": "ASUS TUF Gaming Laptop - RTX 3060",
-                 "productLink": "https://example.com/asus-gaming",
-                 "productPrice": "$899.99",
-                 "productImageLink": "https://via.placeholder.com/300x200/667eea/ffffff?text=ASUS+TUF"
-             },
-             {
-                 "productName": "Acer Nitro 5 Gaming Laptop",
-                 "productLink": "https://example.com/acer-nitro5",
-                 "productPrice": "$799.99",
-                 "productImageLink": "https://via.placeholder.com/300x200/764ba2/ffffff?text=Acer+Nitro"
-             },
-             {
-                 "productName": "HP Pavilion Gaming 15",
-                 "productLink": "https://example.com/hp-pavilion",
-                 "productPrice": "$749.99",
-                 "productImageLink": "https://via.placeholder.com/300x200/06b6d4/ffffff?text=HP+Pavilion"
-             }
-         ]
-     elif "headphone" in message.lower() or "audio" in message.lower():
-         response = "Perfect! I've found some top-rated headphones with excellent sound quality and noise cancellation."
-         products = [
-             {
-                 "productName": "Sony WH-1000XM4 Wireless Headphones",
-                 "productLink": "https://example.com/sony-wh1000xm4",
-                 "productPrice": "$279.99",
-                 "productImageLink": "https://via.placeholder.com/300x200/667eea/ffffff?text=Sony+WH1000XM4"
-             },
-             {
-                 "productName": "Bose QuietComfort 45",
-                 "productLink": "https://example.com/bose-qc45",
-                 "productPrice": "$329.99",
-                 "productImageLink": "https://via.placeholder.com/300x200/764ba2/ffffff?text=Bose+QC45"
-             }
-         ]
-     else:
-         response = "I understand you're looking for something specific! Let me find the best options for you based on your requirements."
-         products = SAMPLE_PRODUCTS
-
-     products_html = format_products_html(products)
 
-     return response, products_html
 
  def chat_interface(message: str, history: List[Tuple[str, str]], products_state: str) -> Tuple[List[Tuple[str, str]], str, str]:
      """Handle chat interaction and update products"""
      if not message.strip():
          return history, products_state, ""
 
-     # Get AI response and products
-     ai_response, new_products_html = simulate_ai_agent(message, history)
 
      # Update chat history
      history.append((message, ai_response))
@@ -173,13 +389,11 @@ custom_css = """
      color: var(--text-primary) !important;
  }
 
- /* Override Gradio's default backgrounds */
  .gradio-container .block {
      background: transparent !important;
      border: none !important;
  }
 
- /* Header styling */
  .header-text {
      background: var(--accent-primary);
      color: white !important;
@@ -192,7 +406,6 @@ custom_css = """
      backdrop-filter: blur(10px);
  }
 
- /* Products panel styling */
  .products-panel {
      background: var(--bg-card) !important;
      border-radius: 20px !important;
@@ -202,7 +415,6 @@ custom_css = """
      backdrop-filter: blur(10px) !important;
  }
 
- /* Chat panel styling */
  .chat-panel {
      background: var(--bg-chat) !important;
      border-radius: 20px !important;
@@ -212,7 +424,6 @@ custom_css = """
      padding: 25px !important;
  }
 
- /* Chatbot container */
  .chatbot {
      background: rgba(26, 32, 44, 0.6) !important;
      border-radius: 15px !important;
@@ -221,12 +432,6 @@ custom_css = """
      backdrop-filter: blur(5px) !important;
  }
 
- /* Chat messages styling */
- .chatbot .message {
-     background: transparent !important;
- }
-
- /* User messages */
  .chatbot .message.user {
      background: var(--accent-primary) !important;
      color: white !important;
@@ -237,7 +442,6 @@ custom_css = """
      border: 1px solid rgba(102, 126, 234, 0.3) !important;
  }
 
- /* Bot messages */
  .chatbot .message.bot {
      background: rgba(45, 55, 72, 0.8) !important;
      color: var(--text-primary) !important;
@@ -249,7 +453,6 @@ custom_css = """
      backdrop-filter: blur(5px) !important;
  }
 
- /* Input styling */
  .chatbot-input textarea {
      background: rgba(26, 32, 44, 0.8) !important;
      border: 2px solid var(--border-color) !important;
@@ -272,7 +475,6 @@ custom_css = """
      background: rgba(26, 32, 44, 0.9) !important;
  }
 
- /* Button styling */
  .chatbot-submit {
      background: var(--accent-primary) !important;
      border: none !important;
@@ -291,7 +493,6 @@ custom_css = """
      filter: brightness(1.1) !important;
  }
 
- /* Scrollbar styling */
  ::-webkit-scrollbar {
      width: 10px;
  }
@@ -307,62 +508,25 @@ custom_css = """
      border: 2px solid rgba(26, 32, 44, 0.3);
  }
 
- ::-webkit-scrollbar-thumb:hover {
-     background: linear-gradient(135deg, #7c3aed 0%, #8b5cf6 100%);
- }
-
- /* Product cards hover effect */
  .product-card:hover {
      transform: translateY(-3px) !important;
      box-shadow: 0 12px 30px rgba(102, 126, 234, 0.2) !important;
  }
 
- /* Text color overrides */
  .products-panel h3,
  .products-panel p,
  .chat-panel h3 {
      color: var(--text-primary) !important;
  }
 
- /* Gradio component overrides */
- .gr-form {
-     background: transparent !important;
- }
-
- .gr-box {
-     background: transparent !important;
-     border: none !important;
- }
-
- /* Fix input group styling */
- .gr-group {
-     background: transparent !important;
- }
-
- /* Textarea wrapper */
- .gr-textbox {
-     background: transparent !important;
- }
-
- /* Button wrapper */
- .gr-button {
-     background: transparent !important;
- }
-
- /* Chat interface specific fixes */
  .chatbot-container {
      background: transparent !important;
  }
-
- /* Message bubble improvements */
- .chatbot .message-row {
-     margin: 10px 0 !important;
- }
-
- /* Better spacing for chat */
- .chatbot .chat-container {
-     padding: 15px !important;
- }
  """
 
  # Create the Gradio interface
@@ -409,7 +573,7 @@ with gr.Blocks(css=custom_css, title="Buyverse - AI Shopping Agent", theme=gr.th
 
  # Chat interface
  chatbot = gr.Chatbot(
-     value=[("", "πŸ‘‹ Hello! I'm your personal shopping assistant. Tell me what you're looking for and I'll help you find the perfect products. Whether it's electronics, clothing, home goods, or anything else - just describe your needs and budget!")],
      height=500,
      show_label=False,
      elem_classes=["chatbot"],
 
  import gradio as gr
  import json
  import time
+ import asyncio
  from typing import List, Dict, Tuple
+ from datetime import datetime
+ import yaml
+ import os
+ from dotenv import load_dotenv
+
+ # LlamaIndex and Phoenix imports
+ from llama_index.core.workflow import Context
+ from Solution.src.llms.gemini_2_flash import create_gemini
+ from llama_index.core.tools import QueryEngineTool
+ from llama_index.core.prompts import PromptTemplate
+ from llama_index.core.callbacks import CallbackManager, LlamaDebugHandler
+ from llama_index.utils.workflow import draw_all_possible_flows
+ from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
+ from llama_index.core.agent.workflow import AgentWorkflow, ReActAgent, FunctionAgent, ToolCallResult, AgentStream
+
+ # Custom imports
+ from Solution.src.tools.web_search import search_tool
+ from Solution.src.tools.visit_webpage import visit_webpage_tool
+ from Solution.src.tools.query_on_url import Get_info_from_url_tool
+
+ # Load environment variables
+ load_dotenv()
+
+ # Global variables for the workflow
+ workflow = None
+ ctx = None
+ agent_system_initialized = False
+
+ class CustomDebugHandler(LlamaDebugHandler):
+     """Custom debug handler for better traceability"""
+
+     def __init__(self):
+         super().__init__()
+         self.session_id = datetime.now().strftime("%Y%m%d_%H%M%S")
+
+ def create_callback_manager():
+     """Create a callback manager with custom handlers"""
+     debug_handler = CustomDebugHandler()
+     callback_manager = CallbackManager([debug_handler])
+     return callback_manager
+
+ def create_custom_react_prompt(agent_config):
+     """Create a properly formatted ReAct prompt that maintains the required structure"""
+     core_instructions = agent_config["system_prompt"]
+
+     react_prompt = f"""You are a helpful AI assistant that can use tools to answer questions.
+
+ {core_instructions}
+
+ When responding, you must follow this exact format:
+
+ Thought: I need to think about what the user is asking and determine if I need to use any tools.
+ Action: [tool_name if using a tool, or skip this line if not using a tool]
+ Action Input: [tool input in JSON format if using a tool, or skip this line if not using a tool]
+ Observation: [This will be filled in by the tool response]
+
+ Continue this Thought/Action/Action Input/Observation cycle until you have enough information to provide a final answer.
+
+ When you have enough information, provide your final response in this format:
+
+ Thought: I now have enough information to provide a complete answer.
+ Answer: [Your final answer here]
+
+ Remember to always start with a Thought and follow the exact format above."""
+
+     return react_prompt
 
+ def load_config(file_path):
+     """Load configuration from a YAML file."""
+     try:
+         with open(file_path, 'r') as file:
+             return yaml.safe_load(file)
+     except Exception as e:
+         print(f"Error loading config file: {e}")
+         return {}
+
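`load_config` swallows any read/parse error and falls back to an empty dict, so a missing prompts file degrades into missing-key errors later rather than a crash here. A quick sketch of that behavior (the temp file stands in for `Solution/src/agents/prompts.yaml`):

```python
import os
import tempfile
import yaml

def load_config(file_path):
    """Load configuration from a YAML file, returning {} on any error."""
    try:
        with open(file_path, 'r') as file:
            return yaml.safe_load(file)
    except Exception as e:
        print(f"Error loading config file: {e}")
        return {}

# Write a minimal agent config to a temporary YAML file
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write("manager_agent:\n  name: manager_agent\n  description: routes work\n")
    path = f.name

config = load_config(path)
print(config["manager_agent"]["name"])            # β†’ manager_agent
missing = load_config("definitely_missing.yaml")  # prints the error, returns {}
print(missing)                                    # β†’ {}
os.unlink(path)
```

Because the fallback is `{}`, callers such as `initialize_agent_system` will raise a `KeyError` on `config["manager_agent"]` when the file is absent, which the surrounding try/except then reports.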
+ async def initialize_agent_system():
+     """Initialize the agent system once"""
+     global workflow, ctx, agent_system_initialized
+
+     if agent_system_initialized:
+         return True
+
+     try:
+         # Create callback manager
+         callback_manager = create_callback_manager()
+
+         # Phoenix handler
+
+         # Create LLM with callback manager
+         llm = create_gemini()
+         if hasattr(llm, 'callback_manager'):
+             llm.callback_manager = callback_manager
+
+         # Load configuration
+         config = load_config('Solution/src/agents/prompts.yaml')
+
+         # Create agents
+         manager_agent = ReActAgent(
+             name=config["manager_agent"]["name"],
+             description=config["manager_agent"]["description"],
+             tools=[],
+             llm=llm,
+             callback_manager=callback_manager,
+             verbose=True,
+             can_handoff_to=["product_hunter_agent", "product_investigator_agent", "trivial_search_agent"],
+         )
+
+         manager_prompt = create_custom_react_prompt(config["manager_agent"])
+         manager_agent.update_prompts({
+             "react_header": PromptTemplate(manager_prompt)
+         })
+
+         product_hunter_agent = ReActAgent(
+             name=config["product_hunter_agent"]["name"],
+             description=config["product_hunter_agent"]["description"],
+             tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
+             llm=llm,
+             callback_manager=callback_manager,
+             verbose=True,
+             can_handoff_to=["manager_agent"],
+         )
+
+         hunter_prompt = create_custom_react_prompt(config["product_hunter_agent"])
+         product_hunter_agent.update_prompts({
+             "react_header": PromptTemplate(hunter_prompt)
+         })
+
+         trivial_search_agent = ReActAgent(
+             name=config["trivial_search_agent"]["name"],
+             description=config["trivial_search_agent"]["description"],
+             tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
+             llm=llm,
+             callback_manager=callback_manager,
+             verbose=True,
+             can_handoff_to=["manager_agent"],
+         )
+
+         trivial_prompt = create_custom_react_prompt(config["trivial_search_agent"])
+         trivial_search_agent.update_prompts({
+             "react_header": PromptTemplate(trivial_prompt)
+         })
+
+         product_investigator_agent = ReActAgent(
+             name=config["product_investigator_agent"]["name"],
+             description=config["product_investigator_agent"]["description"],
+             tools=[search_tool, visit_webpage_tool, Get_info_from_url_tool],
+             llm=llm,
+             callback_manager=callback_manager,
+             verbose=True,
+             can_handoff_to=["manager_agent"],
+         )
+
+         investigator_prompt = create_custom_react_prompt(config["product_investigator_agent"])
+         product_investigator_agent.update_prompts({
+             "react_header": PromptTemplate(investigator_prompt)
+         })
+
+         # Create workflow
+         workflow = AgentWorkflow(
+             agents=[manager_agent, product_hunter_agent, product_investigator_agent, trivial_search_agent],
+             root_agent="manager_agent",
+             verbose=True,
+         )
+
+         # Initialize context
+         ctx = Context(workflow)
+         agent_system_initialized = True
+
+         print("βœ… Agent system initialized successfully!")
+         return True
+
+     except Exception as e:
+         print(f"❌ Error initializing agent system: {e}")
+         import traceback
+         traceback.print_exc()
+         return False
+
+ def extract_products_from_response(response_text: str) -> List[Dict]:
+     """Extract product information from the agent's response"""
+     products = []
+
+     # This is a simplified parser - you may need to adjust based on your agent's output format
+     lines = response_text.split('\n')
+     current_product = {}
+
+     for line in lines:
+         line = line.strip()
+
+         # Look for product patterns in the response
+         if 'product' in line.lower() or 'item' in line.lower():
+             # Try to extract product info using various patterns
+
+             # Pattern 1: Product Name: Something - $Price
+             if ':' in line and '$' in line:
+                 parts = line.split(':')
+                 if len(parts) >= 2:
+                     name_part = parts[1].strip()
+                     if '$' in name_part:
+                         name_price = name_part.split('$')
+                         if len(name_price) >= 2:
+                             name = name_price[0].strip().rstrip(' -')
+                             price = '$' + name_price[1].split()[0]
+
+                             product = {
+                                 "productName": name,
+                                 "productPrice": price,
+                                 "productLink": "https://example.com/product",  # Default link
+                                 "productImageLink": "https://via.placeholder.com/300x200/667eea/ffffff?text=" + name[:20].replace(' ', '+')
+                             }
+                             products.append(product)
+
+             # Pattern 2: Look for URLs in the response
+             elif 'http' in line:
+                 # Extract URL and try to get product name from context
+                 url_start = line.find('http')
+                 url_end = line.find(' ', url_start)
+                 if url_end == -1:
+                     url_end = len(line)
+                 url = line[url_start:url_end]
+
+                 # Try to find product name in the same line or nearby lines
+                 name = "Product"  # Default name
+                 price = "Price not available"
+
+                 if current_product:
+                     current_product["productLink"] = url
+                     products.append(current_product)
+                     current_product = {}
+
+             # Look for price patterns
+             elif '$' in line and any(keyword in line.lower() for keyword in ['price', 'cost', 'usd', 'dollar']):
+                 price_match = line.split('$')
+                 if len(price_match) > 1:
+                     price = '$' + price_match[1].split()[0]
+                     current_product["productPrice"] = price
+
+     # If no products found, create a default response
+     if not products:
+         products = [{
+             "productName": "Search Results Available",
+             "productPrice": "Various prices",
+             "productLink": "https://example.com/search",
+             "productImageLink": "https://via.placeholder.com/300x200/667eea/ffffff?text=Search+Results"
+         }]
+
+     return products
 
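The "Pattern 1" branch of this parser can be exercised on its own; a minimal sketch of the name/price split on a sample response line (the line itself is illustrative):

```python
line = "Product Name: ASUS TUF Gaming Laptop - $899.99 at example.com"

name, price = None, None
if ':' in line and '$' in line:
    parts = line.split(':')
    # Everything after the first colon: "ASUS TUF Gaming Laptop - $899.99 at example.com"
    name_part = parts[1].strip()
    name_price = name_part.split('$')
    # Trim the trailing " - " separator off the name
    name = name_price[0].strip().rstrip(' -')       # "ASUS TUF Gaming Laptop"
    # First whitespace-delimited token after the dollar sign
    price = '$' + name_price[1].split()[0]          # "$899.99"

print(name)   # β†’ ASUS TUF Gaming Laptop
print(price)  # β†’ $899.99
```

Anything that does not fit the `Name: ... $Price` shape falls through to the URL and price patterns, and ultimately to the default placeholder product, so the UI always has something to render.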
  def format_product_card(product: Dict) -> str:
      """Format a single product as HTML card"""
 
      </div>
      """
 
+ async def run_agent_workflow(message: str, history: List[Tuple[str, str]]) -> Tuple[str, str]:
+     """Run the actual agent workflow"""
+     global workflow, ctx
+
+     try:
+         # Initialize if not already done
+         if not agent_system_initialized:
+             initialized = await initialize_agent_system()
+             if not initialized:
+                 return "❌ Failed to initialize agent system. Please check your configuration.", format_products_html([])
+
+         print(f"🎯 Processing query: {message}")
+
+         # Run the workflow
+         handler = workflow.run(user_msg=message, ctx=ctx)
+
+         # Collect the response
+         response_parts = []
+         async for ev in handler.stream_events():
+             if isinstance(ev, AgentStream):
+                 delta = getattr(ev, "delta", "")
+                 if delta.strip():
+                     response_parts.append(delta)
+
+         # Get final response
+         final_response = await handler
+         full_response = str(final_response)
+
+         # Extract products from the response
+         products = extract_products_from_response(full_response)
+         products_html = format_products_html(products)
+
+         return full_response, products_html
+
+     except Exception as e:
+         error_msg = f"❌ Error running agent workflow: {str(e)}"
+         print(error_msg)
+         import traceback
+         traceback.print_exc()
+         return error_msg, format_products_html([])
 
  def chat_interface(message: str, history: List[Tuple[str, str]], products_state: str) -> Tuple[List[Tuple[str, str]], str, str]:
      """Handle chat interaction and update products"""
      if not message.strip():
          return history, products_state, ""
 
+     # Run the agent workflow
+     try:
+         ai_response, new_products_html = asyncio.run(run_agent_workflow(message, history))
+     except Exception as e:
+         ai_response = f"❌ Error processing request: {str(e)}"
+         new_products_html = format_products_html([])
 
      # Update chat history
      history.append((message, ai_response))
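The `chat_interface` callback is synchronous, so `asyncio.run` is what bridges it to the async workflow: it spins up a fresh event loop, drives the coroutine to completion, and tears the loop down. A minimal sketch of that bridge (the `fake_workflow` coroutine is a stand-in for the real agent workflow, which requires live services):

```python
import asyncio

async def fake_workflow(message: str) -> str:
    # Stand-in for the real async agent workflow
    await asyncio.sleep(0)
    return f"echo: {message}"

def sync_handler(message: str) -> str:
    # asyncio.run creates, runs, and closes its own event loop;
    # it raises RuntimeError if called while a loop is already running
    return asyncio.run(fake_workflow(message))

print(sync_handler("hi"))  # β†’ echo: hi
```

One caveat worth noting: if the hosting framework ever runs callbacks inside an existing event loop, `asyncio.run` will fail, and the callback would need to be declared `async` instead so the framework can await it directly.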
 
      color: var(--text-primary) !important;
  }
 
  .gradio-container .block {
      background: transparent !important;
      border: none !important;
  }
 
  .header-text {
      background: var(--accent-primary);
      color: white !important;
 
      backdrop-filter: blur(10px);
  }
 
  .products-panel {
      background: var(--bg-card) !important;
      border-radius: 20px !important;
 
      backdrop-filter: blur(10px) !important;
  }
 
  .chat-panel {
      background: var(--bg-chat) !important;
      border-radius: 20px !important;
 
      padding: 25px !important;
  }
 
  .chatbot {
      background: rgba(26, 32, 44, 0.6) !important;
      border-radius: 15px !important;
 
      backdrop-filter: blur(5px) !important;
  }
 
  .chatbot .message.user {
      background: var(--accent-primary) !important;
      color: white !important;
 
      border: 1px solid rgba(102, 126, 234, 0.3) !important;
  }
 
  .chatbot .message.bot {
      background: rgba(45, 55, 72, 0.8) !important;
      color: var(--text-primary) !important;
 
      backdrop-filter: blur(5px) !important;
  }
 
  .chatbot-input textarea {
      background: rgba(26, 32, 44, 0.8) !important;
      border: 2px solid var(--border-color) !important;
 
      background: rgba(26, 32, 44, 0.9) !important;
  }
 
  .chatbot-submit {
      background: var(--accent-primary) !important;
      border: none !important;
 
      filter: brightness(1.1) !important;
  }
 
  ::-webkit-scrollbar {
      width: 10px;
  }
 
      border: 2px solid rgba(26, 32, 44, 0.3);
  }
 
  .product-card:hover {
      transform: translateY(-3px) !important;
      box-shadow: 0 12px 30px rgba(102, 126, 234, 0.2) !important;
  }
 
  .products-panel h3,
  .products-panel p,
  .chat-panel h3 {
      color: var(--text-primary) !important;
  }
 
+ .gr-form,
+ .gr-box,
+ .gr-group,
+ .gr-textbox,
+ .gr-button,
  .chatbot-container {
      background: transparent !important;
  }
  """
 
  # Create the Gradio interface
 
  # Chat interface
  chatbot = gr.Chatbot(
+     value=[("", "πŸ‘‹ Hello! I'm your personal shopping assistant powered by advanced AI agents. Tell me what you're looking for and I'll help you find the perfect products. Whether it's electronics, clothing, home goods, or anything else - just describe your needs and budget!")],
      height=500,
      show_label=False,
      elem_classes=["chatbot"],