Thanh Vinh Vo committed
Commit · d6e8476
Parent(s): 09756ae
update
NOTES
CHANGED

@@ -2,3 +2,4 @@
 2 - Provide tool is better than prompt
 3 - Don't give the master any tool, since it will try to delegate smaller work to the code agent, miss context
 4 - Temperature to 0
+5 - BeautifulSoup too bad
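Note 5 ("BeautifulSoup too bad") and note 2 ("Provide tool is better than prompt") point at the same lesson: giving the agent `pandas.read_html` as a tool beats prompting it to hand-roll BeautifulSoup tag-walking. A minimal offline sketch of the difference (the table data is hypothetical; `pandas` with an HTML parser backend such as `lxml` is assumed to be installed):

```python
from io import StringIO

import pandas as pd

# A tiny stand-in for a Wikipedia HTML page (hypothetical data).
html = """
<table>
  <tr><th>Country</th><th>Population</th></tr>
  <tr><td>Vietnam</td><td>98000000</td></tr>
  <tr><td>Japan</td><td>125000000</td></tr>
</table>
"""

# read_html parses every <table> in the document straight into a
# DataFrame, inferring the header from <th> cells - no manual
# BeautifulSoup traversal needed.
tables = pd.read_html(StringIO(html))
df = tables[0]
print(df)
```

The agent then only has to index into a DataFrame instead of reasoning about HTML structure, which is the kind of work note 2 says should be moved from the prompt into a tool.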
app.py
CHANGED

@@ -240,7 +240,6 @@ class BasicAgent:
 1. Take the question literally! Do not add any additional information or assumptions.
 2. `wikipedia` Python library is provided that makes it easy to interact with Wikipedia pages.
 3. `pandas` Python package is provided that makes it easy to extract table data from Wikipedia HTML pages.
-4. Only use BeautifulSoup to parse HTML at last resort!
 """,
 verbosity_level=0,
 max_steps=10,

@@ -251,7 +250,7 @@ class BasicAgent:
 model_id="Qwen/Qwen2.5-32B-Instruct",
 temperature=0.0,
 ),
-tools=[get_file, audio_to_text],
+tools=[VisitWebpageTool(), GoogleSearchTool("serper"), get_file, audio_to_text, WikipediaSearchTool(), extract_table_from_html],
 managed_agents=[
 self.multimodal_agent,
 self.code_agent],

@@ -289,7 +288,6 @@ class BasicAgent:
 Please follow rules below:
 1. Take the question literally! Do not add any additional information or assumptions.
 2. `pandas` Python package is provided that makes it easy to extract table data from Wikipedia HTML pages.
-3. Only use BeautifulSoup to parse HTML at last resort!
 """
 result = self.manager_agent.run(prompt)
 print(f"Agent responded with: {result}")
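The second hunk registers an `extract_table_from_html` tool on the manager agent, but its implementation is outside this diff. A minimal sketch of what such a tool might look like, built on `pandas.read_html` as the NOTES suggest (the signature and the CSV return format are assumptions, not the actual code from app.py):

```python
from io import StringIO

import pandas as pd


def extract_table_from_html(html: str, table_index: int = 0) -> str:
    """Extract one <table> from an HTML document and return it as CSV text.

    Hypothetical sketch: the real tool in app.py is not shown in this
    diff, and a smolagents tool would additionally need the @tool
    decorator (plus an argument-describing docstring) so the agent
    can discover and call it.
    """
    tables = pd.read_html(StringIO(html))  # one DataFrame per <table>
    return tables[table_index].to_csv(index=False)


demo = "<table><tr><th>A</th><th>B</th></tr><tr><td>1</td><td>2</td></tr></table>"
print(extract_table_from_html(demo))
```

Returning CSV text (rather than a DataFrame) keeps the tool output easy for the model to read back into its context, which fits the pattern of the other text-returning tools (`get_file`, `audio_to_text`) in the list.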