Spaces: Runtime error
Update app.py
app.py
CHANGED
@@ -160,7 +160,7 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
 
 travily_api_search_tool = get_travily_api_search_tool(tavily_api_key)
 #tools = [travily_api_search_tool, repl_tool, file_saver_tool,audio_transcriber_tool,wikipedia_search_tool,wikipedia_full_content_tool]
-tools = [ repl_tool, file_saver_tool,audio_transcriber_tool,
+tools = [ repl_tool, file_saver_tool,audio_transcriber_tool,travily_api_search_tool]
 
 # Pull a predefined prompt from LangChain Hub
 # "hwchase17/react-chat" is a prompt template designed for ReAct-style conversational agents.
@@ -193,8 +193,8 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
 
 
 IMPORTANT: When processing audio files (like .mp3) that have been saved using 'file_saver', the 'audio_transcriber_tool' MUST be used with the 'local_filename' of the saved audio file as its Action Input. Do NOT pass URLs or remote paths directly to 'audio_transcriber_tool'.
-
-
+For any incoming image files (e.g., .jpg, .png), it's crucial to download and save them locally using the 'file_saver' tool. Once the image is saved, you should then analyze its content and decide whether to utilize other available tools or your LLM to formulate a response. If you have sufficient information and can provide a CONCISE response, or if no tool is needed, you MUST use this precise format:
+
 if you can use a LLM to answer the question, think step-by-step and then answer the question.
 Example: given a chess board image and asked to predict the next best move, if Multi-modal LLM is available, you can use it to answer the question.
 
@@ -223,7 +223,7 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
 
 Example 3:
 Question: How many studio albums were published by Mercedes Sosa between 2000 and 2009 (included)? You can use the latest 2022 version of english wikipedia.
-Thought: The user is asking for specific information from Wikipedia, likely requiring a list or discography. The `
+Thought: The user is asking for specific information from Wikipedia, likely requiring a list or discography. The `travily_api_search_tool` is best for this to get the detailed section. After getting the content, I will need to parse it using `python_repl` to count the albums within the specified years.
 Action: serpapi_Google Search
 Action Input: Mercedes Sosa section: Discography
 Observation: [Discography text content]
@@ -254,9 +254,9 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
 """
 )
 #'''
-
-summary_memory = ConversationSummaryBufferMemory(llm=llm_client, memory_key="chat_history",
-                                                 max_token_limit=4000) # Adjust this value based on your observations and model's context window
+summary_memory = ConversationSummaryMemory(llm=llm_client, memory_key="chat_history")
+'''summary_memory = ConversationSummaryBufferMemory(llm=llm_client, memory_key="chat_history",
+                                                    max_token_limit=4000) # Adjust this value based on your observations and model's context window'''
 
 
 # Initialize gemini model with streaming enabled
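The memory change above swaps LangChain's `ConversationSummaryBufferMemory` (which keeps the most recent turns verbatim and only summarizes the overflow once `max_token_limit` is exceeded) for `ConversationSummaryMemory` (which folds every turn into a running summary). The sketch below illustrates that buffer-trimming trade-off in plain Python, without LangChain; `count_tokens` and `trim_buffer` are hypothetical helpers, and the whitespace-split token count is a stand-in for a real tokenizer.

```python
def count_tokens(text: str) -> int:
    # Naive stand-in for a real tokenizer, purely for illustration.
    return len(text.split())

def trim_buffer(messages: list[str], max_token_limit: int) -> tuple[list[str], list[str]]:
    """Split history into (overflow_to_summarize, kept_recent_messages).

    Mimics the summary-buffer idea: walk backwards so the newest
    messages are kept verbatim until the token budget is spent;
    everything older would be compressed into a summary.
    """
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):
        if total + count_tokens(msg) > max_token_limit:
            break
        kept.insert(0, msg)
        total += count_tokens(msg)
    overflow = messages[: len(messages) - len(kept)]
    return overflow, kept

history = ["hello there", "hi how can I help", "please transcribe my mp3 file"]
overflow, kept = trim_buffer(history, max_token_limit=8)
print(overflow, kept)
```

With a summary-only memory there is no `kept` buffer at all: every message goes to the summarizer LLM on each turn, which avoids the `max_token_limit` tuning but loses verbatim recent context and adds an LLM call per exchange.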