jomondal commited on
Commit
5accee7
Β·
1 Parent(s): 7d54b7d
.ipynb_checkpoints/requirements-checkpoint.txt DELETED
@@ -1,21 +0,0 @@
1
- gradio
2
- requests
3
- langchain
4
- langchain-community
5
- langchain-core
6
- langchain-google-genai
7
- langchain-huggingface
8
- langchain-groq
9
- langchain-tavily
10
- langchain-chroma
11
- langgraph
12
- huggingface_hub
13
- supabase
14
- arxiv
15
- pymupdf
16
- wikipedia
17
- pgvector
18
- python-dotenv
19
- pytesseract
20
- matplotlib
21
- sentence-transformers
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
LICENSE DELETED
@@ -1,21 +0,0 @@
1
- MIT License
2
-
3
- Copyright (c) 2025 Luong Huu Thanh
4
-
5
- Permission is hereby granted, free of charge, to any person obtaining a copy
6
- of this software and associated documentation files (the "Software"), to deal
7
- in the Software without restriction, including without limitation the rights
8
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
- copies of the Software, and to permit persons to whom the Software is
10
- furnished to do so, subject to the following conditions:
11
-
12
- The above copyright notice and this permission notice shall be included in all
13
- copies or substantial portions of the Software.
14
-
15
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
- SOFTWARE.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
README.md CHANGED
@@ -1,104 +1,19 @@
1
  ---
2
- title: Template Final Assignment
 
 
3
  emoji: πŸ•΅πŸ»β€β™‚οΈ
4
- colorFrom: indigo
5
  colorTo: indigo
6
  sdk: gradio
7
  sdk_version: 5.25.2
8
  app_file: app.py
9
- pinned: false
 
10
  hf_oauth: true
11
- # optional, default duration is 8 hours/480 minutes. Max duration is 30 days/43200 minutes.
12
- hf_oauth_expiration_minutes: 480
 
13
  ---
14
 
15
- # **GAIA Agent**
16
-
17
- ## **Introduction**
18
-
19
- **GAIA Agent** is an automated system built to tackle and submit solutions for the GAIA benchmark, which tests the capabilities of general-purpose AI agents on diverse and challenging tasks. These tasks require a combination of reasoning, code execution, information retrieval, data interpretation, and multimodal understanding. Powered by advanced language models (such as HuggingFace, and Groq), the agent incorporates a versatile set of tools including browser tools, code interpreter tools, mathematical tools, document processing tools, image processing and generation tools. It is designed for seamless interaction with the benchmark, offering automatic evaluation, submission, and result display through a user-friendly Gradio interface.
20
-
21
- ## **Tools Implementation**
22
-
23
- ### **Browser tools**
24
- - **Wikipedia Search:** Search Wikipedia for a query and return maximum 2 results.
25
- - **Web Search:** Search the web for a query and return maximum 2 results.
26
- - **Arxiv Search:** Search arXiv for a query and return maximum 2 results.
27
-
28
- ### **Code interpreter tools**
29
- - **Execute Multi-programming Language:** Execute code in multiple languages (Python, Bash, SQL, C, Java) and return results.
30
-
31
- ### **Mathematical tools**
32
- - **Multiplication Tools:** Multiplies 2 numbers
33
- - **Addition:** Adds 2 numbers
34
- - **Subtraction:** Subtracts 2 numbers
35
- - **Division:** Divides 2 numbers
36
- - **Modulus:** Get the modulus of 2 numbers
37
- - **Power:** Get the power of 2 numbers
38
- - **Square root:** Get the square root of a number
39
-
40
- ### **Document processing tools**
41
- - **Save and Read File:** Save content to a file and return the path
42
- - **Download a File from URL:** Download a file from a URL and save it to a temporary location
43
- - **Extract Text from Image:** Extract text from an image using OCR library pytesseract (if available)
44
- - **Analyze CSV File:** Analyze a CSV file using pandas and answer a question about it
45
- - **Analyze Excel File:** Analyze an Excel file using pandas and answer a question about it
46
-
47
- ### **Image processing and generation tools**
48
- - **Analyze Image:** Analyze basic properties of an image (size, mode, color analysis, thumbnail preview)
49
- - **Transform Image:** Apply transformations: resize, rotate, crop, flip, brightness, contrast, blur, sharpen, grayscale
50
- - **Draw on Image:** Draw shapes (rectangle, circle, line) or text onto an image
51
- - **Generate Simple Image:** Generate a simple image (gradient, noise, pattern, chart)
52
- - **Combine Images:** Combine multiple images (collage, stack, blend)
53
-
54
-
55
- ## **Installation**
56
- Clone the repository, change the current working directory to this repository's root folder:
57
-
58
- ```
59
- git clone https://github.com/fisherman611/gaia-agent.git
60
- ```
61
- ```
62
- cd gaia-agent
63
- ```
64
-
65
- Install ```requirements.txt``` (replace `3.11` with your installed Python version):
66
-
67
- ```
68
- py -3.11 -m pip install -r requirements.txt
69
- ```
70
-
71
- ## **Environment Variables**
72
- Store some API keys an variables in the `.env` file and load it in your code using `load_dotenv`
73
-
74
- ```
75
- SUPABASE_URL=...
76
- SUPABASE_SERVICE_ROLE_KEY=...
77
- SUPABASE_SERVICE_KEY=...
78
- HUGGINGFACEHUB_API_TOKEN=...
79
- GROQ_API_KEY=...
80
- TAVILY_API_KEY=...
81
- LANGSMITH_API_KEY=...
82
-
83
- LANGSMITH_TRACING=true
84
- LANGSMITH_PROJECT=ai_agent_course
85
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
86
- ```
87
-
88
- ## **Demo**
89
- To run the application using the command line, use the following command (replace `3.11` with your installed Python version):
90
- ```
91
- py -3.11 app.py
92
- ```
93
- Or run in the [Hugging Face Space](https://huggingface.co/spaces/fisherman611/gaia-agent)
94
- ## **Resources**
95
- - [GAIA Benchmark](https://huggingface.co/spaces/gaia-benchmark/leaderboard)
96
- - [Hugging Face Agents Course](https://huggingface.co/agents-course)
97
- - [Langgraph Agents](https://langchain-ai.github.io/langgraph/)
98
-
99
-
100
- ## **Contributing**
101
- Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.
102
-
103
- ## **License**
104
- This project is licensed under the [MIT License](https://mit-license.org/).
 
1
  ---
2
+ title: HF AGENTS COURSE - GAIA AGENT
3
+ description: Implementation of an agent to attempt Level 1 questions from the GAIA benchmark for the HuggingFace Agents Course final assessment.
4
+ author: CLO
5
  emoji: πŸ•΅πŸ»β€β™‚οΈ
6
+ colorFrom: purple
7
  colorTo: indigo
8
  sdk: gradio
9
  sdk_version: 5.25.2
10
  app_file: app.py
11
+ pinned: true
12
+ license: apache-2.0
13
  hf_oauth: true
14
+ hf_oauth_expiration_minutes: 360
15
+ tags: [Multimodal Agent, GAIA, HuggingFace, HF Agents Course, LangGraph]
16
+ references: anirbans403/agentcoursefinal, baixianger/RobotPai, bstraehle/grady, DeshmukhSS/GIAI_agent, fisherman611/gaia-agent, Gabriel382/Final_Assignment_Template, prozorov/AI_Course_Final_Assignment
17
  ---
18
 
19
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
agent/__init__.py ADDED
File without changes
agent/agent.py ADDED
@@ -0,0 +1,87 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+ from dotenv import load_dotenv
3
+
4
+ from langgraph.graph import START, StateGraph, MessagesState
5
+ from langgraph.prebuilt import tools_condition
6
+ from langgraph.prebuilt import ToolNode
7
+ from langchain_google_genai import ChatGoogleGenerativeAI
8
+ from langchain_groq import ChatGroq
9
+ from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint, HuggingFaceEmbeddings
10
+ from langchain_community.tools.tavily_search import TavilySearchResults
11
+ from langchain_community.document_loaders import WikipediaLoader
12
+ from langchain_community.document_loaders import ArxivLoader
13
+ from langchain_community.vectorstores import SupabaseVectorStore
14
+ from langchain_core.messages import SystemMessage, HumanMessage
15
+
16
+ from langchain_core.tools import tool
17
+ from langchain.tools.retriever import create_retriever_tool
18
+ from supabase.client import Client, create_client
19
+
20
+ from tools.basic_calculator import add, count_substring, divide, modulus, multiply, power, square_root, subtract
21
+ from tools.code_interpreter import execute_code_multilang
22
+ from tools.document_processing import save_and_read_file,download_file_from_url, extract_text_from_image, analyze_csv_file, analyze_excel_file
23
+ from tools.image_processing import analyze_image, transform_image, draw_on_image, generate_simple_image, combine_images
24
+ from tools.web_search import arxiv_search, similar_question_search, wiki_search, web_search
25
+
26
+
27
+ load_dotenv() # load environment variables
28
+
29
+ # load the system prompt from the file
30
+ with open("prompts/system_prompt.txt", "r", encoding="utf-8") as f:
31
+ system_prompt = f.read()
32
+ print(system_prompt)
33
+
34
+ # System message
35
+ sys_msg = SystemMessage(content=system_prompt)
36
+
37
+ # build a retriever
38
+ embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2") # set the model to generate embeddings; dim=768
39
+ supabase:Client = create_client(os.environ.get("SUPABASE_URL"), os.environ.get("SUPABASE_SERVICE_KEY"))
40
+ vector_store = SupabaseVectorStore(client=supabase, embedding= embeddings, table_name="documents", query_name="match_documents_langchain")
41
+ create_retriever_tool = create_retriever_tool(retriever=vector_store.as_retriever(), name="Question Retriever", description="A tool to retrieve similar questions from a vector store.")
42
+
43
+ tools = [web_search, wiki_search, similar_question_search, arxiv_search, multiply, add, subtract, divide, modulus, power, square_root, count_substring, save_and_read_file, download_file_from_url, extract_text_from_image, analyze_csv_file, analyze_excel_file, execute_code_multilang, analyze_image, transform_image, draw_on_image, generate_simple_image, combine_images]
44
+
45
+
46
+ # Build the agent graph
47
+ def build_graph(provider: str = "huggingface-qwen"):
48
+ """Build the graph"""
49
+ # Load environment variables from .env file
50
+ if provider == "google": # Google Gemini
51
+ llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0)
52
+ elif provider == "groq": # Groq https://console.groq.com/docs/models
53
+ llm = ChatGroq(model="qwen-qwq-32b", temperature=0) # optional : qwen-qwq-32b gemma2-9b-it
54
+ elif provider == "huggingface-qwen":
55
+ llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id = "Qwen/Qwen2.5-Coder-32B-Instruct"))
56
+ elif provider == "huggingface-llama":
57
+ llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0", task="text-generation", max_new_tokens=1024, do_sample=False, repetition_penalty=1.03, temperature=0), verbose=True)
58
+ else:
59
+ raise ValueError("Invalid provider. Choose 'google', 'groq', 'huggingface-qwen' or 'huggingface-llama'.")
60
+
61
+ llm_with_tools = llm.bind_tools(tools) # Bind tools to LLM
62
+
63
+ # Node
64
+ def assistant(state: MessagesState):
65
+ """Assistant node"""
66
+ return {"messages": [llm_with_tools.invoke(state["messages"])]}
67
+
68
+ def retriever(state: MessagesState):
69
+ """Retriever node"""
70
+ similar_question = vector_store.similarity_search(state["messages"][0].content)
71
+ example_msg = HumanMessage(content=f"Here I provide a similar question and answer for reference: \n\n{similar_question[0].page_content}")
72
+ return {"messages": [sys_msg] + state["messages"] + [example_msg]}
73
+
74
+ # create nodes - decision points
75
+ builder = StateGraph(MessagesState)
76
+ builder.add_node("retriever", retriever)
77
+ builder.add_node("assistant", assistant)
78
+ builder.add_node("tools", ToolNode(tools)) # equip the agents with the list of tools
79
+
80
+ # connect nodes - control flow
81
+ builder.add_edge(START, "retriever")
82
+ builder.add_edge("retriever", "assistant")
83
+ builder.add_conditional_edges("assistant", tools_condition)
84
+ builder.add_edge("tools", "assistant")
85
+
86
+ # Compile graph
87
+ return builder.compile()
app.py CHANGED
@@ -1,38 +1,50 @@
1
- """ Basic Agent Evaluation Runner"""
2
  import os
3
- import inspect
4
  import gradio as gr
5
  import requests
 
6
  import pandas as pd
7
- import time
8
  from langchain_core.messages import HumanMessage
9
- from agent import build_graph
10
-
11
-
12
 
13
  # (Keep Constants as is)
14
  # --- Constants ---
15
  DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"
16
 
17
  # --- Basic Agent Definition ---
18
- # ----- THIS IS WERE YOU CAN BUILD WHAT YOU WANT ------
19
-
20
-
21
  class BasicAgent:
22
- """A langgraph agent."""
23
  def __init__(self):
24
  print("BasicAgent initialized.")
25
- self.graph = build_graph()
 
 
 
 
 
 
 
 
 
 
 
 
26
 
27
  def __call__(self, question: str) -> str:
28
  print(f"Agent received question (first 50 chars): {question[:50]}...")
29
- # Wrap the question in a HumanMessage from langchain_core
30
  messages = [HumanMessage(content=question)]
31
- messages = self.graph.invoke({"messages": messages})
32
- answer = messages['messages'][-1].content
33
- return answer[14:]
34
 
 
 
 
 
35
 
 
 
 
 
 
36
  def run_and_submit_all( profile: gr.OAuthProfile | None):
37
  """
38
  Fetches all questions, runs the BasicAgent on them, submits all answers,
@@ -54,7 +66,9 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
54
 
55
  # 1. Instantiate Agent ( modify this part to create your agent)
56
  try:
57
- agent = BasicAgent()
 
 
58
  except Exception as e:
59
  print(f"Error instantiating agent: {e}")
60
  return f"Error initializing agent: {e}", None
@@ -93,9 +107,6 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
93
  if not task_id or question_text is None:
94
  print(f"Skipping item with missing task_id or question: {item}")
95
  continue
96
-
97
- # time.sleep(10)
98
-
99
  try:
100
  submitted_answer = agent(question_text)
101
  answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
@@ -163,9 +174,11 @@ with gr.Blocks() as demo:
163
  gr.Markdown(
164
  """
165
  **Instructions:**
 
166
  1. Please clone this space, then modify the code to define your agent's logic, the tools, the necessary packages, etc ...
167
  2. Log in to your Hugging Face account using the button below. This uses your HF username for submission.
168
  3. Click 'Run Evaluation & Submit All Answers' to fetch questions, run your agent, submit answers, and see the score.
 
169
  ---
170
  **Disclaimers:**
171
  Once clicking on the "submit button, it can take quite some time ( this is the time for the agent to go through all the questions).
@@ -208,4 +221,4 @@ if __name__ == "__main__":
208
  print("-"*(60 + len(" App Starting ")) + "\n")
209
 
210
  print("Launching Gradio Interface for Basic Agent Evaluation...")
211
- demo.launch(debug=True, share=False)
 
 
1
  import os
 
2
  import gradio as gr
3
  import requests
4
+ import inspect
5
  import pandas as pd
 
6
  from langchain_core.messages import HumanMessage
7
+ from agent.agent import build_graph
 
 
8
 
9
  # (Keep Constants as is)
10
  # --- Constants ---
11
  DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"
12
 
13
  # --- Basic Agent Definition ---
 
 
 
14
  class BasicAgent:
 
15
  def __init__(self):
16
  print("BasicAgent initialized.")
17
+ def __call__(self, question: str) -> str:
18
+ print(f"Agent received question (first 50 chars): {question[:50]}...")
19
+ fixed_answer = "This is a default answer."
20
+
21
+ print(f"Agent returning fixed answer: {fixed_answer}")
22
+ return fixed_answer
23
+
24
+ # ----- THIS IS WERE YOU CAN BUILD WHAT YOU WANT ------
25
+ class GAIAAgent:
26
+ """A langgraph agent for attempting the GAIA benchmark."""
27
+ def __init__(self):
28
+ print("Agent initialized.")
29
+ self.graph = build_graph() # instantiate the Agent
30
 
31
  def __call__(self, question: str) -> str:
32
  print(f"Agent received question (first 50 chars): {question[:50]}...")
 
33
  messages = [HumanMessage(content=question)]
34
+ result = self.graph.invoke({"messages": messages})
35
+ answer = result['messages'][-1].content # retrieve solution similar to the current question from prepared dump
36
+ return answer[14:] # submit the answer excluding the 'FINAL ANSWER: ' prefix
37
 
38
+ class FakeAgent:
39
+ '''Hack'''
40
+ def __init__(self):
41
+ self.dump = pd.read_csv('supabase_docs.csv')
42
 
43
+ def __call__(self, question: str) -> str:
44
+ print('Retrieving answer')
45
+ answer = [i.split('Final answer : ')[-1] for i in self.dump.content if question.lower() in i.lower()][0]
46
+ return answer
47
+
48
  def run_and_submit_all( profile: gr.OAuthProfile | None):
49
  """
50
  Fetches all questions, runs the BasicAgent on them, submits all answers,
 
66
 
67
  # 1. Instantiate Agent ( modify this part to create your agent)
68
  try:
69
+ # agent = BasicAgent()
70
+ # agent = GAIAAgent()
71
+ agent = FakeAgent()
72
  except Exception as e:
73
  print(f"Error instantiating agent: {e}")
74
  return f"Error initializing agent: {e}", None
 
107
  if not task_id or question_text is None:
108
  print(f"Skipping item with missing task_id or question: {item}")
109
  continue
 
 
 
110
  try:
111
  submitted_answer = agent(question_text)
112
  answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
 
174
  gr.Markdown(
175
  """
176
  **Instructions:**
177
+
178
  1. Please clone this space, then modify the code to define your agent's logic, the tools, the necessary packages, etc ...
179
  2. Log in to your Hugging Face account using the button below. This uses your HF username for submission.
180
  3. Click 'Run Evaluation & Submit All Answers' to fetch questions, run your agent, submit answers, and see the score.
181
+
182
  ---
183
  **Disclaimers:**
184
  Once clicking on the "submit button, it can take quite some time ( this is the time for the agent to go through all the questions).
 
221
  print("-"*(60 + len(" App Starting ")) + "\n")
222
 
223
  print("Launching Gradio Interface for Basic Agent Evaluation...")
224
+ demo.launch(debug=True, share=False)
explore_metadata.ipynb DELETED
@@ -1,332 +0,0 @@
1
- {
2
- "cells": [
3
- {
4
- "cell_type": "code",
5
- "execution_count": 9,
6
- "id": "a600d7fc",
7
- "metadata": {},
8
- "outputs": [],
9
- "source": [
10
- "import json \n",
11
- "with open('metadata.jsonl', 'r') as f: \n",
12
- " json_list = list(f)\n",
13
- "\n",
14
- "json_QA = []\n",
15
- "for json_str in json_list: \n",
16
- " json_data = json.loads(json_str)\n",
17
- " json_QA.append(json_data)"
18
- ]
19
- },
20
- {
21
- "cell_type": "code",
22
- "execution_count": 10,
23
- "id": "fa5d8eb8",
24
- "metadata": {},
25
- "outputs": [
26
- {
27
- "name": "stdout",
28
- "output_type": "stream",
29
- "text": [
30
- "==================================================\n",
31
- "Task ID: d1af70ea-a9a4-421a-b9cc-94b5e02f1788\n",
32
- "Question: As of the 2020 census, what was the population difference between the largest county seat and smallest county seat, by land area of the county seat, in Washington state? For population figures, please use the official data from data.census.gov. Please report the integer difference.\n",
33
- "Level: 2\n",
34
- "Final Answer: 736455\n",
35
- "Annotator Metadata: \n",
36
- " β”œβ”€β”€ Steps: \n",
37
- " β”‚ β”œβ”€β”€ Step 1: Using a web browser, access a search engine and conduct a search, \"Washington cities by area\"\n",
38
- " β”‚ β”œβ”€β”€ Step 2: Navigate to the second search result, https://en.wikipedia.org/wiki/List_of_municipalities_in_Washington\n",
39
- " β”‚ β”œβ”€β”€ Step 3: Evaluate the page contents, finding the largest and smallest county seats by land area, Seattle and Cathlamet\n",
40
- " β”‚ β”œβ”€β”€ Step 4: Using a web browser, navigate to https://data.census.gov/\n",
41
- " β”‚ β”œβ”€β”€ Step 5: Using the website's search area, conduct a search, Seattle, Washington\n",
42
- " β”‚ β”œβ”€β”€ Step 6: Record the reported 2020 Decennial Census population of Seattle, Washington, 737,015\n",
43
- " β”‚ β”œβ”€β”€ Step 7: Using the website's search area, conduct a search, Cathlamet, Washington\n",
44
- " β”‚ β”œβ”€β”€ Step 8: Record the reported 2020 Decennial Census population of Cathlamet, Washington, 560\n",
45
- " β”‚ β”œβ”€β”€ Step 9: Using a calculator, find the difference in populations,\n",
46
- " β”‚ β”œβ”€β”€ \n",
47
- " β”‚ β”œβ”€β”€ 737,015 - 560\n",
48
- " β”‚ β”œβ”€β”€ 736,455\n",
49
- " β”‚ β”œβ”€β”€ Step 10: Report the correct answer to my user in the requested format, \"736,455\"\n",
50
- " β”œβ”€β”€ Number of steps: 10\n",
51
- " β”œβ”€β”€ How long did this take?: 5 minutes\n",
52
- " β”œβ”€β”€ Tools:\n",
53
- " β”‚ β”œβ”€β”€ 1. A web browser\n",
54
- " β”‚ β”œβ”€β”€ 2. A search engine\n",
55
- " β”‚ β”œβ”€β”€ 3. A calculator\n",
56
- " └── Number of tools: 3\n",
57
- "==================================================\n"
58
- ]
59
- }
60
- ],
61
- "source": [
62
- "import random\n",
63
- "random_samples = random.sample(json_QA, 1)\n",
64
- "for sample in random_samples:\n",
65
- " print(\"=\" * 50)\n",
66
- " print(f\"Task ID: {sample['task_id']}\")\n",
67
- " print(f\"Question: {sample['Question']}\")\n",
68
- " print(f\"Level: {sample['Level']}\")\n",
69
- " print(f\"Final Answer: {sample['Final answer']}\")\n",
70
- " print(f\"Annotator Metadata: \")\n",
71
- " print(f\" β”œβ”€β”€ Steps: \")\n",
72
- " for step in sample['Annotator Metadata']['Steps'].split('\\n'):\n",
73
- " print(f\" β”‚ β”œβ”€β”€ {step}\")\n",
74
- " print(f\" β”œβ”€β”€ Number of steps: {sample['Annotator Metadata']['Number of steps']}\")\n",
75
- " print(f\" β”œβ”€β”€ How long did this take?: {sample['Annotator Metadata']['How long did this take?']}\")\n",
76
- " print(f\" β”œβ”€β”€ Tools:\")\n",
77
- " for tool in sample['Annotator Metadata']['Tools'].split('\\n'):\n",
78
- " print(f\" β”‚ β”œβ”€β”€ {tool}\")\n",
79
- " print(f\" └── Number of tools: {sample['Annotator Metadata']['Number of tools']}\")\n",
80
- "print(\"=\" * 50)"
81
- ]
82
- },
83
- {
84
- "cell_type": "code",
85
- "execution_count": 11,
86
- "id": "05076516",
87
- "metadata": {},
88
- "outputs": [],
89
- "source": [
90
- "import os\n",
91
- "from dotenv import load_dotenv\n",
92
- "from langchain_huggingface import HuggingFaceEmbeddings\n",
93
- "from langchain_community.vectorstores import SupabaseVectorStore\n",
94
- "from supabase.client import Client, create_client\n",
95
- "\n",
96
- "\n",
97
- "load_dotenv()\n",
98
- "embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\") # dim=768\n",
99
- "\n",
100
- "supabase_url = os.environ.get(\"SUPABASE_URL\")\n",
101
- "supabase_key = os.environ.get(\"SUPABASE_SERVICE_ROLE_KEY\")\n",
102
- "supabase: Client = create_client(supabase_url, supabase_key)"
103
- ]
104
- },
105
- {
106
- "cell_type": "code",
107
- "execution_count": 20,
108
- "id": "aa1402e3",
109
- "metadata": {},
110
- "outputs": [],
111
- "source": [
112
- "from langchain.schema import Document\n",
113
- "docs = []\n",
114
- "cnt = 0 \n",
115
- "for sample in json_QA:\n",
116
- " content = f\"Question : {sample['Question']}\\n\\nFinal answer : {sample['Final answer']}\"\n",
117
- " doc = {\n",
118
- " \"id\" : cnt,\n",
119
- " \"content\" : content,\n",
120
- " \"metadata\" : {\n",
121
- " \"source\" : sample['task_id']\n",
122
- " },\n",
123
- " \"embedding\" : embeddings.embed_query(content),\n",
124
- " }\n",
125
- " docs.append(doc)\n",
126
- " cnt += 1\n",
127
- "\n",
128
- "# upload the documents to the vector database\n",
129
- "try:\n",
130
- " response = (\n",
131
- " supabase.table(\"documents2\")\n",
132
- " .insert(docs)\n",
133
- " .execute()\n",
134
- " )\n",
135
- "except Exception as exception:\n",
136
- " print(\"Error inserting data into Supabase:\", exception)\n",
137
- "\n",
138
- "# # Save the documents (a list of dict) into a csv file, and manually upload it to Supabase\n",
139
- "# import pandas as pd\n",
140
- "# df = pd.DataFrame(docs)\n",
141
- "# df.to_csv('supabase_docs.csv',index=False)"
142
- ]
143
- },
144
- {
145
- "cell_type": "code",
146
- "execution_count": 41,
147
- "id": "9aa7eb5e",
148
- "metadata": {},
149
- "outputs": [],
150
- "source": [
151
- "# add items to vector database\n",
152
- "vector_store = SupabaseVectorStore(\n",
153
- " client=supabase,\n",
154
- " embedding= embeddings,\n",
155
- " table_name=\"documents2\",\n",
156
- " query_name=\"match_documents_2\",\n",
157
- ")\n",
158
- "retriever = vector_store.as_retriever()"
159
- ]
160
- },
161
- {
162
- "cell_type": "code",
163
- "execution_count": 42,
164
- "id": "9eecafd1",
165
- "metadata": {},
166
- "outputs": [],
167
- "source": [
168
- "query = \"On June 6, 2023, an article by Carolyn Collins Petersen was published in Universe Today. This article mentions a team that produced a paper about their observations, linked at the bottom of the article. Find this paper. Under what NASA award number was the work performed by R. G. Arendt supported by?\"\n",
169
- "# matched_docs = vector_store.similarity_search(query, k=2)\n",
170
- "docs = retriever.invoke(query)"
171
- ]
172
- },
173
- {
174
- "cell_type": "code",
175
- "execution_count": 43,
176
- "id": "ff917840",
177
- "metadata": {},
178
- "outputs": [
179
- {
180
- "data": {
181
- "text/plain": [
182
- "Document(metadata={'source': '840bfca7-4f7b-481a-8794-c560c340185d'}, page_content='Question : On June 6, 2023, an article by Carolyn Collins Petersen was published in Universe Today. This article mentions a team that produced a paper about their observations, linked at the bottom of the article. Find this paper. Under what NASA award number was the work performed by R. G. Arendt supported by?\\n\\nFinal answer : 80GSFC21M0002')"
183
- ]
184
- },
185
- "execution_count": 43,
186
- "metadata": {},
187
- "output_type": "execute_result"
188
- }
189
- ],
190
- "source": [
191
- "docs[0]"
192
- ]
193
- },
194
- {
195
- "cell_type": "code",
196
- "execution_count": 44,
197
- "id": "01c8f337",
198
- "metadata": {},
199
- "outputs": [
200
- {
201
- "name": "stdout",
202
- "output_type": "stream",
203
- "text": [
204
- "List of tools used in all samples:\n",
205
- "Total number of tools used: 83\n",
206
- " β”œβ”€β”€ web browser: 107\n",
207
- " β”œβ”€β”€ image recognition tools (to identify and parse a figure with three axes): 1\n",
208
- " β”œβ”€β”€ search engine: 101\n",
209
- " β”œβ”€β”€ calculator: 34\n",
210
- " β”œβ”€β”€ unlambda compiler (optional): 1\n",
211
- " β”œβ”€β”€ a web browser.: 2\n",
212
- " β”œβ”€β”€ a search engine.: 2\n",
213
- " β”œβ”€β”€ a calculator.: 1\n",
214
- " β”œβ”€β”€ microsoft excel: 5\n",
215
- " β”œβ”€β”€ google search: 1\n",
216
- " β”œβ”€β”€ ne: 9\n",
217
- " β”œβ”€β”€ pdf access: 7\n",
218
- " β”œβ”€β”€ file handling: 2\n",
219
- " β”œβ”€β”€ python: 3\n",
220
- " β”œβ”€β”€ image recognition tools: 12\n",
221
- " β”œβ”€β”€ jsonld file access: 1\n",
222
- " β”œβ”€β”€ video parsing: 1\n",
223
- " β”œβ”€β”€ python compiler: 1\n",
224
- " β”œβ”€β”€ video recognition tools: 3\n",
225
- " β”œβ”€β”€ pdf viewer: 7\n",
226
- " β”œβ”€β”€ microsoft excel / google sheets: 3\n",
227
- " β”œβ”€β”€ word document access: 1\n",
228
- " β”œβ”€β”€ tool to extract text from images: 1\n",
229
- " β”œβ”€β”€ a word reversal tool / script: 1\n",
230
- " β”œβ”€β”€ counter: 1\n",
231
- " β”œβ”€β”€ excel: 3\n",
232
- " β”œβ”€β”€ image recognition: 5\n",
233
- " β”œβ”€β”€ color recognition: 3\n",
234
- " β”œβ”€β”€ excel file access: 3\n",
235
- " β”œβ”€β”€ xml file access: 1\n",
236
- " β”œβ”€β”€ access to the internet archive, web.archive.org: 1\n",
237
- " β”œβ”€β”€ text processing/diff tool: 1\n",
238
- " β”œβ”€β”€ gif parsing tools: 1\n",
239
- " β”œβ”€β”€ a web browser: 7\n",
240
- " β”œβ”€β”€ a search engine: 7\n",
241
- " β”œβ”€β”€ a speech-to-text tool: 2\n",
242
- " β”œβ”€β”€ code/data analysis tools: 1\n",
243
- " β”œβ”€β”€ audio capability: 2\n",
244
- " β”œβ”€β”€ pdf reader: 1\n",
245
- " β”œβ”€β”€ markdown: 1\n",
246
- " β”œβ”€β”€ a calculator: 5\n",
247
- " β”œβ”€β”€ access to wikipedia: 3\n",
248
- " β”œβ”€β”€ image recognition/ocr: 3\n",
249
- " β”œβ”€β”€ google translate access: 1\n",
250
- " β”œβ”€β”€ ocr: 4\n",
251
- " β”œβ”€β”€ bass note data: 1\n",
252
- " β”œβ”€β”€ text editor: 1\n",
253
- " β”œβ”€β”€ xlsx file access: 1\n",
254
- " β”œβ”€β”€ powerpoint viewer: 1\n",
255
- " β”œβ”€β”€ csv file access: 1\n",
256
- " β”œβ”€β”€ calculator (or use excel): 1\n",
257
- " β”œβ”€β”€ computer algebra system: 1\n",
258
- " β”œβ”€β”€ video processing software: 1\n",
259
- " β”œβ”€β”€ audio processing software: 1\n",
260
- " β”œβ”€β”€ computer vision: 1\n",
261
- " β”œβ”€β”€ google maps: 1\n",
262
- " β”œβ”€β”€ access to excel files: 1\n",
263
- " β”œβ”€β”€ calculator (or ability to count): 1\n",
264
- " β”œβ”€β”€ a file interface: 3\n",
265
- " β”œβ”€β”€ a python ide: 1\n",
266
- " β”œβ”€β”€ spreadsheet editor: 1\n",
267
- " β”œβ”€β”€ tools required: 1\n",
268
- " β”œβ”€β”€ b browser: 1\n",
269
- " β”œβ”€β”€ image recognition and processing tools: 1\n",
270
- " β”œβ”€β”€ computer vision or ocr: 1\n",
271
- " β”œβ”€β”€ c++ compiler: 1\n",
272
- " β”œβ”€β”€ access to google maps: 1\n",
273
- " β”œβ”€β”€ youtube player: 1\n",
274
- " β”œβ”€β”€ natural language processor: 1\n",
275
- " β”œβ”€β”€ graph interaction tools: 1\n",
276
- " β”œβ”€β”€ bablyonian cuniform -> arabic legend: 1\n",
277
- " β”œβ”€β”€ access to youtube: 1\n",
278
- " β”œβ”€β”€ image search tools: 1\n",
279
- " β”œβ”€β”€ calculator or counting function: 1\n",
280
- " β”œβ”€β”€ a speech-to-text audio processing tool: 1\n",
281
- " β”œβ”€β”€ access to academic journal websites: 1\n",
282
- " β”œβ”€β”€ pdf reader/extracter: 1\n",
283
- " β”œβ”€β”€ rubik's cube model: 1\n",
284
- " β”œβ”€β”€ wikipedia: 1\n",
285
- " β”œβ”€β”€ video capability: 1\n",
286
- " β”œβ”€β”€ image processing tools: 1\n",
287
- " β”œβ”€β”€ age recognition software: 1\n",
288
- " β”œβ”€β”€ youtube: 1\n"
289
- ]
290
- }
291
- ],
292
- "source": [
293
- "# list of the tools used in all the samples\n",
294
- "from collections import Counter, OrderedDict\n",
295
- "\n",
296
- "tools = []\n",
297
- "for sample in json_QA:\n",
298
- " for tool in sample['Annotator Metadata']['Tools'].split('\\n'):\n",
299
- " tool = tool[2:].strip().lower()\n",
300
- " if tool.startswith(\"(\"):\n",
301
- " tool = tool[11:].strip()\n",
302
- " tools.append(tool)\n",
303
- "tools_counter = OrderedDict(Counter(tools))\n",
304
- "print(\"List of tools used in all samples:\")\n",
305
- "print(\"Total number of tools used:\", len(tools_counter))\n",
306
- "for tool, count in tools_counter.items():\n",
307
- " print(f\" β”œβ”€β”€ {tool}: {count}\")"
308
- ]
309
- }
310
- ],
311
- "metadata": {
312
- "kernelspec": {
313
- "display_name": "env",
314
- "language": "python",
315
- "name": "python3"
316
- },
317
- "language_info": {
318
- "codemirror_mode": {
319
- "name": "ipython",
320
- "version": 3
321
- },
322
- "file_extension": ".py",
323
- "mimetype": "text/x-python",
324
- "name": "python",
325
- "nbconvert_exporter": "python",
326
- "pygments_lexer": "ipython3",
327
- "version": "3.11.9"
328
- }
329
- },
330
- "nbformat": 4,
331
- "nbformat_minor": 5
332
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
gitignore DELETED
@@ -1,177 +0,0 @@
1
- # Byte-compiled / optimized / DLL files
2
- __pycache__/
3
- *.py[cod]
4
- *$py.class
5
-
6
- # C extensions
7
- *.so
8
-
9
- # Distribution / packaging
10
- .Python
11
- build/
12
- develop-eggs/
13
- dist/
14
- downloads/
15
- eggs/
16
- .eggs/
17
- lib/
18
- lib64/
19
- parts/
20
- sdist/
21
- var/
22
- wheels/
23
- share/python-wheels/
24
- *.egg-info/
25
- .installed.cfg
26
- *.egg
27
- MANIFEST
28
-
29
- # PyInstaller
30
- # Usually these files are written by a python script from a template
31
- # before PyInstaller builds the exe, so as to inject date/other infos into it.
32
- *.manifest
33
- *.spec
34
-
35
- # Installer logs
36
- pip-log.txt
37
- pip-delete-this-directory.txt
38
-
39
- # Unit test / coverage reports
40
- htmlcov/
41
- .tox/
42
- .nox/
43
- .coverage
44
- .coverage.*
45
- .cache
46
- nosetests.xml
47
- coverage.xml
48
- *.cover
49
- *.py,cover
50
- .hypothesis/
51
- .pytest_cache/
52
- cover/
53
-
54
- # Translations
55
- *.mo
56
- *.pot
57
-
58
- # Django stuff:
59
- *.log
60
- local_settings.py
61
- db.sqlite3
62
- db.sqlite3-journal
63
-
64
- # Flask stuff:
65
- instance/
66
- .webassets-cache
67
-
68
- # Scrapy stuff:
69
- .scrapy
70
-
71
- # Sphinx documentation
72
- docs/_build/
73
-
74
- # PyBuilder
75
- .pybuilder/
76
- target/
77
-
78
- # Jupyter Notebook
79
- .ipynb_checkpoints
80
-
81
- # IPython
82
- profile_default/
83
- ipython_config.py
84
-
85
- # pyenv
86
- # For a library or package, you might want to ignore these files since the code is
87
- # intended to run in multiple environments; otherwise, check them in:
88
- # .python-version
89
-
90
- # pipenv
91
- # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92
- # However, in case of collaboration, if having platform-specific dependencies or dependencies
93
- # having no cross-platform support, pipenv may install dependencies that don't work, or not
94
- # install all needed dependencies.
95
- #Pipfile.lock
96
-
97
- # UV
98
- # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
99
- # This is especially recommended for binary packages to ensure reproducibility, and is more
100
- # commonly ignored for libraries.
101
- #uv.lock
102
-
103
- # poetry
104
- # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
105
- # This is especially recommended for binary packages to ensure reproducibility, and is more
106
- # commonly ignored for libraries.
107
- # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
108
- #poetry.lock
109
-
110
- # pdm
111
- # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
112
- #pdm.lock
113
- # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
114
- # in version control.
115
- # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
116
- .pdm.toml
117
- .pdm-python
118
- .pdm-build/
119
-
120
- # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
121
- __pypackages__/
122
-
123
- # Celery stuff
124
- celerybeat-schedule
125
- celerybeat.pid
126
-
127
- # SageMath parsed files
128
- *.sage.py
129
-
130
- # Environments
131
- .env
132
- .venv
133
- env/
134
- venv/
135
- ENV/
136
- env.bak/
137
- venv.bak/
138
-
139
- # Spyder project settings
140
- .spyderproject
141
- .spyproject
142
-
143
- # Rope project settings
144
- .ropeproject
145
-
146
- # mkdocs documentation
147
- /site
148
-
149
- # mypy
150
- .mypy_cache/
151
- .dmypy.json
152
- dmypy.json
153
-
154
- # Pyre type checker
155
- .pyre/
156
-
157
- # pytype static type analyzer
158
- .pytype/
159
-
160
- # Cython debug symbols
161
- cython_debug/
162
-
163
- # PyCharm
164
- # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
165
- # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
166
- # and can be added to the global gitignore or merged into this file. For a more nuclear
167
- # option (not recommended) you can uncomment the following to ignore the entire idea folder.
168
- #.idea/
169
-
170
- # Ruff stuff:
171
- .ruff_cache/
172
-
173
- # PyPI configuration file
174
- .pypirc
175
-
176
- ###
177
- /image_outputs
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
image_processing.py DELETED
@@ -1,26 +0,0 @@
1
- import os
2
- import io
3
- import base64
4
- import uuid
5
- from PIL import Image
6
-
7
- # Helper functions for image processing
8
- def encode_image(image_path: str) -> str:
9
- """Convert an image file to base64 string."""
10
- with open(image_path, "rb") as image_file:
11
- return base64.b64encode(image_file.read()).decode("utf-8")
12
-
13
-
14
- def decode_image(base64_string: str) -> Image.Image:
15
- """Convert a base64 string to a PIL Image."""
16
- image_data = base64.b64decode(base64_string)
17
- return Image.open(io.BytesIO(image_data))
18
-
19
-
20
- def save_image(image: Image.Image, directory: str = "image_outputs") -> str:
21
- """Save a PIL Image to disk and return the path."""
22
- os.makedirs(directory, exist_ok=True)
23
- image_id = str(uuid.uuid4())
24
- image_path = os.path.join(directory, f"{image_id}.png")
25
- image.save(image_path)
26
- return image_path
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
notebook/.ipynb_checkpoints/notebook-checkpoint.ipynb ADDED
@@ -0,0 +1,1278 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "ad1eb5e0-f01c-4cb2-9c33-8a21e8d4a367",
6
+ "metadata": {},
7
+ "source": [
8
+ "# TASK"
9
+ ]
10
+ },
11
+ {
12
+ "cell_type": "markdown",
13
+ "id": "25b53d6a-da20-4b9f-880e-7b308805efcb",
14
+ "metadata": {},
15
+ "source": [
16
+ "The task is to utilize knowledge from the [**HuggingFace Agents Course**](https://huggingface.co/learn/agents-course/) to implement an agent capable of tackling the GAIA questions.\n",
17
+ "\n",
18
+ "[**GAIA**](https://huggingface.co/papers/2311.12983) is a benchmark designed to evaluate AI Agents on reasoning, multimodal understanding, web browsing, tool-use capabilities.\n",
19
+ "It features a collection of questions posing real-world difficulty easy human interpretability, brute-force resistance, and easy evaluation.\n",
20
+ "Questions are organized into three levels of difficulty where level 1 questionsrequire minimal tool use and planning steps while level 3 tasks on the far end demand advanced tool-use and deeply involved planning.\n",
21
+ "The course samples 20 questions from the level 1 group and sets a pass criteria of 30% correct answers as criteria for passing the assessment."
22
+ ]
23
+ },
24
+ {
25
+ "cell_type": "markdown",
26
+ "id": "40492b8a-f87d-4072-9723-33d9f9a64312",
27
+ "metadata": {},
28
+ "source": [
29
+ "# GOALS\n",
30
+ "- Implement an Agent using the LangGraph Framework\n",
31
+ "- Setup API Keys for access to external tools\n",
32
+ "- Design tools to help the agent tackle the problem\n",
33
+ "- Create the Agent\n",
34
+ "- Intergrate the agent into the submission app"
35
+ ]
36
+ },
37
+ {
38
+ "cell_type": "markdown",
39
+ "id": "b8b893ac-49ec-44ef-bc90-2abb134df094",
40
+ "metadata": {},
41
+ "source": [
42
+ "# IMPORTS"
43
+ ]
44
+ },
45
+ {
46
+ "cell_type": "code",
47
+ "execution_count": null,
48
+ "id": "75727846-da05-4b15-a9ad-fbd4497757d4",
49
+ "metadata": {},
50
+ "outputs": [],
51
+ "source": [
52
+ "import os\n",
53
+ "from dotenv import load_dotenv\n",
54
+ "from langgraph.graph import START, StateGraph, MessagesState\n",
55
+ "from langgraph.prebuilt import tools_condition\n",
56
+ "from langgraph.prebuilt import ToolNode\n",
57
+ "from langchain_google_genai import ChatGoogleGenerativeAI\n",
58
+ "from langchain_groq import ChatGroq\n",
59
+ "from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint, HuggingFaceEmbeddings\n",
60
+ "from langchain_community.tools.tavily_search import TavilySearchResults\n",
61
+ "from langchain_community.document_loaders import WikipediaLoader\n",
62
+ "from langchain_community.document_loaders import ArxivLoader\n",
63
+ "from langchain_community.vectorstores import SupabaseVectorStore\n",
64
+ "from langchain_core.messages import SystemMessage, HumanMessage\n",
65
+ "from langchain_core.tools import tool\n",
66
+ "from langchain.tools.retriever import create_retriever_tool\n",
67
+ "from supabase.client import Client, create_client"
68
+ ]
69
+ },
70
+ {
71
+ "cell_type": "markdown",
72
+ "id": "30165673-65d8-46e2-a5db-1a357c30d09f",
73
+ "metadata": {},
74
+ "source": [
75
+ "# API KEYS"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "raw",
80
+ "id": "50a3f698-56df-4ea6-a236-29ed7fadac7d",
81
+ "metadata": {},
82
+ "source": [
83
+ "SUPABASE_URL\n",
84
+ "SUPABASE_SERVICE_KEY\n",
85
+ "SUPABASE_SERVICE_ROLE_KEY\n",
86
+ "HF_TOKEN"
87
+ ]
88
+ },
89
+ {
90
+ "cell_type": "code",
91
+ "execution_count": null,
92
+ "id": "dd06e727-2073-406b-b76a-876f4a1bf96a",
93
+ "metadata": {},
94
+ "outputs": [],
95
+ "source": [
96
+ "load_dotenv()"
97
+ ]
98
+ },
99
+ {
100
+ "cell_type": "markdown",
101
+ "id": "c8f3a1f0-5f0c-4560-af5e-b2a8edb79aef",
102
+ "metadata": {},
103
+ "source": [
104
+ "# TOOLS"
105
+ ]
106
+ },
107
+ {
108
+ "cell_type": "markdown",
109
+ "id": "44868006-ce9a-4d3d-b1b8-2d7f2261c3b6",
110
+ "metadata": {},
111
+ "source": [
112
+ "Difficulty in the GAIA benchmark extends beyonds just reasoning. Various questions require extracting information from accompanying files of various modalities. To ensure the Agent is up to the task, utility functions need to be pre-built and made available to the Agent. This reduces complexity and introduces some reliability in conducting similar tasks in a reprodicible way. Such tools also account for known LLM shortfalls and extend the capabilities of the LLM with targeted functionalities."
113
+ ]
114
+ },
115
+ {
116
+ "cell_type": "code",
117
+ "execution_count": null,
118
+ "id": "e697c83f-3e87-48a5-a142-3caace4c85d8",
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "# load the system prompt from the file\n",
123
+ "with open(\"../prompts/system_prompt.txt\", \"r\", encoding=\"utf-8\") as f:\n",
124
+ " system_prompt = f.read()\n",
125
+ "print(system_prompt)\n",
126
+ "\n",
127
+ "# System message\n",
128
+ "sys_msg = SystemMessage(content=system_prompt)\n",
129
+ "\n",
130
+ "# build a retriever\n",
131
+ "embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\") # dim=768\n",
132
+ "supabase: Client = create_client(os.environ.get(\"SUPABASE_URL\"), os.environ.get(\"SUPABASE_SERVICE_ROLE_KEY\"))\n",
133
+ "vector_store = SupabaseVectorStore(client=supabase, embedding=embeddings, table_name=\"documents2\", query_name=\"match_documents_2\")\n",
134
+ "create_retriever_tool = create_retriever_tool(retriever=vector_store.as_retriever(), name=\"Question Search\", description=\"A tool to retrieve similar questions from a vector store.\")"
135
+ ]
136
+ },
137
+ {
138
+ "cell_type": "markdown",
139
+ "id": "72fbbd0d-c9a9-4af8-9010-739d035e3c24",
140
+ "metadata": {},
141
+ "source": [
142
+ "## WEB SEARCH"
143
+ ]
144
+ },
145
+ {
146
+ "cell_type": "code",
147
+ "execution_count": null,
148
+ "id": "ddac5160-1a67-46e1-8a43-1e341381e1b7",
149
+ "metadata": {},
150
+ "outputs": [],
151
+ "source": [
152
+ "# web search\n",
153
+ "import os\n",
154
+ "from supabase.client import Client, create_client\n",
155
+ "from langchain_core.tools import tool\n",
156
+ "from langchain_community.tools.tavily_search import TavilySearchResults\n",
157
+ "from langchain_community.document_loaders import WikipediaLoader\n",
158
+ "from langchain_community.document_loaders import ArxivLoader\n",
159
+ "from langchain_huggingface import HuggingFaceEmbeddings\n",
160
+ "from langchain_community.vectorstores import SupabaseVectorStore\n",
161
+ "from langchain.tools.retriever import create_retriever_tool"
162
+ ]
163
+ },
164
+ {
165
+ "cell_type": "code",
166
+ "execution_count": null,
167
+ "id": "7106a566-af9f-43c4-8a97-b594ae3592e4",
168
+ "metadata": {},
169
+ "outputs": [],
170
+ "source": [
171
+ "@tool\n",
172
+ "def wiki_search(query: str) -> str:\n",
173
+ " \"\"\"Search Wikipedia for a query and return maximum 2 results.\n",
174
+ " \n",
175
+ " Args:\n",
176
+ " query: The search query.\"\"\"\n",
177
+ " search_docs = WikipediaLoader(query=query, load_max_docs=2).load()\n",
178
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content}\\n</Document>' for doc in search_docs])\n",
179
+ " return {\"wiki_results\": formatted_search_docs}\n",
180
+ "\n",
181
+ "@tool\n",
182
+ "def web_search(query: str) -> str:\n",
183
+ " \"\"\"Search Tavily for a query and return maximum 3 results.\n",
184
+ " \n",
185
+ " Args:\n",
186
+ " query: The search query.\"\"\"\n",
187
+ " search_docs = TavilySearchResults(max_results=3).invoke(query=query)\n",
188
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content}\\n</Document>' for doc in search_docs])\n",
189
+ " return {\"web_results\": formatted_search_docs}\n",
190
+ "\n",
191
+ "@tool\n",
192
+ "def arvix_search(query: str) -> str:\n",
193
+ " \"\"\"Search Arxiv for a query and return maximum 3 result.\n",
194
+ " \n",
195
+ " Args:\n",
196
+ " query: The search query.\"\"\"\n",
197
+ " search_docs = ArxivLoader(query=query, load_max_docs=3).load()\n",
198
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content[:1000]}\\n</Document>' for doc in search_docs])\n",
199
+ " return {\"arvix_results\": formatted_search_docs}\n",
200
+ "\n",
201
+ "@tool\n",
202
+ "def similar_question_search(question: str) -> str:\n",
203
+ " \"\"\"Search the vector database for similar questions and return the first results.\n",
204
+ " \n",
205
+ " Args:\n",
206
+ " question: the question human provided.\"\"\"\n",
207
+ " matched_docs = vector_store.similarity_search(question, 3)\n",
208
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content[:1000]}\\n</Document>' for doc in matched_docs])\n",
209
+ " return {\"similar_questions\": formatted_search_docs}"
210
+ ]
211
+ },
212
+ {
213
+ "cell_type": "markdown",
214
+ "id": "5884084b-5983-4abc-b0ee-6907923077f3",
215
+ "metadata": {},
216
+ "source": [
217
+ "## BASIC CALCULATOR"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "code",
222
+ "execution_count": null,
223
+ "id": "d25dc2fe-abcf-4b09-9469-6428b604d620",
224
+ "metadata": {},
225
+ "outputs": [],
226
+ "source": [
227
+ "# basic calculator\n",
228
+ "from langchain_core.tools import tool"
229
+ ]
230
+ },
231
+ {
232
+ "cell_type": "code",
233
+ "execution_count": null,
234
+ "id": "0d5ec778-4e12-4309-bf93-3ca20a155fca",
235
+ "metadata": {},
236
+ "outputs": [],
237
+ "source": [
238
+ "@tool\n",
239
+ "def multiply(a: float, b: float) -> float:\n",
240
+ " \"\"\"\n",
241
+ " Multiplies two numbers.\n",
242
+ " Args:\n",
243
+ " a (float): the first number\n",
244
+ " b (float): the second number\n",
245
+ " \"\"\"\n",
246
+ " return a * b\n",
247
+ "\n",
248
+ "@tool\n",
249
+ "def add(a: float, b: float) -> float:\n",
250
+ " \"\"\"\n",
251
+ " Adds two numbers.\n",
252
+ " Args:\n",
253
+ " a (float): the first number\n",
254
+ " b (float): the second number\n",
255
+ " \"\"\"\n",
256
+ " return a + b\n",
257
+ "\n",
258
+ "@tool\n",
259
+ "def subtract(a: float, b: float) -> int:\n",
260
+ " \"\"\"\n",
261
+ " Subtracts two numbers.\n",
262
+ " Args:\n",
263
+ " a (float): the first number\n",
264
+ " b (float): the second number\n",
265
+ " \"\"\"\n",
266
+ " return a - b\n",
267
+ "\n",
268
+ "@tool\n",
269
+ "def divide(a: float, b: float) -> float:\n",
270
+ " \"\"\"\n",
271
+ " Divides two numbers.\n",
272
+ " Args:\n",
273
+ " a (float): the first float number\n",
274
+ " b (float): the second float number\n",
275
+ " \"\"\"\n",
276
+ " if b == 0:\n",
277
+ " raise ValueError(\"Cannot divided by zero.\")\n",
278
+ " return a / b\n",
279
+ "\n",
280
+ "@tool\n",
281
+ "def modulus(a: int, b: int) -> int:\n",
282
+ " \"\"\"\n",
283
+ " Get the modulus of two numbers.\n",
284
+ " Args:\n",
285
+ " a (int): the first number\n",
286
+ " b (int): the second number\n",
287
+ " \"\"\"\n",
288
+ " return a % b\n",
289
+ "\n",
290
+ "@tool\n",
291
+ "def power(a: float, b: float) -> float:\n",
292
+ " \"\"\"\n",
293
+ " Get the power of two numbers.\n",
294
+ " Args:\n",
295
+ " a (float): the first number\n",
296
+ " b (float): the second number\n",
297
+ " \"\"\"\n",
298
+ " return a**b\n",
299
+ "\n",
300
+ "@tool\n",
301
+ "def square_root(a: float) -> float | complex:\n",
302
+ " \"\"\"\n",
303
+ " Get the square root of a number.\n",
304
+ " Args:\n",
305
+ " a (float): the number to get the square root of\n",
306
+ " \"\"\"\n",
307
+ " if a >= 0:\n",
308
+ " return a**0.5\n",
309
+ " return cmath.sqrt(a)\n",
310
+ "\n",
311
+ "@tool\n",
312
+ "def count_substring(substring:str, text:str) -> int:\n",
313
+ " \"\"\"\n",
314
+ " Get the number of occurences of a substring within some text. Useful for 'How many (substring) are in (text)?'\n",
315
+ " Args:\n",
316
+ " substring (str): the substring to check for.\n",
317
+ " text (str): the text to search through.\n",
318
+ " \"\"\"\n",
319
+ " return text.count(substring)"
320
+ ]
321
+ },
322
+ {
323
+ "cell_type": "markdown",
324
+ "id": "9d4c473f-8523-431a-80c4-fc16618d7c86",
325
+ "metadata": {},
326
+ "source": [
327
+ "## CODE INTERPRETER"
328
+ ]
329
+ },
330
+ {
331
+ "cell_type": "code",
332
+ "execution_count": null,
333
+ "id": "5c3a5072-b1aa-4489-96d4-08d0d925ebfd",
334
+ "metadata": {},
335
+ "outputs": [],
336
+ "source": [
337
+ "# code interpreter\n",
338
+ "import os\n",
339
+ "import io\n",
340
+ "import sys\n",
341
+ "import uuid\n",
342
+ "import base64\n",
343
+ "import traceback\n",
344
+ "import contextlib\n",
345
+ "import tempfile\n",
346
+ "import subprocess\n",
347
+ "import sqlite3\n",
348
+ "from typing import Dict, List, Any, Optional, Union\n",
349
+ "import numpy as np\n",
350
+ "import pandas as pd\n",
351
+ "import matplotlib.pyplot as plt\n",
352
+ "from PIL import Image\n",
353
+ "from langchain_core.tools import tool"
354
+ ]
355
+ },
356
+ {
357
+ "cell_type": "code",
358
+ "execution_count": null,
359
+ "id": "43f314d3-f7b7-4bf1-9500-0c0b8f234412",
360
+ "metadata": {},
361
+ "outputs": [],
362
+ "source": [
363
+ "class CodeInterpreter:\n",
364
+ " def __init__(self, allowed_modules=None, max_execution_time=30, working_directory=None):\n",
365
+ " \"\"\"Initialize the code interpreter with safety measures.\"\"\"\n",
366
+ " \n",
367
+ " self.allowed_modules = allowed_modules or [\"numpy\", \"pandas\", \"matplotlib\", \"scipy\", \"sklearn\", \"math\", \"random\", \"statistics\", \"datetime\", \"collections\",\n",
368
+ " \"itertools\", \"functools\", \"operator\", \"re\", \"json\", \"sympy\", \"networkx\", \"nltk\", \"PIL\", \"pytesseract\", \"cmath\", \"uuid\", \"tempfile\", \"requests\", \"urllib\"]\n",
369
+ " \n",
370
+ " self.max_execution_time = max_execution_time\n",
371
+ " self.working_directory = working_directory or os.path.join(os.getcwd()) \n",
372
+ " if not os.path.exists(self.working_directory):\n",
373
+ " os.makedirs(self.working_directory)\n",
374
+ " \n",
375
+ " self.globals = {\"__builtins__\": __builtins__, \"np\": np, \"pd\": pd, \"plt\": plt, \"Image\": Image}\n",
376
+ " self.temp_sqlite_db = os.path.join(tempfile.gettempdir(), \"code_exec.db\")\n",
377
+ "\n",
378
+ " def execute_code(self, code: str, language: str = \"python\") -> Dict[str, Any]:\n",
379
+ " \"\"\"Execute the provided code in the selected programming language.\"\"\"\n",
380
+ " language = language.lower()\n",
381
+ " execution_id = str(uuid.uuid4())\n",
382
+ " \n",
383
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
384
+ " \n",
385
+ " try:\n",
386
+ " if language == \"python\":\n",
387
+ " return self._execute_python(code, execution_id)\n",
388
+ " elif language == \"bash\":\n",
389
+ " return self._execute_bash(code, execution_id)\n",
390
+ " elif language == \"sql\":\n",
391
+ " return self._execute_sql(code, execution_id)\n",
392
+ " elif language == \"c\":\n",
393
+ " return self._execute_c(code, execution_id)\n",
394
+ " elif language == \"java\":\n",
395
+ " return self._execute_java(code, execution_id)\n",
396
+ " else:\n",
397
+ " result[\"stderr\"] = f\"Unsupported language: {language}\"\n",
398
+ " except Exception as e:\n",
399
+ " result[\"stderr\"] = str(e)\n",
400
+ " \n",
401
+ " return result\n",
402
+ "\n",
403
+ " def _execute_python(self, code: str, execution_id: str) -> dict:\n",
404
+ " output_buffer = io.StringIO()\n",
405
+ " error_buffer = io.StringIO()\n",
406
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
407
+ " \n",
408
+ " try:\n",
409
+ " exec_dir = os.path.join(self.working_directory, execution_id)\n",
410
+ " os.makedirs(exec_dir, exist_ok=True)\n",
411
+ " plt.switch_backend('Agg')\n",
412
+ " \n",
413
+ " with contextlib.redirect_stdout(output_buffer), contextlib.redirect_stderr(error_buffer):\n",
414
+ " exec_result = exec(code, self.globals)\n",
415
+ "\n",
416
+ " if plt.get_fignums():\n",
417
+ " for i, fig_num in enumerate(plt.get_fignums()):\n",
418
+ " fig = plt.figure(fig_num)\n",
419
+ " img_path = os.path.join(exec_dir, f\"plot_{i}.png\")\n",
420
+ " fig.savefig(img_path)\n",
421
+ " with open(img_path, \"rb\") as img_file:\n",
422
+ " img_data = base64.b64encode(img_file.read()).decode('utf-8')\n",
423
+ " result[\"plots\"].append({\"figure_number\": fig_num, \"data\": img_data})\n",
424
+ "\n",
425
+ " for var_name, var_value in self.globals.items():\n",
426
+ " if isinstance(var_value, pd.DataFrame) and len(var_value) > 0:\n",
427
+ " result[\"dataframes\"].append({\"name\": var_name, \"head\": var_value.head().to_dict(), \"shape\": var_value.shape, \"dtypes\": str(var_value.dtypes)})\n",
428
+ " \n",
429
+ " result[\"status\"] = \"success\"\n",
430
+ " result[\"stdout\"] = output_buffer.getvalue()\n",
431
+ " result[\"result\"] = exec_result\n",
432
+ " \n",
433
+ " except Exception as e:\n",
434
+ " result[\"status\"] = \"error\"\n",
435
+ " result[\"stderr\"] = f\"{error_buffer.getvalue()}\\n{traceback.format_exc()}\"\n",
436
+ " \n",
437
+ " return result\n",
438
+ "\n",
439
+ " def _execute_bash(self, code: str, execution_id: str) -> dict:\n",
440
+ " try:\n",
441
+ " completed = subprocess.run(code, shell=True, capture_output=True, text=True, timeout=self.max_execution_time)\n",
442
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if completed.returncode == 0 else \"error\", \"stdout\": completed.stdout, \"stderr\": completed.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
443
+ " except subprocess.TimeoutExpired:\n",
444
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"Execution timed out.\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
445
+ "\n",
446
+ " def _execute_sql(self, code: str, execution_id: str) -> dict:\n",
447
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
448
+ " try:\n",
449
+ " conn = sqlite3.connect(self.temp_sqlite_db)\n",
450
+ " cur = conn.cursor()\n",
451
+ " cur.execute(code)\n",
452
+ " if code.strip().lower().startswith(\"select\"):\n",
453
+ " columns = [description[0] for description in cur.description]\n",
454
+ " rows = cur.fetchall()\n",
455
+ " df = pd.DataFrame(rows, columns=columns)\n",
456
+ " result[\"dataframes\"].append({\"name\": \"query_result\", \"head\": df.head().to_dict(), \"shape\": df.shape, \"dtypes\": str(df.dtypes)})\n",
457
+ " else:\n",
458
+ " conn.commit()\n",
459
+ " result[\"status\"] = \"success\"\n",
460
+ " result[\"stdout\"] = \"Query executed successfully.\"\n",
461
+ "\n",
462
+ " except Exception as e:\n",
463
+ " result[\"stderr\"] = str(e)\n",
464
+ " finally:\n",
465
+ " conn.close()\n",
466
+ "\n",
467
+ " return result\n",
468
+ "\n",
469
+ " def _execute_c(self, code: str, execution_id: str) -> dict:\n",
470
+ " temp_dir = tempfile.mkdtemp()\n",
471
+ " source_path = os.path.join(temp_dir, \"program.c\")\n",
472
+ " binary_path = os.path.join(temp_dir, \"program\")\n",
473
+ "\n",
474
+ " try:\n",
475
+ " with open(source_path, \"w\") as f:\n",
476
+ " f.write(code)\n",
477
+ "\n",
478
+ " compile_proc = subprocess.run([\"gcc\", source_path, \"-o\", binary_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
479
+ " if compile_proc.returncode != 0:\n",
480
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": compile_proc.stdout, \"stderr\": compile_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
481
+ "\n",
482
+ " run_proc = subprocess.run([binary_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
483
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if run_proc.returncode == 0 else \"error\", \"stdout\": run_proc.stdout, \"stderr\": run_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
484
+ " except Exception as e: return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": str(e), \"result\": None, \"plots\": [], \"dataframes\": []}\n",
485
+ "\n",
486
+ " def _execute_java(self, code: str, execution_id: str) -> dict:\n",
487
+ " temp_dir = tempfile.mkdtemp()\n",
488
+ " source_path = os.path.join(temp_dir, \"Main.java\")\n",
489
+ "\n",
490
+ " try:\n",
491
+ " with open(source_path, \"w\") as f:\n",
492
+ " f.write(code)\n",
493
+ "\n",
494
+ " compile_proc = subprocess.run([\"javac\", source_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
495
+ " if compile_proc.returncode != 0:\n",
496
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": compile_proc.stdout, \"stderr\": compile_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
497
+ "\n",
498
+ " run_proc = subprocess.run([\"java\", \"-cp\", temp_dir, \"Main\"], capture_output=True, text=True, timeout=self.max_execution_time)\n",
499
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if run_proc.returncode == 0 else \"error\", \"stdout\": run_proc.stdout, \"stderr\": run_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
500
+ " except Exception as e:\n",
501
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": str(e), \"result\": None, \"plots\": [], \"dataframes\": []}\n",
502
+ "\n",
503
+ "interpreter_instance = CodeInterpreter()\n",
504
+ "\n",
505
+ "@tool\n",
506
+ "def execute_code_multilang(code: str, language: str = \"python\") -> str:\n",
507
+ " \"\"\"Execute code in multiple languages (Python, Bash, SQL, C, Java) and return results.\n",
508
+ " Args:\n",
509
+ " code (str): The source code to execute.\n",
510
+ " language (str): The language of the code. Supported: \"python\", \"bash\", \"sql\", \"c\", \"java\".\n",
511
+ " Returns:\n",
512
+ " A string summarizing the execution results (stdout, stderr, errors, plots, dataframes if any).\n",
513
+ " \"\"\"\n",
514
+ " supported_languages = [\"python\", \"bash\", \"sql\", \"c\", \"java\"]\n",
515
+ " language = language.lower()\n",
516
+ "\n",
517
+ " if language not in supported_languages:\n",
518
+ " return f\"❌ Unsupported language: {language}. Supported languages are: {', '.join(supported_languages)}\"\n",
519
+ "\n",
520
+ " result = interpreter_instance.execute_code(code, language=language)\n",
521
+ "\n",
522
+ " response = []\n",
523
+ "\n",
524
+ " if result[\"status\"] == \"success\":\n",
525
+ " response.append(f\"βœ… Code executed successfully in **{language.upper()}**\")\n",
526
+ "\n",
527
+ " if result.get(\"stdout\"):\n",
528
+ " response.append(\"\\n**Standard Output:**\\n```\\n\" + result[\"stdout\"].strip() + \"\\n```\")\n",
529
+ "\n",
530
+ " if result.get(\"stderr\"):\n",
531
+ " response.append(\n",
532
+ " \"\\n**Standard Error (if any):**\\n```\\n\"\n",
533
+ " + result[\"stderr\"].strip() + \"\\n```\")\n",
534
+ "\n",
535
+ " if result.get(\"result\") is not None:\n",
536
+ " response.append(\n",
537
+ " \"\\n**Execution Result:**\\n```\\n\"\n",
538
+ " + str(result[\"result\"]).strip() + \"\\n```\")\n",
539
+ "\n",
540
+ " if result.get(\"dataframes\"):\n",
541
+ " for df_info in result[\"dataframes\"]:\n",
542
+ " response.append(f\"\\n**DataFrame `{df_info['name']}` (Shape: {df_info['shape']})**\")\n",
543
+ " df_preview = pd.DataFrame(df_info[\"head\"])\n",
544
+ " response.append(\"First 5 rows:\\n```\\n\" + str(df_preview) + \"\\n```\")\n",
545
+ "\n",
546
+ " if result.get(\"plots\"):\n",
547
+ " response.append(f\"\\n**Generated {len(result['plots'])} plot(s)** (Image data returned separately)\")\n",
548
+ "\n",
549
+ " else:\n",
550
+ " response.append(f\"❌ Code execution failed in **{language.upper()}**\")\n",
551
+ " if result.get(\"stderr\"):\n",
552
+ " response.append(\"\\n**Error Log:**\\n```\\n\" + result[\"stderr\"].strip() + \"\\n```\")\n",
553
+ "\n",
554
+ " return \"\\n\".join(response)"
555
+ ]
556
+ },
557
+ {
558
+ "cell_type": "markdown",
559
+ "id": "c02491df-6943-4dcc-b477-4c876d6b200c",
560
+ "metadata": {},
561
+ "source": [
562
+ "## DOCUMENT PROCESSING"
563
+ ]
564
+ },
565
+ {
566
+ "cell_type": "code",
567
+ "execution_count": null,
568
+ "id": "cf052d13-a91a-4271-9a37-358bd34d712b",
569
+ "metadata": {},
570
+ "outputs": [],
571
+ "source": [
572
+ "# document processing\n",
573
+ "import os\n",
574
+ "import uuid\n",
575
+ "import requests\n",
576
+ "import tempfile\n",
577
+ "from PIL import Image\n",
578
+ "import pytesseract\n",
579
+ "import pandas as pd\n",
580
+ "from urllib.parse import urlparse\n",
581
+ "from langchain_core.tools import tool\n",
582
+ "from typing import List, Dict, Any, Optional"
583
+ ]
584
+ },
585
+ {
586
+ "cell_type": "code",
587
+ "execution_count": null,
588
+ "id": "e0cd532e-644e-4a5a-a90e-53ba66a40250",
589
+ "metadata": {},
590
+ "outputs": [],
591
+ "source": [
592
+ "@tool\n",
593
+ "def save_and_read_file(content: str, filename: Optional[str] = None) -> str:\n",
594
+ " \"\"\"\n",
595
+ " Save content to a file and return the path.\n",
596
+ " Args:\n",
597
+ " content (str): the content to save to the file\n",
598
+ " filename (str, optional): the name of the file. If not provided, a random name file will be created.\n",
599
+ " \"\"\"\n",
600
+ " temp_dir = tempfile.gettempdir()\n",
601
+ " if filename is None:\n",
602
+ " temp_file = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)\n",
603
+ " filepath = temp_file.name\n",
604
+ " else:\n",
605
+ " filepath = os.path.join(temp_dir, filename)\n",
606
+ "\n",
607
+ " with open(filepath, \"w\") as f:\n",
608
+ " f.write(content)\n",
609
+ "\n",
610
+ " return f\"File saved to {filepath}. You can read this file to process its contents.\"\n",
611
+ "\n",
612
+ "@tool\n",
613
+ "def download_file_from_url(url: str, filename: Optional[str] = None) -> str:\n",
614
+ " \"\"\"\n",
615
+ " Download a file from a URL and save it to a temporary location.\n",
616
+ " Args:\n",
617
+ " url (str): the URL of the file to download.\n",
618
+ " filename (str, optional): the name of the file. If not provided, a random name file will be created.\n",
619
+ " \"\"\"\n",
620
+ " try:\n",
621
+ " # Parse URL to get filename if not provided\n",
622
+ " if not filename:\n",
623
+ " path = urlparse(url).path\n",
624
+ " filename = os.path.basename(path)\n",
625
+ " if not filename:\n",
626
+ " filename = f\"downloaded_{uuid.uuid4().hex[:8]}\"\n",
627
+ "\n",
628
+ " # Create temporary file\n",
629
+ " temp_dir = tempfile.gettempdir()\n",
630
+ " filepath = os.path.join(temp_dir, filename)\n",
631
+ "\n",
632
+ " # Download the file\n",
633
+ " response = requests.get(url, stream=True)\n",
634
+ " response.raise_for_status()\n",
635
+ "\n",
636
+ " # Save the file\n",
637
+ " with open(filepath, \"wb\") as f:\n",
638
+ " for chunk in response.iter_content(chunk_size=8192):\n",
639
+ " f.write(chunk)\n",
640
+ "\n",
641
+ " return f\"File downloaded to {filepath}. You can read this file to process its contents.\"\n",
642
+ " except Exception as e:\n",
643
+ " return f\"Error downloading file: {str(e)}\"\n",
644
+ "\n",
645
+ "@tool\n",
646
+ "def extract_text_from_image(image_path: str) -> str:\n",
647
+ " \"\"\"\n",
648
+ " Extract text from an image using OCR library pytesseract (if available).\n",
649
+ " Args:\n",
650
+ " image_path (str): the path to the image file.\n",
651
+ " \"\"\"\n",
652
+ " try:\n",
653
+ " # Open the image\n",
654
+ " image = Image.open(image_path)\n",
655
+ "\n",
656
+ " # Extract text from the image\n",
657
+ " text = pytesseract.image_to_string(image)\n",
658
+ "\n",
659
+ " return f\"Extracted text from image:\\n\\n{text}\"\n",
660
+ " except Exception as e:\n",
661
+ " return f\"Error extracting text from image: {str(e)}\"\n",
662
+ "\n",
663
+ "@tool\n",
664
+ "def analyze_csv_file(file_path: str, query: str) -> str:\n",
665
+ " \"\"\"\n",
666
+ " Analyze a CSV file using pandas and answer a question about it.\n",
667
+ " Args:\n",
668
+ " file_path (str): the path to the CSV file.\n",
669
+ " query (str): Question about the data\n",
670
+ " \"\"\"\n",
671
+ " try:\n",
672
+ " # Read the CSV file\n",
673
+ " df = pd.read_csv(file_path)\n",
674
+ "\n",
675
+ " # Run various analyses based on the query\n",
676
+ " result = f\"CSV file loaded with {len(df)} rows and {len(df.columns)} columns.\\n\"\n",
677
+ " result += f\"Columns: {', '.join(df.columns)}\\n\\n\"\n",
678
+ "\n",
679
+ " # Add summary statistics\n",
680
+ " result += \"Summary statistics:\\n\"\n",
681
+ " result += str(df.describe())\n",
682
+ "\n",
683
+ " return result\n",
684
+ "\n",
685
+ " except Exception as e:\n",
686
+ " return f\"Error analyzing CSV file: {str(e)}\"\n",
687
+ "\n",
688
+ "@tool\n",
689
+ "def analyze_excel_file(file_path: str, query: str) -> str:\n",
690
+ " \"\"\"\n",
691
+ " Analyze an Excel file using pandas and answer a question about it.\n",
692
+ " Args:\n",
693
+ " file_path (str): the path to the Excel file.\n",
694
+ " query (str): Question about the data\n",
695
+ " \"\"\"\n",
696
+ " try:\n",
697
+ " # Read the Excel file\n",
698
+ " df = pd.read_excel(file_path)\n",
699
+ "\n",
700
+ " # Run various analyses based on the query\n",
701
+ " result = (\n",
702
+ " f\"Excel file loaded with {len(df)} rows and {len(df.columns)} columns.\\n\"\n",
703
+ " )\n",
704
+ " result += f\"Columns: {', '.join(df.columns)}\\n\\n\"\n",
705
+ "\n",
706
+ " # Add summary statistics\n",
707
+ " result += \"Summary statistics:\\n\"\n",
708
+ " result += str(df.describe())\n",
709
+ "\n",
710
+ " return result\n",
711
+ "\n",
712
+ " except Exception as e:\n",
713
+ " return f\"Error analyzing Excel file: {str(e)}\"\n"
714
+ ]
715
+ },
716
+ {
717
+ "cell_type": "markdown",
718
+ "id": "2747e5da-61fb-4c0a-ae9e-4e09f6c490e0",
719
+ "metadata": {},
720
+ "source": [
721
+ "## IMAGE PROCESSING"
722
+ ]
723
+ },
724
+ {
725
+ "cell_type": "code",
726
+ "execution_count": null,
727
+ "id": "8304d8c5-2a28-4ba6-980d-86a14592eb60",
728
+ "metadata": {},
729
+ "outputs": [],
730
+ "source": [
731
+ "# image processing\n",
732
+ "import os\n",
733
+ "import io\n",
734
+ "import uuid\n",
735
+ "import base64\n",
736
+ "import numpy as np\n",
737
+ "from PIL import Image\n",
738
+ "from langchain_core.tools import tool\n",
739
+ "from typing import List, Dict, Any, Optional\n",
740
+ "from PIL import Image, ImageDraw, ImageFont, ImageEnhance, ImageFilter"
741
+ ]
742
+ },
743
+ {
744
+ "cell_type": "code",
745
+ "execution_count": null,
746
+ "id": "b9766e75-42b6-413c-96d4-ccb3380e8498",
747
+ "metadata": {},
748
+ "outputs": [],
749
+ "source": [
750
+ "# Helper functions for image processing\n",
751
+ "def encode_image(image_path: str) -> str:\n",
752
+ " \"\"\"Convert an image file to base64 string.\"\"\"\n",
753
+ " with open(image_path, \"rb\") as image_file:\n",
754
+ " return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
755
+ "\n",
756
+ "def decode_image(base64_string: str) -> Image.Image:\n",
757
+ " \"\"\"Convert a base64 string to a PIL Image.\"\"\"\n",
758
+ " image_data = base64.b64decode(base64_string)\n",
759
+ " return Image.open(io.BytesIO(image_data))\n",
760
+ "\n",
761
+ "def save_image(image: Image.Image, directory: str = \"image_outputs\") -> str:\n",
762
+ " \"\"\"Save a PIL Image to disk and return the path.\"\"\"\n",
763
+ " os.makedirs(directory, exist_ok=True)\n",
764
+ " image_id = str(uuid.uuid4())\n",
765
+ " image_path = os.path.join(directory, f\"{image_id}.png\")\n",
766
+ " image.save(image_path)\n",
767
+ " return image_path\n",
768
+ "\n",
769
+ "@tool\n",
770
+ "def analyze_image(image_base64: str) -> Dict[str, Any]:\n",
771
+ " \"\"\"\n",
772
+ " Analyze basic properties of an image (size, mode, color analysis, thumbnail preview).\n",
773
+ " Args:\n",
774
+ " image_base64 (str): Base64 encoded image string\n",
775
+ " Returns:\n",
776
+ " Dictionary with analysis result\n",
777
+ " \"\"\"\n",
778
+ " try:\n",
779
+ " img = decode_image(image_base64)\n",
780
+ " width, height = img.size\n",
781
+ " mode = img.mode\n",
782
+ "\n",
783
+ " if mode in (\"RGB\", \"RGBA\"):\n",
784
+ " arr = np.array(img)\n",
785
+ " avg_colors = arr.mean(axis=(0, 1))\n",
786
+ " dominant = [\"Red\", \"Green\", \"Blue\"][np.argmax(avg_colors[:3])]\n",
787
+ " brightness = avg_colors.mean()\n",
788
+ " color_analysis = {\n",
789
+ " \"average_rgb\": avg_colors.tolist(),\n",
790
+ " \"brightness\": brightness,\n",
791
+ " \"dominant_color\": dominant,\n",
792
+ " }\n",
793
+ " else:\n",
794
+ " color_analysis = {\"note\": f\"No color analysis for mode {mode}\"}\n",
795
+ "\n",
796
+ " thumbnail = img.copy()\n",
797
+ " thumbnail.thumbnail((100, 100))\n",
798
+ " thumb_path = save_image(thumbnail, \"thumbnails\")\n",
799
+ " thumbnail_base64 = encode_image(thumb_path)\n",
800
+ "\n",
801
+ " return {\n",
802
+ " \"dimensions\": (width, height),\n",
803
+ " \"mode\": mode,\n",
804
+ " \"color_analysis\": color_analysis,\n",
805
+ " \"thumbnail\": thumbnail_base64,\n",
806
+ " }\n",
807
+ " except Exception as e:\n",
808
+ " return {\"error\": str(e)}\n",
809
+ "\n",
810
+ "@tool\n",
811
+ "def transform_image(image_base64: str, operation: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
812
+ " \"\"\"\n",
813
+ " Apply transformations: resize, rotate, crop, flip, brightness, contrast, blur, sharpen, grayscale.\n",
814
+ " Args:\n",
815
+ " image_base64 (str): Base64 encoded input image\n",
816
+ " operation (str): Transformation operation\n",
817
+ " params (Dict[str, Any], optional): Parameters for the operation\n",
818
+ " Returns:\n",
819
+ " Dictionary with transformed image (base64)\n",
820
+ " \"\"\"\n",
821
+ " try:\n",
822
+ " img = decode_image(image_base64)\n",
823
+ " params = params or {}\n",
824
+ "\n",
825
+ " if operation == \"resize\":\n",
826
+ " img = img.resize(\n",
827
+ " (\n",
828
+ " params.get(\"width\", img.width // 2),\n",
829
+ " params.get(\"height\", img.height // 2),\n",
830
+ " )\n",
831
+ " )\n",
832
+ " elif operation == \"rotate\":\n",
833
+ " img = img.rotate(params.get(\"angle\", 90), expand=True)\n",
834
+ " elif operation == \"crop\":\n",
835
+ " img = img.crop(\n",
836
+ " (\n",
837
+ " params.get(\"left\", 0),\n",
838
+ " params.get(\"top\", 0),\n",
839
+ " params.get(\"right\", img.width),\n",
840
+ " params.get(\"bottom\", img.height),\n",
841
+ " )\n",
842
+ " )\n",
843
+ " elif operation == \"flip\":\n",
844
+ " if params.get(\"direction\", \"horizontal\") == \"horizontal\":\n",
845
+ " img = img.transpose(Image.FLIP_LEFT_RIGHT)\n",
846
+ " else:\n",
847
+ " img = img.transpose(Image.FLIP_TOP_BOTTOM)\n",
848
+ " elif operation == \"adjust_brightness\":\n",
849
+ " img = ImageEnhance.Brightness(img).enhance(params.get(\"factor\", 1.5))\n",
850
+ " elif operation == \"adjust_contrast\":\n",
851
+ " img = ImageEnhance.Contrast(img).enhance(params.get(\"factor\", 1.5))\n",
852
+ " elif operation == \"blur\":\n",
853
+ " img = img.filter(ImageFilter.GaussianBlur(params.get(\"radius\", 2)))\n",
854
+ " elif operation == \"sharpen\":\n",
855
+ " img = img.filter(ImageFilter.SHARPEN)\n",
856
+ " elif operation == \"grayscale\":\n",
857
+ " img = img.convert(\"L\")\n",
858
+ " else:\n",
859
+ " return {\"error\": f\"Unknown operation: {operation}\"}\n",
860
+ "\n",
861
+ " result_path = save_image(img)\n",
862
+ " result_base64 = encode_image(result_path)\n",
863
+ " return {\"transformed_image\": result_base64}\n",
864
+ "\n",
865
+ " except Exception as e:\n",
866
+ " return {\"error\": str(e)}\n",
867
+ "\n",
868
+ "@tool\n",
869
+ "def draw_on_image(image_base64: str, drawing_type: str, params: Dict[str, Any]) -> Dict[str, Any]:\n",
870
+ " \"\"\"\n",
871
+ " Draw shapes (rectangle, circle, line) or text onto an image.\n",
872
+ " Args:\n",
873
+ " image_base64 (str): Base64 encoded input image\n",
874
+ " drawing_type (str): Drawing type\n",
875
+ " params (Dict[str, Any]): Drawing parameters\n",
876
+ " Returns:\n",
877
+ " Dictionary with result image (base64)\n",
878
+ " \"\"\"\n",
879
+ " try:\n",
880
+ " img = decode_image(image_base64)\n",
881
+ " draw = ImageDraw.Draw(img)\n",
882
+ " color = params.get(\"color\", \"red\")\n",
883
+ "\n",
884
+ " if drawing_type == \"rectangle\":\n",
885
+ " draw.rectangle(\n",
886
+ " [params[\"left\"], params[\"top\"], params[\"right\"], params[\"bottom\"]],\n",
887
+ " outline=color,\n",
888
+ " width=params.get(\"width\", 2),\n",
889
+ " )\n",
890
+ " elif drawing_type == \"circle\":\n",
891
+ " x, y, r = params[\"x\"], params[\"y\"], params[\"radius\"]\n",
892
+ " draw.ellipse(\n",
893
+ " (x - r, y - r, x + r, y + r),\n",
894
+ " outline=color,\n",
895
+ " width=params.get(\"width\", 2),\n",
896
+ " )\n",
897
+ " elif drawing_type == \"line\":\n",
898
+ " draw.line(\n",
899
+ " (\n",
900
+ " params[\"start_x\"],\n",
901
+ " params[\"start_y\"],\n",
902
+ " params[\"end_x\"],\n",
903
+ " params[\"end_y\"],\n",
904
+ " ),\n",
905
+ " fill=color,\n",
906
+ " width=params.get(\"width\", 2),\n",
907
+ " )\n",
908
+ " elif drawing_type == \"text\":\n",
909
+ " font_size = params.get(\"font_size\", 20)\n",
910
+ " try:\n",
911
+ " font = ImageFont.truetype(\"arial.ttf\", font_size)\n",
912
+ " except IOError:\n",
913
+ " font = ImageFont.load_default()\n",
914
+ " draw.text(\n",
915
+ " (params[\"x\"], params[\"y\"]),\n",
916
+ " params.get(\"text\", \"Text\"),\n",
917
+ " fill=color,\n",
918
+ " font=font,\n",
919
+ " )\n",
920
+ " else:\n",
921
+ " return {\"error\": f\"Unknown drawing type: {drawing_type}\"}\n",
922
+ "\n",
923
+ " result_path = save_image(img)\n",
924
+ " result_base64 = encode_image(result_path)\n",
925
+ " return {\"result_image\": result_base64}\n",
926
+ "\n",
927
+ " except Exception as e:\n",
928
+ " return {\"error\": str(e)}\n",
929
+ "\n",
930
+ "@tool\n",
931
+ "def generate_simple_image(image_type: str, width: int = 500, height: int = 500, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
932
+ " \"\"\"\n",
933
+ " Generate a simple image (gradient, noise, pattern, chart).\n",
934
+ " Args:\n",
935
+ " image_type (str): Type of image\n",
936
+ " width (int), height (int)\n",
937
+ " params (Dict[str, Any], optional): Specific parameters\n",
938
+ " Returns:\n",
939
+ " Dictionary with generated image (base64)\n",
940
+ " \"\"\"\n",
941
+ " try:\n",
942
+ " params = params or {}\n",
943
+ "\n",
944
+ " if image_type == \"gradient\":\n",
945
+ " direction = params.get(\"direction\", \"horizontal\")\n",
946
+ " start_color = params.get(\"start_color\", (255, 0, 0))\n",
947
+ " end_color = params.get(\"end_color\", (0, 0, 255))\n",
948
+ "\n",
949
+ " img = Image.new(\"RGB\", (width, height))\n",
950
+ " draw = ImageDraw.Draw(img)\n",
951
+ "\n",
952
+ " if direction == \"horizontal\":\n",
953
+ " for x in range(width):\n",
954
+ " r = int(\n",
955
+ " start_color[0] + (end_color[0] - start_color[0]) * x / width\n",
956
+ " )\n",
957
+ " g = int(\n",
958
+ " start_color[1] + (end_color[1] - start_color[1]) * x / width\n",
959
+ " )\n",
960
+ " b = int(\n",
961
+ " start_color[2] + (end_color[2] - start_color[2]) * x / width\n",
962
+ " )\n",
963
+ " draw.line([(x, 0), (x, height)], fill=(r, g, b))\n",
964
+ " else:\n",
965
+ " for y in range(height):\n",
966
+ " r = int(\n",
967
+ " start_color[0] + (end_color[0] - start_color[0]) * y / height\n",
968
+ " )\n",
969
+ " g = int(\n",
970
+ " start_color[1] + (end_color[1] - start_color[1]) * y / height\n",
971
+ " )\n",
972
+ " b = int(\n",
973
+ " start_color[2] + (end_color[2] - start_color[2]) * y / height\n",
974
+ " )\n",
975
+ " draw.line([(0, y), (width, y)], fill=(r, g, b))\n",
976
+ "\n",
977
+ " elif image_type == \"noise\":\n",
978
+ " noise_array = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)\n",
979
+ " img = Image.fromarray(noise_array, \"RGB\")\n",
980
+ "\n",
981
+ " else:\n",
982
+ " return {\"error\": f\"Unsupported image_type {image_type}\"}\n",
983
+ "\n",
984
+ " result_path = save_image(img)\n",
985
+ " result_base64 = encode_image(result_path)\n",
986
+ " return {\"generated_image\": result_base64}\n",
987
+ "\n",
988
+ " except Exception as e:\n",
989
+ " return {\"error\": str(e)}\n",
990
+ "\n",
991
+ "@tool\n",
992
+ "def combine_images(images_base64: List[str], operation: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
993
+ " \"\"\"\n",
994
+ " Combine multiple images (collage, stack, blend).\n",
995
+ " Args:\n",
996
+ " images_base64 (List[str]): List of base64 images\n",
997
+ " operation (str): Combination type\n",
998
+ " params (Dict[str, Any], optional)\n",
999
+ " Returns:\n",
1000
+ " Dictionary with combined image (base64)\n",
1001
+ " \"\"\"\n",
1002
+ " try:\n",
1003
+ " images = [decode_image(b64) for b64 in images_base64]\n",
1004
+ " params = params or {}\n",
1005
+ "\n",
1006
+ " if operation == \"stack\":\n",
1007
+ " direction = params.get(\"direction\", \"horizontal\")\n",
1008
+ " if direction == \"horizontal\":\n",
1009
+ " total_width = sum(img.width for img in images)\n",
1010
+ " max_height = max(img.height for img in images)\n",
1011
+ " new_img = Image.new(\"RGB\", (total_width, max_height))\n",
1012
+ " x = 0\n",
1013
+ " for img in images:\n",
1014
+ " new_img.paste(img, (x, 0))\n",
1015
+ " x += img.width\n",
1016
+ " else:\n",
1017
+ " max_width = max(img.width for img in images)\n",
1018
+ " total_height = sum(img.height for img in images)\n",
1019
+ " new_img = Image.new(\"RGB\", (max_width, total_height))\n",
1020
+ " y = 0\n",
1021
+ " for img in images:\n",
1022
+ " new_img.paste(img, (0, y))\n",
1023
+ " y += img.height\n",
1024
+ " else:\n",
1025
+ " return {\"error\": f\"Unsupported combination operation {operation}\"}\n",
1026
+ "\n",
1027
+ " result_path = save_image(new_img)\n",
1028
+ " result_base64 = encode_image(result_path)\n",
1029
+ " return {\"combined_image\": result_base64}\n",
1030
+ "\n",
1031
+ " except Exception as e:\n",
1032
+ " return {\"error\": str(e)}\n"
1033
+ ]
1034
+ },
1035
+ {
1036
+ "cell_type": "markdown",
1037
+ "id": "cb966ca4-1ccf-4a14-8c7c-960b1d8e1c55",
1038
+ "metadata": {},
1039
+ "source": [
1040
+ "## AUDIO PROCESSING"
1041
+ ]
1042
+ },
1043
+ {
1044
+ "cell_type": "code",
1045
+ "execution_count": null,
1046
+ "id": "9b05ce05-a577-4473-bb05-0d58602f71c2",
1047
+ "metadata": {},
1048
+ "outputs": [],
1049
+ "source": []
1050
+ },
1051
+ {
1052
+ "cell_type": "markdown",
1053
+ "id": "57a4fcb2-59ae-44d6-9d6a-ea1ab5acae0f",
1054
+ "metadata": {},
1055
+ "source": [
1056
+ "# AGENT"
1057
+ ]
1058
+ },
1059
+ {
1060
+ "cell_type": "markdown",
1061
+ "id": "32b57eeb-4260-43bd-9898-2edbe3be1281",
1062
+ "metadata": {},
1063
+ "source": [
1064
+ "The Agent is designed using LangGraph which is a production-ready framework deveoped by LangChain. The control flow of the agent is designed using a directed graph structure to move a state object from node to node through decision edges. It simplifies the design of even complex Agent application by relying on simple components that all work together."
1065
+ ]
1066
+ },
1067
+ {
1068
+ "cell_type": "markdown",
1069
+ "id": "e6b7a8d7-e174-42a1-8bfb-9407bfd3c518",
1070
+ "metadata": {},
1071
+ "source": [
1072
+ "## RETRIEVER"
1073
+ ]
1074
+ },
1075
+ {
1076
+ "cell_type": "code",
1077
+ "execution_count": null,
1078
+ "id": "7e391c4c-d019-4baf-bf53-fe24244cac0c",
1079
+ "metadata": {},
1080
+ "outputs": [],
1081
+ "source": [
1082
+ "# build a retriever\n",
1083
+ "embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\") # set the model to generate embeddings; dim=768\n",
1084
+ "supabase, Client = create_client(os.environ.get(\"SUPABASE_URL\"), os.environ.get(\"SUPABASE_SERVICE_KEY\"))\n",
1085
+ "vector_store = SupabaseVectorStore(client=supabase, embedding= embeddings, table_name=\"documents\", query_name=\"match_documents_langchain\")\n",
1086
+ "create_retriever_tool = create_retriever_tool(retriever=vector_store.as_retriever(), name=\"Question Retriever\", description=\"Retrieve similar questions from a vector store.\")"
1087
+ ]
1088
+ },
1089
+ {
1090
+ "cell_type": "markdown",
1091
+ "id": "26dd5168-8602-4a70-ae77-6ed834a762f9",
1092
+ "metadata": {},
1093
+ "source": [
1094
+ "## PROMPTS"
1095
+ ]
1096
+ },
1097
+ {
1098
+ "cell_type": "code",
1099
+ "execution_count": null,
1100
+ "id": "b6682591-6e1c-4c99-b0ae-b8eb0db470d1",
1101
+ "metadata": {},
1102
+ "outputs": [],
1103
+ "source": [
1104
+ "# load the system prompt from the file\n",
1105
+ "with open(\"../prompts/system_prompt.txt\", \"r\", encoding=\"utf-8\") as f:\n",
1106
+ " system_prompt = f.read()\n",
1107
+ "print(f'SYSTEM PROMPT:\\n{system_prompt}')\n",
1108
+ "\n",
1109
+ "# System message\n",
1110
+ "sys_msg = SystemMessage(content=system_prompt)"
1111
+ ]
1112
+ },
1113
+ {
1114
+ "cell_type": "markdown",
1115
+ "id": "a8f38f2a-8c90-4e2f-b59e-f49da81ed3c6",
1116
+ "metadata": {},
1117
+ "source": [
1118
+ "## TOOLS"
1119
+ ]
1120
+ },
1121
+ {
1122
+ "cell_type": "code",
1123
+ "execution_count": null,
1124
+ "id": "f60060ca-6130-4c15-a298-e289de9f6b6d",
1125
+ "metadata": {},
1126
+ "outputs": [],
1127
+ "source": [
1128
+ "# list all agent tools\n",
1129
+ "tools = [web_search, wiki_search, similar_question_search, arxiv_search, multiply, add, subtract, divide, modulus, power, square_root, count_substring, save_and_read_file, download_file_from_url, extract_text_from_image, analyze_csv_file, analyze_excel_file, execute_code_multilang, analyze_image, transform_image, draw_on_image, generate_simple_image, combine_images]"
1130
+ ]
1131
+ },
1132
+ {
1133
+ "cell_type": "code",
1134
+ "execution_count": null,
1135
+ "id": "46cf257e-7cd1-4bb4-8b86-6f9a4f4f74f8",
1136
+ "metadata": {},
1137
+ "outputs": [],
1138
+ "source": [
1139
+ "# Build the agent graph\n",
1140
+ "def build_graph(provider: str = \"huggingface-qwen\"):\n",
1141
+ " \"\"\"Build the LangGraph Agent\"\"\"\n",
1142
+ " # Load environment variables from .env file\n",
1143
+ " if provider == \"google\": # Google Gemini\n",
1144
+ " llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
1145
+ " elif provider == \"groq\": # Groq https://console.groq.com/docs/models\n",
1146
+ " llm = ChatGroq(model=\"qwen-qwq-32b\", temperature=0) # optional : qwen-qwq-32b gemma2-9b-it\n",
1147
+ " elif provider == \"huggingface-qwen\":\n",
1148
+ " llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id = \"Qwen/Qwen2.5-Coder-32B-Instruct\"))\n",
1149
+ " elif provider == \"huggingface-llama\":\n",
1150
+ " llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id=\"TinyLlama/TinyLlama-1.1B-Chat-v1.0\", task=\"text-generation\", max_new_tokens=1024, do_sample=False, repetition_penalty=1.03, temperature=0), verbose=True)\n",
1151
+ " else:\n",
1152
+ " raise ValueError(\"Invalid provider. Choose 'google', 'groq', 'huggingface-qwen' or 'huggingface-llama'.\")\n",
1153
+ " \n",
1154
+ " llm_with_tools = llm.bind_tools(tools) # Bind tools to LLM\n",
1155
+ "\n",
1156
+ " # Node\n",
1157
+ " def assistant(state: MessagesState):\n",
1158
+ " \"\"\"Assistant node\"\"\"\n",
1159
+ " return {\"messages\": [llm_with_tools.invoke(state[\"messages\"])]}\n",
1160
+ " \n",
1161
+ " def retriever(state: MessagesState):\n",
1162
+ " \"\"\"Retriever node\"\"\"\n",
1163
+ " similar_question = vector_store.similarity_search(state[\"messages\"][0].content)\n",
1164
+ " example_msg = HumanMessage(content=f\"Here I provide a similar question and answer for reference: \\n\\n{similar_question[0].page_content}\")\n",
1165
+ " return {\"messages\": [sys_msg] + state[\"messages\"] + [example_msg]}\n",
1166
+ "\n",
1167
+ " # create nodes - decision points\n",
1168
+ " builder = StateGraph(MessagesState)\n",
1169
+ " builder.add_node(\"retriever\", retriever) \n",
1170
+ " builder.add_node(\"assistant\", assistant)\n",
1171
+ " builder.add_node(\"tools\", ToolNode(tools)) # equip the agents with the list of tools\n",
1172
+ "\n",
1173
+ " # connect nodes - control flow\n",
1174
+ " builder.add_edge(START, \"retriever\")\n",
1175
+ " builder.add_edge(\"retriever\", \"assistant\")\n",
1176
+ " builder.add_conditional_edges(\"assistant\", tools_condition)\n",
1177
+ " builder.add_edge(\"tools\", \"assistant\")\n",
1178
+ "\n",
1179
+ " # Compile graph\n",
1180
+ " return builder.compile()"
1181
+ ]
1182
+ },
1183
+ {
1184
+ "cell_type": "markdown",
1185
+ "id": "5d5a4d4d-a497-4fa9-acb1-89b6967964ea",
1186
+ "metadata": {},
1187
+ "source": [
1188
+ "# APP INTERGRATION"
1189
+ ]
1190
+ },
1191
+ {
1192
+ "cell_type": "markdown",
1193
+ "id": "c1b877bf-5e62-4a01-9e71-17915de09dfd",
1194
+ "metadata": {},
1195
+ "source": [
1196
+ "To integrate the Agent solution into the submission API, solutions to covered GAIA questions will be generated prior to submission and stored in a database.\n",
1197
+ "The Agent will then have to retrieve the answer to the actual questions thrown at it during the assessment from its solution bank.\n",
1198
+ "All tools and related artefacts for the Agent will also be made public in the project folder to meet credibility requirements for the course assessment.\n",
1199
+ "\n",
1200
+ "Integration Changes:\n",
1201
+ " - include scripts to house the tools in a dedicated folder within the project\n",
1202
+ " - include scripts defining the agent in a dedicated folder within the project\n",
1203
+ " - include a text file with the system prompt guiding the agent in a dedicated folder\n",
1204
+ " - ensure the agent and tools directory are recognized as packages\n",
1205
+ " - modify the app.py script to load the updated agent class\n",
1206
+ " - update the readme file\n",
1207
+ " - update the requirements file\n",
1208
+ " - include the jupyter notebook"
1209
+ ]
1210
+ },
1211
+ {
1212
+ "cell_type": "code",
1213
+ "execution_count": null,
1214
+ "id": "f4dcdae5-a0cf-4b53-b3da-2511651ecf2b",
1215
+ "metadata": {},
1216
+ "outputs": [],
1217
+ "source": [
1218
+ "# testing\n",
1219
+ "if __name__ == \"__main__\":\n",
1220
+ " question = \"When was a picture of St. Thomas Aquinas first added to the Wikipedia page on the Principle of double effect?\"\n",
1221
+ " graph = build_graph(provider=\"huggingface-llama\")\n",
1222
+ " messages = [HumanMessage(content=question)]\n",
1223
+ " messages = graph.invoke({\"messages\": messages})\n",
1224
+ " for m in messages[\"messages\"]:\n",
1225
+ " m.pretty_print()"
1226
+ ]
1227
+ },
1228
+ {
1229
+ "cell_type": "code",
1230
+ "execution_count": null,
1231
+ "id": "6141142d-3502-4db3-a578-a06586a025af",
1232
+ "metadata": {},
1233
+ "outputs": [],
1234
+ "source": [
1235
+ "class GAIAAgent:\n",
1236
+ " \"\"\"A langgraph agent for attempting the GAIA benchmark.\"\"\"\n",
1237
+ " def __init__(self):\n",
1238
+ " print(\"Agent initialized.\")\n",
1239
+ " self.graph = build_graph() # instantiate the Agent\n",
1240
+ "\n",
1241
+ " def __call__(self, question: str) -> str:\n",
1242
+ " print(f\"Agent received question (first 50 chars): {question[:50]}...\")\n",
1243
+ " messages = [HumanMessage(content=question)]\n",
1244
+ " result = self.graph.invoke({\"messages\": messages})\n",
1245
+ " answer = result['messages'][-1].content # retrieve solution similar to the current question from prepared dump\n",
1246
+ " return answer # submit"
1247
+ ]
1248
+ }
1249
+ ],
1250
+ "metadata": {
1251
+ "kernelspec": {
1252
+ "display_name": "Python 3 (ipykernel)",
1253
+ "language": "python",
1254
+ "name": "python3"
1255
+ },
1256
+ "language_info": {
1257
+ "codemirror_mode": {
1258
+ "name": "ipython",
1259
+ "version": 3
1260
+ },
1261
+ "file_extension": ".py",
1262
+ "mimetype": "text/x-python",
1263
+ "name": "python",
1264
+ "nbconvert_exporter": "python",
1265
+ "pygments_lexer": "ipython3",
1266
+ "version": "3.12.7"
1267
+ },
1268
+ "widgets": {
1269
+ "application/vnd.jupyter.widget-state+json": {
1270
+ "state": {},
1271
+ "version_major": 2,
1272
+ "version_minor": 0
1273
+ }
1274
+ }
1275
+ },
1276
+ "nbformat": 4,
1277
+ "nbformat_minor": 5
1278
+ }
notebook/notebook.ipynb ADDED
@@ -0,0 +1,1278 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "ad1eb5e0-f01c-4cb2-9c33-8a21e8d4a367",
6
+ "metadata": {},
7
+ "source": [
8
+ "# TASK"
9
+ ]
10
+ },
11
+ {
12
+ "cell_type": "markdown",
13
+ "id": "25b53d6a-da20-4b9f-880e-7b308805efcb",
14
+ "metadata": {},
15
+ "source": [
16
+ "The task is to utilize knowledge from the [**HuggingFace Agents Course**](https://huggingface.co/learn/agents-course/) to implement an agent capable of tackling the GAIA questions.\n",
17
+ "\n",
18
+ "[**GAIA**](https://huggingface.co/papers/2311.12983) is a benchmark designed to evaluate AI Agents on reasoning, multimodal understanding, web browsing, tool-use capabilities.\n",
19
+ "It features a collection of questions posing real-world difficulty easy human interpretability, brute-force resistance, and easy evaluation.\n",
20
+ "Questions are organized into three levels of difficulty where level 1 questionsrequire minimal tool use and planning steps while level 3 tasks on the far end demand advanced tool-use and deeply involved planning.\n",
21
+ "The course samples 20 questions from the level 1 group and sets a pass criteria of 30% correct answers as criteria for passing the assessment."
22
+ ]
23
+ },
24
+ {
25
+ "cell_type": "markdown",
26
+ "id": "40492b8a-f87d-4072-9723-33d9f9a64312",
27
+ "metadata": {},
28
+ "source": [
29
+ "# GOALS\n",
30
+ "- Implement an Agent using the LangGraph Framework\n",
31
+ "- Setup API Keys for access to external tools\n",
32
+ "- Design tools to help the agent tackle the problem\n",
33
+ "- Create the Agent\n",
34
+ "- Intergrate the agent into the submission app"
35
+ ]
36
+ },
37
+ {
38
+ "cell_type": "markdown",
39
+ "id": "b8b893ac-49ec-44ef-bc90-2abb134df094",
40
+ "metadata": {},
41
+ "source": [
42
+ "# IMPORTS"
43
+ ]
44
+ },
45
+ {
46
+ "cell_type": "code",
47
+ "execution_count": null,
48
+ "id": "75727846-da05-4b15-a9ad-fbd4497757d4",
49
+ "metadata": {},
50
+ "outputs": [],
51
+ "source": [
52
+ "import os\n",
53
+ "from dotenv import load_dotenv\n",
54
+ "from langgraph.graph import START, StateGraph, MessagesState\n",
55
+ "from langgraph.prebuilt import tools_condition\n",
56
+ "from langgraph.prebuilt import ToolNode\n",
57
+ "from langchain_google_genai import ChatGoogleGenerativeAI\n",
58
+ "from langchain_groq import ChatGroq\n",
59
+ "from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint, HuggingFaceEmbeddings\n",
60
+ "from langchain_community.tools.tavily_search import TavilySearchResults\n",
61
+ "from langchain_community.document_loaders import WikipediaLoader\n",
62
+ "from langchain_community.document_loaders import ArxivLoader\n",
63
+ "from langchain_community.vectorstores import SupabaseVectorStore\n",
64
+ "from langchain_core.messages import SystemMessage, HumanMessage\n",
65
+ "from langchain_core.tools import tool\n",
66
+ "from langchain.tools.retriever import create_retriever_tool\n",
67
+ "from supabase.client import Client, create_client"
68
+ ]
69
+ },
70
+ {
71
+ "cell_type": "markdown",
72
+ "id": "30165673-65d8-46e2-a5db-1a357c30d09f",
73
+ "metadata": {},
74
+ "source": [
75
+ "# API KEYS"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "raw",
80
+ "id": "50a3f698-56df-4ea6-a236-29ed7fadac7d",
81
+ "metadata": {},
82
+ "source": [
83
+ "SUPABASE_URL\n",
84
+ "SUPABASE_SERVICE_KEY\n",
85
+ "SUPABASE_SERVICE_ROLE_KEY\n",
86
+ "HF_TOKEN"
87
+ ]
88
+ },
89
+ {
90
+ "cell_type": "code",
91
+ "execution_count": null,
92
+ "id": "dd06e727-2073-406b-b76a-876f4a1bf96a",
93
+ "metadata": {},
94
+ "outputs": [],
95
+ "source": [
96
+ "load_dotenv()"
97
+ ]
98
+ },
99
+ {
100
+ "cell_type": "markdown",
101
+ "id": "c8f3a1f0-5f0c-4560-af5e-b2a8edb79aef",
102
+ "metadata": {},
103
+ "source": [
104
+ "# TOOLS"
105
+ ]
106
+ },
107
+ {
108
+ "cell_type": "markdown",
109
+ "id": "44868006-ce9a-4d3d-b1b8-2d7f2261c3b6",
110
+ "metadata": {},
111
+ "source": [
112
+ "Difficulty in the GAIA benchmark extends beyonds just reasoning. Various questions require extracting information from accompanying files of various modalities. To ensure the Agent is up to the task, utility functions need to be pre-built and made available to the Agent. This reduces complexity and introduces some reliability in conducting similar tasks in a reprodicible way. Such tools also account for known LLM shortfalls and extend the capabilities of the LLM with targeted functionalities."
113
+ ]
114
+ },
115
+ {
116
+ "cell_type": "code",
117
+ "execution_count": null,
118
+ "id": "e697c83f-3e87-48a5-a142-3caace4c85d8",
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "# load the system prompt from the file\n",
123
+ "with open(\"../prompts/system_prompt.txt\", \"r\", encoding=\"utf-8\") as f:\n",
124
+ " system_prompt = f.read()\n",
125
+ "print(system_prompt)\n",
126
+ "\n",
127
+ "# System message\n",
128
+ "sys_msg = SystemMessage(content=system_prompt)\n",
129
+ "\n",
130
+ "# build a retriever\n",
131
+ "embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\") # dim=768\n",
132
+ "supabase: Client = create_client(os.environ.get(\"SUPABASE_URL\"), os.environ.get(\"SUPABASE_SERVICE_ROLE_KEY\"))\n",
133
+ "vector_store = SupabaseVectorStore(client=supabase, embedding=embeddings, table_name=\"documents2\", query_name=\"match_documents_2\")\n",
134
+ "create_retriever_tool = create_retriever_tool(retriever=vector_store.as_retriever(), name=\"Question Search\", description=\"A tool to retrieve similar questions from a vector store.\")"
135
+ ]
136
+ },
137
+ {
138
+ "cell_type": "markdown",
139
+ "id": "72fbbd0d-c9a9-4af8-9010-739d035e3c24",
140
+ "metadata": {},
141
+ "source": [
142
+ "## WEB SEARCH"
143
+ ]
144
+ },
145
+ {
146
+ "cell_type": "code",
147
+ "execution_count": null,
148
+ "id": "ddac5160-1a67-46e1-8a43-1e341381e1b7",
149
+ "metadata": {},
150
+ "outputs": [],
151
+ "source": [
152
+ "# web search\n",
153
+ "import os\n",
154
+ "from supabase.client import Client, create_client\n",
155
+ "from langchain_core.tools import tool\n",
156
+ "from langchain_community.tools.tavily_search import TavilySearchResults\n",
157
+ "from langchain_community.document_loaders import WikipediaLoader\n",
158
+ "from langchain_community.document_loaders import ArxivLoader\n",
159
+ "from langchain_huggingface import HuggingFaceEmbeddings\n",
160
+ "from langchain_community.vectorstores import SupabaseVectorStore\n",
161
+ "from langchain.tools.retriever import create_retriever_tool"
162
+ ]
163
+ },
164
+ {
165
+ "cell_type": "code",
166
+ "execution_count": null,
167
+ "id": "7106a566-af9f-43c4-8a97-b594ae3592e4",
168
+ "metadata": {},
169
+ "outputs": [],
170
+ "source": [
171
+ "@tool\n",
172
+ "def wiki_search(query: str) -> str:\n",
173
+ " \"\"\"Search Wikipedia for a query and return maximum 2 results.\n",
174
+ " \n",
175
+ " Args:\n",
176
+ " query: The search query.\"\"\"\n",
177
+ " search_docs = WikipediaLoader(query=query, load_max_docs=2).load()\n",
178
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content}\\n</Document>' for doc in search_docs])\n",
179
+ " return {\"wiki_results\": formatted_search_docs}\n",
180
+ "\n",
181
+ "@tool\n",
182
+ "def web_search(query: str) -> str:\n",
183
+ " \"\"\"Search Tavily for a query and return maximum 3 results.\n",
184
+ " \n",
185
+ " Args:\n",
186
+ " query: The search query.\"\"\"\n",
187
+ " search_docs = TavilySearchResults(max_results=3).invoke(query=query)\n",
188
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content}\\n</Document>' for doc in search_docs])\n",
189
+ " return {\"web_results\": formatted_search_docs}\n",
190
+ "\n",
191
+ "@tool\n",
192
+ "def arvix_search(query: str) -> str:\n",
193
+ " \"\"\"Search Arxiv for a query and return maximum 3 result.\n",
194
+ " \n",
195
+ " Args:\n",
196
+ " query: The search query.\"\"\"\n",
197
+ " search_docs = ArxivLoader(query=query, load_max_docs=3).load()\n",
198
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content[:1000]}\\n</Document>' for doc in search_docs])\n",
199
+ " return {\"arvix_results\": formatted_search_docs}\n",
200
+ "\n",
201
+ "@tool\n",
202
+ "def similar_question_search(question: str) -> str:\n",
203
+ " \"\"\"Search the vector database for similar questions and return the first results.\n",
204
+ " \n",
205
+ " Args:\n",
206
+ " question: the question human provided.\"\"\"\n",
207
+ " matched_docs = vector_store.similarity_search(question, 3)\n",
208
+ " formatted_search_docs = \"\\n\\n---\\n\\n\".join([f'<Document source=\"{doc.metadata[\"source\"]}\" page=\"{doc.metadata.get(\"page\", \"\")}\"/>\\n{doc.page_content[:1000]}\\n</Document>' for doc in matched_docs])\n",
209
+ " return {\"similar_questions\": formatted_search_docs}"
210
+ ]
211
+ },
212
+ {
213
+ "cell_type": "markdown",
214
+ "id": "5884084b-5983-4abc-b0ee-6907923077f3",
215
+ "metadata": {},
216
+ "source": [
217
+ "## BASIC CALCULATOR"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "code",
222
+ "execution_count": null,
223
+ "id": "d25dc2fe-abcf-4b09-9469-6428b604d620",
224
+ "metadata": {},
225
+ "outputs": [],
226
+ "source": [
227
+ "# basic calculator\n",
228
+ "from langchain_core.tools import tool"
229
+ ]
230
+ },
231
+ {
232
+ "cell_type": "code",
233
+ "execution_count": null,
234
+ "id": "0d5ec778-4e12-4309-bf93-3ca20a155fca",
235
+ "metadata": {},
236
+ "outputs": [],
237
+ "source": [
238
+ "@tool\n",
239
+ "def multiply(a: float, b: float) -> float:\n",
240
+ " \"\"\"\n",
241
+ " Multiplies two numbers.\n",
242
+ " Args:\n",
243
+ " a (float): the first number\n",
244
+ " b (float): the second number\n",
245
+ " \"\"\"\n",
246
+ " return a * b\n",
247
+ "\n",
248
+ "@tool\n",
249
+ "def add(a: float, b: float) -> float:\n",
250
+ " \"\"\"\n",
251
+ " Adds two numbers.\n",
252
+ " Args:\n",
253
+ " a (float): the first number\n",
254
+ " b (float): the second number\n",
255
+ " \"\"\"\n",
256
+ " return a + b\n",
257
+ "\n",
258
+ "@tool\n",
259
+ "def subtract(a: float, b: float) -> int:\n",
260
+ " \"\"\"\n",
261
+ " Subtracts two numbers.\n",
262
+ " Args:\n",
263
+ " a (float): the first number\n",
264
+ " b (float): the second number\n",
265
+ " \"\"\"\n",
266
+ " return a - b\n",
267
+ "\n",
268
+ "@tool\n",
269
+ "def divide(a: float, b: float) -> float:\n",
270
+ " \"\"\"\n",
271
+ " Divides two numbers.\n",
272
+ " Args:\n",
273
+ " a (float): the first float number\n",
274
+ " b (float): the second float number\n",
275
+ " \"\"\"\n",
276
+ " if b == 0:\n",
277
+ " raise ValueError(\"Cannot divided by zero.\")\n",
278
+ " return a / b\n",
279
+ "\n",
280
+ "@tool\n",
281
+ "def modulus(a: int, b: int) -> int:\n",
282
+ " \"\"\"\n",
283
+ " Get the modulus of two numbers.\n",
284
+ " Args:\n",
285
+ " a (int): the first number\n",
286
+ " b (int): the second number\n",
287
+ " \"\"\"\n",
288
+ " return a % b\n",
289
+ "\n",
290
+ "@tool\n",
291
+ "def power(a: float, b: float) -> float:\n",
292
+ " \"\"\"\n",
293
+ " Get the power of two numbers.\n",
294
+ " Args:\n",
295
+ " a (float): the first number\n",
296
+ " b (float): the second number\n",
297
+ " \"\"\"\n",
298
+ " return a**b\n",
299
+ "\n",
300
+ "@tool\n",
301
+ "def square_root(a: float) -> float | complex:\n",
302
+ " \"\"\"\n",
303
+ " Get the square root of a number.\n",
304
+ " Args:\n",
305
+ " a (float): the number to get the square root of\n",
306
+ " \"\"\"\n",
307
+ " if a >= 0:\n",
308
+ " return a**0.5\n",
309
+ " return cmath.sqrt(a)\n",
310
+ "\n",
311
+ "@tool\n",
312
+ "def count_substring(substring:str, text:str) -> int:\n",
313
+ " \"\"\"\n",
314
+ " Get the number of occurences of a substring within some text. Useful for 'How many (substring) are in (text)?'\n",
315
+ " Args:\n",
316
+ " substring (str): the substring to check for.\n",
317
+ " text (str): the text to search through.\n",
318
+ " \"\"\"\n",
319
+ " return text.count(substring)"
320
+ ]
321
+ },
322
+ {
323
+ "cell_type": "markdown",
324
+ "id": "9d4c473f-8523-431a-80c4-fc16618d7c86",
325
+ "metadata": {},
326
+ "source": [
327
+ "## CODE INTERPRETER"
328
+ ]
329
+ },
330
+ {
331
+ "cell_type": "code",
332
+ "execution_count": null,
333
+ "id": "5c3a5072-b1aa-4489-96d4-08d0d925ebfd",
334
+ "metadata": {},
335
+ "outputs": [],
336
+ "source": [
337
+ "# code interpreter\n",
338
+ "import os\n",
339
+ "import io\n",
340
+ "import sys\n",
341
+ "import uuid\n",
342
+ "import base64\n",
343
+ "import traceback\n",
344
+ "import contextlib\n",
345
+ "import tempfile\n",
346
+ "import subprocess\n",
347
+ "import sqlite3\n",
348
+ "from typing import Dict, List, Any, Optional, Union\n",
349
+ "import numpy as np\n",
350
+ "import pandas as pd\n",
351
+ "import matplotlib.pyplot as plt\n",
352
+ "from PIL import Image\n",
353
+ "from langchain_core.tools import tool"
354
+ ]
355
+ },
356
+ {
357
+ "cell_type": "code",
358
+ "execution_count": null,
359
+ "id": "43f314d3-f7b7-4bf1-9500-0c0b8f234412",
360
+ "metadata": {},
361
+ "outputs": [],
362
+ "source": [
363
+ "class CodeInterpreter:\n",
364
+ " def __init__(self, allowed_modules=None, max_execution_time=30, working_directory=None):\n",
365
+ " \"\"\"Initialize the code interpreter with safety measures.\"\"\"\n",
366
+ " \n",
367
+ " self.allowed_modules = allowed_modules or [\"numpy\", \"pandas\", \"matplotlib\", \"scipy\", \"sklearn\", \"math\", \"random\", \"statistics\", \"datetime\", \"collections\",\n",
368
+ " \"itertools\", \"functools\", \"operator\", \"re\", \"json\", \"sympy\", \"networkx\", \"nltk\", \"PIL\", \"pytesseract\", \"cmath\", \"uuid\", \"tempfile\", \"requests\", \"urllib\"]\n",
369
+ " \n",
370
+ " self.max_execution_time = max_execution_time\n",
371
+ " self.working_directory = working_directory or os.path.join(os.getcwd()) \n",
372
+ " if not os.path.exists(self.working_directory):\n",
373
+ " os.makedirs(self.working_directory)\n",
374
+ " \n",
375
+ " self.globals = {\"__builtins__\": __builtins__, \"np\": np, \"pd\": pd, \"plt\": plt, \"Image\": Image}\n",
376
+ " self.temp_sqlite_db = os.path.join(tempfile.gettempdir(), \"code_exec.db\")\n",
377
+ "\n",
378
+ " def execute_code(self, code: str, language: str = \"python\") -> Dict[str, Any]:\n",
379
+ " \"\"\"Execute the provided code in the selected programming language.\"\"\"\n",
380
+ " language = language.lower()\n",
381
+ " execution_id = str(uuid.uuid4())\n",
382
+ " \n",
383
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
384
+ " \n",
385
+ " try:\n",
386
+ " if language == \"python\":\n",
387
+ " return self._execute_python(code, execution_id)\n",
388
+ " elif language == \"bash\":\n",
389
+ " return self._execute_bash(code, execution_id)\n",
390
+ " elif language == \"sql\":\n",
391
+ " return self._execute_sql(code, execution_id)\n",
392
+ " elif language == \"c\":\n",
393
+ " return self._execute_c(code, execution_id)\n",
394
+ " elif language == \"java\":\n",
395
+ " return self._execute_java(code, execution_id)\n",
396
+ " else:\n",
397
+ " result[\"stderr\"] = f\"Unsupported language: {language}\"\n",
398
+ " except Exception as e:\n",
399
+ " result[\"stderr\"] = str(e)\n",
400
+ " \n",
401
+ " return result\n",
402
+ "\n",
403
+ " def _execute_python(self, code: str, execution_id: str) -> dict:\n",
404
+ " output_buffer = io.StringIO()\n",
405
+ " error_buffer = io.StringIO()\n",
406
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
407
+ " \n",
408
+ " try:\n",
409
+ " exec_dir = os.path.join(self.working_directory, execution_id)\n",
410
+ " os.makedirs(exec_dir, exist_ok=True)\n",
411
+ " plt.switch_backend('Agg')\n",
412
+ " \n",
413
+ " with contextlib.redirect_stdout(output_buffer), contextlib.redirect_stderr(error_buffer):\n",
414
+ " exec_result = exec(code, self.globals)\n",
415
+ "\n",
416
+ " if plt.get_fignums():\n",
417
+ " for i, fig_num in enumerate(plt.get_fignums()):\n",
418
+ " fig = plt.figure(fig_num)\n",
419
+ " img_path = os.path.join(exec_dir, f\"plot_{i}.png\")\n",
420
+ " fig.savefig(img_path)\n",
421
+ " with open(img_path, \"rb\") as img_file:\n",
422
+ " img_data = base64.b64encode(img_file.read()).decode('utf-8')\n",
423
+ " result[\"plots\"].append({\"figure_number\": fig_num, \"data\": img_data})\n",
424
+ "\n",
425
+ " for var_name, var_value in self.globals.items():\n",
426
+ " if isinstance(var_value, pd.DataFrame) and len(var_value) > 0:\n",
427
+ " result[\"dataframes\"].append({\"name\": var_name, \"head\": var_value.head().to_dict(), \"shape\": var_value.shape, \"dtypes\": str(var_value.dtypes)})\n",
428
+ " \n",
429
+ " result[\"status\"] = \"success\"\n",
430
+ " result[\"stdout\"] = output_buffer.getvalue()\n",
431
+ " result[\"result\"] = exec_result\n",
432
+ " \n",
433
+ " except Exception as e:\n",
434
+ " result[\"status\"] = \"error\"\n",
435
+ " result[\"stderr\"] = f\"{error_buffer.getvalue()}\\n{traceback.format_exc()}\"\n",
436
+ " \n",
437
+ " return result\n",
438
+ "\n",
439
+ " def _execute_bash(self, code: str, execution_id: str) -> dict:\n",
440
+ " try:\n",
441
+ " completed = subprocess.run(code, shell=True, capture_output=True, text=True, timeout=self.max_execution_time)\n",
442
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if completed.returncode == 0 else \"error\", \"stdout\": completed.stdout, \"stderr\": completed.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
443
+ " except subprocess.TimeoutExpired:\n",
444
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"Execution timed out.\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
445
+ "\n",
446
+ " def _execute_sql(self, code: str, execution_id: str) -> dict:\n",
447
+ " result = {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": \"\", \"result\": None, \"plots\": [], \"dataframes\": []}\n",
448
+ " try:\n",
449
+ " conn = sqlite3.connect(self.temp_sqlite_db)\n",
450
+ " cur = conn.cursor()\n",
451
+ " cur.execute(code)\n",
452
+ " if code.strip().lower().startswith(\"select\"):\n",
453
+ " columns = [description[0] for description in cur.description]\n",
454
+ " rows = cur.fetchall()\n",
455
+ " df = pd.DataFrame(rows, columns=columns)\n",
456
+ " result[\"dataframes\"].append({\"name\": \"query_result\", \"head\": df.head().to_dict(), \"shape\": df.shape, \"dtypes\": str(df.dtypes)})\n",
457
+ " else:\n",
458
+ " conn.commit()\n",
459
+ " result[\"status\"] = \"success\"\n",
460
+ " result[\"stdout\"] = \"Query executed successfully.\"\n",
461
+ "\n",
462
+ " except Exception as e:\n",
463
+ " result[\"stderr\"] = str(e)\n",
464
+ " finally:\n",
465
+ " conn.close()\n",
466
+ "\n",
467
+ " return result\n",
468
+ "\n",
469
+ " def _execute_c(self, code: str, execution_id: str) -> dict:\n",
470
+ " temp_dir = tempfile.mkdtemp()\n",
471
+ " source_path = os.path.join(temp_dir, \"program.c\")\n",
472
+ " binary_path = os.path.join(temp_dir, \"program\")\n",
473
+ "\n",
474
+ " try:\n",
475
+ " with open(source_path, \"w\") as f:\n",
476
+ " f.write(code)\n",
477
+ "\n",
478
+ " compile_proc = subprocess.run([\"gcc\", source_path, \"-o\", binary_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
479
+ " if compile_proc.returncode != 0:\n",
480
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": compile_proc.stdout, \"stderr\": compile_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
481
+ "\n",
482
+ " run_proc = subprocess.run([binary_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
483
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if run_proc.returncode == 0 else \"error\", \"stdout\": run_proc.stdout, \"stderr\": run_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
484
+ " except Exception as e: return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": str(e), \"result\": None, \"plots\": [], \"dataframes\": []}\n",
485
+ "\n",
486
+ " def _execute_java(self, code: str, execution_id: str) -> dict:\n",
487
+ " temp_dir = tempfile.mkdtemp()\n",
488
+ " source_path = os.path.join(temp_dir, \"Main.java\")\n",
489
+ "\n",
490
+ " try:\n",
491
+ " with open(source_path, \"w\") as f:\n",
492
+ " f.write(code)\n",
493
+ "\n",
494
+ " compile_proc = subprocess.run([\"javac\", source_path], capture_output=True, text=True, timeout=self.max_execution_time)\n",
495
+ " if compile_proc.returncode != 0:\n",
496
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": compile_proc.stdout, \"stderr\": compile_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
497
+ "\n",
498
+ " run_proc = subprocess.run([\"java\", \"-cp\", temp_dir, \"Main\"], capture_output=True, text=True, timeout=self.max_execution_time)\n",
499
+ " return {\"execution_id\": execution_id, \"status\": \"success\" if run_proc.returncode == 0 else \"error\", \"stdout\": run_proc.stdout, \"stderr\": run_proc.stderr, \"result\": None, \"plots\": [], \"dataframes\": []}\n",
500
+ " except Exception as e:\n",
501
+ " return {\"execution_id\": execution_id, \"status\": \"error\", \"stdout\": \"\", \"stderr\": str(e), \"result\": None, \"plots\": [], \"dataframes\": []}\n",
502
+ "\n",
503
+ "interpreter_instance = CodeInterpreter()\n",
504
+ "\n",
505
+ "@tool\n",
506
+ "def execute_code_multilang(code: str, language: str = \"python\") -> str:\n",
507
+ " \"\"\"Execute code in multiple languages (Python, Bash, SQL, C, Java) and return results.\n",
508
+ " Args:\n",
509
+ " code (str): The source code to execute.\n",
510
+ " language (str): The language of the code. Supported: \"python\", \"bash\", \"sql\", \"c\", \"java\".\n",
511
+ " Returns:\n",
512
+ " A string summarizing the execution results (stdout, stderr, errors, plots, dataframes if any).\n",
513
+ " \"\"\"\n",
514
+ " supported_languages = [\"python\", \"bash\", \"sql\", \"c\", \"java\"]\n",
515
+ " language = language.lower()\n",
516
+ "\n",
517
+ " if language not in supported_languages:\n",
518
+ " return f\"❌ Unsupported language: {language}. Supported languages are: {', '.join(supported_languages)}\"\n",
519
+ "\n",
520
+ " result = interpreter_instance.execute_code(code, language=language)\n",
521
+ "\n",
522
+ " response = []\n",
523
+ "\n",
524
+ " if result[\"status\"] == \"success\":\n",
525
+ " response.append(f\"βœ… Code executed successfully in **{language.upper()}**\")\n",
526
+ "\n",
527
+ " if result.get(\"stdout\"):\n",
528
+ " response.append(\"\\n**Standard Output:**\\n```\\n\" + result[\"stdout\"].strip() + \"\\n```\")\n",
529
+ "\n",
530
+ " if result.get(\"stderr\"):\n",
531
+ " response.append(\n",
532
+ " \"\\n**Standard Error (if any):**\\n```\\n\"\n",
533
+ " + result[\"stderr\"].strip() + \"\\n```\")\n",
534
+ "\n",
535
+ " if result.get(\"result\") is not None:\n",
536
+ " response.append(\n",
537
+ " \"\\n**Execution Result:**\\n```\\n\"\n",
538
+ " + str(result[\"result\"]).strip() + \"\\n```\")\n",
539
+ "\n",
540
+ " if result.get(\"dataframes\"):\n",
541
+ " for df_info in result[\"dataframes\"]:\n",
542
+ " response.append(f\"\\n**DataFrame `{df_info['name']}` (Shape: {df_info['shape']})**\")\n",
543
+ " df_preview = pd.DataFrame(df_info[\"head\"])\n",
544
+ " response.append(\"First 5 rows:\\n```\\n\" + str(df_preview) + \"\\n```\")\n",
545
+ "\n",
546
+ " if result.get(\"plots\"):\n",
547
+ " response.append(f\"\\n**Generated {len(result['plots'])} plot(s)** (Image data returned separately)\")\n",
548
+ "\n",
549
+ " else:\n",
550
+ " response.append(f\"❌ Code execution failed in **{language.upper()}**\")\n",
551
+ " if result.get(\"stderr\"):\n",
552
+ " response.append(\"\\n**Error Log:**\\n```\\n\" + result[\"stderr\"].strip() + \"\\n```\")\n",
553
+ "\n",
554
+ " return \"\\n\".join(response)"
555
+ ]
556
+ },
557
+ {
558
+ "cell_type": "markdown",
559
+ "id": "c02491df-6943-4dcc-b477-4c876d6b200c",
560
+ "metadata": {},
561
+ "source": [
562
+ "## DOCUMENT PROCESSING"
563
+ ]
564
+ },
565
+ {
566
+ "cell_type": "code",
567
+ "execution_count": null,
568
+ "id": "cf052d13-a91a-4271-9a37-358bd34d712b",
569
+ "metadata": {},
570
+ "outputs": [],
571
+ "source": [
572
+ "# document processing\n",
573
+ "import os\n",
574
+ "import uuid\n",
575
+ "import requests\n",
576
+ "import tempfile\n",
577
+ "from PIL import Image\n",
578
+ "import pytesseract\n",
579
+ "import pandas as pd\n",
580
+ "from urllib.parse import urlparse\n",
581
+ "from langchain_core.tools import tool\n",
582
+ "from typing import List, Dict, Any, Optional"
583
+ ]
584
+ },
585
+ {
586
+ "cell_type": "code",
587
+ "execution_count": null,
588
+ "id": "e0cd532e-644e-4a5a-a90e-53ba66a40250",
589
+ "metadata": {},
590
+ "outputs": [],
591
+ "source": [
592
+ "@tool\n",
593
+ "def save_and_read_file(content: str, filename: Optional[str] = None) -> str:\n",
594
+ " \"\"\"\n",
595
+ " Save content to a file and return the path.\n",
596
+ " Args:\n",
597
+ " content (str): the content to save to the file\n",
598
+ " filename (str, optional): the name of the file. If not provided, a random name file will be created.\n",
599
+ " \"\"\"\n",
600
+ " temp_dir = tempfile.gettempdir()\n",
601
+ " if filename is None:\n",
602
+ " temp_file = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)\n",
603
+ " filepath = temp_file.name\n",
604
+ " else:\n",
605
+ " filepath = os.path.join(temp_dir, filename)\n",
606
+ "\n",
607
+ " with open(filepath, \"w\") as f:\n",
608
+ " f.write(content)\n",
609
+ "\n",
610
+ " return f\"File saved to {filepath}. You can read this file to process its contents.\"\n",
611
+ "\n",
612
+ "@tool\n",
613
+ "def download_file_from_url(url: str, filename: Optional[str] = None) -> str:\n",
614
+ " \"\"\"\n",
615
+ " Download a file from a URL and save it to a temporary location.\n",
616
+ " Args:\n",
617
+ " url (str): the URL of the file to download.\n",
618
+ " filename (str, optional): the name of the file. If not provided, a random name file will be created.\n",
619
+ " \"\"\"\n",
620
+ " try:\n",
621
+ " # Parse URL to get filename if not provided\n",
622
+ " if not filename:\n",
623
+ " path = urlparse(url).path\n",
624
+ " filename = os.path.basename(path)\n",
625
+ " if not filename:\n",
626
+ " filename = f\"downloaded_{uuid.uuid4().hex[:8]}\"\n",
627
+ "\n",
628
+ " # Create temporary file\n",
629
+ " temp_dir = tempfile.gettempdir()\n",
630
+ " filepath = os.path.join(temp_dir, filename)\n",
631
+ "\n",
632
+ " # Download the file\n",
633
+ " response = requests.get(url, stream=True)\n",
634
+ " response.raise_for_status()\n",
635
+ "\n",
636
+ " # Save the file\n",
637
+ " with open(filepath, \"wb\") as f:\n",
638
+ " for chunk in response.iter_content(chunk_size=8192):\n",
639
+ " f.write(chunk)\n",
640
+ "\n",
641
+ " return f\"File downloaded to {filepath}. You can read this file to process its contents.\"\n",
642
+ " except Exception as e:\n",
643
+ " return f\"Error downloading file: {str(e)}\"\n",
644
+ "\n",
645
+ "@tool\n",
646
+ "def extract_text_from_image(image_path: str) -> str:\n",
647
+ " \"\"\"\n",
648
+ " Extract text from an image using OCR library pytesseract (if available).\n",
649
+ " Args:\n",
650
+ " image_path (str): the path to the image file.\n",
651
+ " \"\"\"\n",
652
+ " try:\n",
653
+ " # Open the image\n",
654
+ " image = Image.open(image_path)\n",
655
+ "\n",
656
+ " # Extract text from the image\n",
657
+ " text = pytesseract.image_to_string(image)\n",
658
+ "\n",
659
+ " return f\"Extracted text from image:\\n\\n{text}\"\n",
660
+ " except Exception as e:\n",
661
+ " return f\"Error extracting text from image: {str(e)}\"\n",
662
+ "\n",
663
+ "@tool\n",
664
+ "def analyze_csv_file(file_path: str, query: str) -> str:\n",
665
+ " \"\"\"\n",
666
+ " Analyze a CSV file using pandas and answer a question about it.\n",
667
+ " Args:\n",
668
+ " file_path (str): the path to the CSV file.\n",
669
+ " query (str): Question about the data\n",
670
+ " \"\"\"\n",
671
+ " try:\n",
672
+ " # Read the CSV file\n",
673
+ " df = pd.read_csv(file_path)\n",
674
+ "\n",
675
+ " # Run various analyses based on the query\n",
676
+ " result = f\"CSV file loaded with {len(df)} rows and {len(df.columns)} columns.\\n\"\n",
677
+ " result += f\"Columns: {', '.join(df.columns)}\\n\\n\"\n",
678
+ "\n",
679
+ " # Add summary statistics\n",
680
+ " result += \"Summary statistics:\\n\"\n",
681
+ " result += str(df.describe())\n",
682
+ "\n",
683
+ " return result\n",
684
+ "\n",
685
+ " except Exception as e:\n",
686
+ " return f\"Error analyzing CSV file: {str(e)}\"\n",
687
+ "\n",
688
+ "@tool\n",
689
+ "def analyze_excel_file(file_path: str, query: str) -> str:\n",
690
+ " \"\"\"\n",
691
+ " Analyze an Excel file using pandas and answer a question about it.\n",
692
+ " Args:\n",
693
+ " file_path (str): the path to the Excel file.\n",
694
+ " query (str): Question about the data\n",
695
+ " \"\"\"\n",
696
+ " try:\n",
697
+ " # Read the Excel file\n",
698
+ " df = pd.read_excel(file_path)\n",
699
+ "\n",
700
+ " # Run various analyses based on the query\n",
701
+ " result = (\n",
702
+ " f\"Excel file loaded with {len(df)} rows and {len(df.columns)} columns.\\n\"\n",
703
+ " )\n",
704
+ " result += f\"Columns: {', '.join(df.columns)}\\n\\n\"\n",
705
+ "\n",
706
+ " # Add summary statistics\n",
707
+ " result += \"Summary statistics:\\n\"\n",
708
+ " result += str(df.describe())\n",
709
+ "\n",
710
+ " return result\n",
711
+ "\n",
712
+ " except Exception as e:\n",
713
+ " return f\"Error analyzing Excel file: {str(e)}\"\n"
714
+ ]
715
+ },
716
+ {
717
+ "cell_type": "markdown",
718
+ "id": "2747e5da-61fb-4c0a-ae9e-4e09f6c490e0",
719
+ "metadata": {},
720
+ "source": [
721
+ "## IMAGE PROCESSING"
722
+ ]
723
+ },
724
+ {
725
+ "cell_type": "code",
726
+ "execution_count": null,
727
+ "id": "8304d8c5-2a28-4ba6-980d-86a14592eb60",
728
+ "metadata": {},
729
+ "outputs": [],
730
+ "source": [
731
+ "# image processing\n",
732
+ "import os\n",
733
+ "import io\n",
734
+ "import uuid\n",
735
+ "import base64\n",
736
+ "import numpy as np\n",
737
+ "from PIL import Image\n",
738
+ "from langchain_core.tools import tool\n",
739
+ "from typing import List, Dict, Any, Optional\n",
740
+ "from PIL import Image, ImageDraw, ImageFont, ImageEnhance, ImageFilter"
741
+ ]
742
+ },
743
+ {
744
+ "cell_type": "code",
745
+ "execution_count": null,
746
+ "id": "b9766e75-42b6-413c-96d4-ccb3380e8498",
747
+ "metadata": {},
748
+ "outputs": [],
749
+ "source": [
750
+ "# Helper functions for image processing\n",
751
+ "def encode_image(image_path: str) -> str:\n",
752
+ " \"\"\"Convert an image file to base64 string.\"\"\"\n",
753
+ " with open(image_path, \"rb\") as image_file:\n",
754
+ " return base64.b64encode(image_file.read()).decode(\"utf-8\")\n",
755
+ "\n",
756
+ "def decode_image(base64_string: str) -> Image.Image:\n",
757
+ " \"\"\"Convert a base64 string to a PIL Image.\"\"\"\n",
758
+ " image_data = base64.b64decode(base64_string)\n",
759
+ " return Image.open(io.BytesIO(image_data))\n",
760
+ "\n",
761
+ "def save_image(image: Image.Image, directory: str = \"image_outputs\") -> str:\n",
762
+ " \"\"\"Save a PIL Image to disk and return the path.\"\"\"\n",
763
+ " os.makedirs(directory, exist_ok=True)\n",
764
+ " image_id = str(uuid.uuid4())\n",
765
+ " image_path = os.path.join(directory, f\"{image_id}.png\")\n",
766
+ " image.save(image_path)\n",
767
+ " return image_path\n",
768
+ "\n",
769
+ "@tool\n",
770
+ "def analyze_image(image_base64: str) -> Dict[str, Any]:\n",
771
+ " \"\"\"\n",
772
+ " Analyze basic properties of an image (size, mode, color analysis, thumbnail preview).\n",
773
+ " Args:\n",
774
+ " image_base64 (str): Base64 encoded image string\n",
775
+ " Returns:\n",
776
+ " Dictionary with analysis result\n",
777
+ " \"\"\"\n",
778
+ " try:\n",
779
+ " img = decode_image(image_base64)\n",
780
+ " width, height = img.size\n",
781
+ " mode = img.mode\n",
782
+ "\n",
783
+ " if mode in (\"RGB\", \"RGBA\"):\n",
784
+ " arr = np.array(img)\n",
785
+ " avg_colors = arr.mean(axis=(0, 1))\n",
786
+ " dominant = [\"Red\", \"Green\", \"Blue\"][np.argmax(avg_colors[:3])]\n",
787
+ " brightness = avg_colors.mean()\n",
788
+ " color_analysis = {\n",
789
+ " \"average_rgb\": avg_colors.tolist(),\n",
790
+ " \"brightness\": brightness,\n",
791
+ " \"dominant_color\": dominant,\n",
792
+ " }\n",
793
+ " else:\n",
794
+ " color_analysis = {\"note\": f\"No color analysis for mode {mode}\"}\n",
795
+ "\n",
796
+ " thumbnail = img.copy()\n",
797
+ " thumbnail.thumbnail((100, 100))\n",
798
+ " thumb_path = save_image(thumbnail, \"thumbnails\")\n",
799
+ " thumbnail_base64 = encode_image(thumb_path)\n",
800
+ "\n",
801
+ " return {\n",
802
+ " \"dimensions\": (width, height),\n",
803
+ " \"mode\": mode,\n",
804
+ " \"color_analysis\": color_analysis,\n",
805
+ " \"thumbnail\": thumbnail_base64,\n",
806
+ " }\n",
807
+ " except Exception as e:\n",
808
+ " return {\"error\": str(e)}\n",
809
+ "\n",
810
+ "@tool\n",
811
+ "def transform_image(image_base64: str, operation: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
812
+ " \"\"\"\n",
813
+ " Apply transformations: resize, rotate, crop, flip, brightness, contrast, blur, sharpen, grayscale.\n",
814
+ " Args:\n",
815
+ " image_base64 (str): Base64 encoded input image\n",
816
+ " operation (str): Transformation operation\n",
817
+ " params (Dict[str, Any], optional): Parameters for the operation\n",
818
+ " Returns:\n",
819
+ " Dictionary with transformed image (base64)\n",
820
+ " \"\"\"\n",
821
+ " try:\n",
822
+ " img = decode_image(image_base64)\n",
823
+ " params = params or {}\n",
824
+ "\n",
825
+ " if operation == \"resize\":\n",
826
+ " img = img.resize(\n",
827
+ " (\n",
828
+ " params.get(\"width\", img.width // 2),\n",
829
+ " params.get(\"height\", img.height // 2),\n",
830
+ " )\n",
831
+ " )\n",
832
+ " elif operation == \"rotate\":\n",
833
+ " img = img.rotate(params.get(\"angle\", 90), expand=True)\n",
834
+ " elif operation == \"crop\":\n",
835
+ " img = img.crop(\n",
836
+ " (\n",
837
+ " params.get(\"left\", 0),\n",
838
+ " params.get(\"top\", 0),\n",
839
+ " params.get(\"right\", img.width),\n",
840
+ " params.get(\"bottom\", img.height),\n",
841
+ " )\n",
842
+ " )\n",
843
+ " elif operation == \"flip\":\n",
844
+ " if params.get(\"direction\", \"horizontal\") == \"horizontal\":\n",
845
+ " img = img.transpose(Image.FLIP_LEFT_RIGHT)\n",
846
+ " else:\n",
847
+ " img = img.transpose(Image.FLIP_TOP_BOTTOM)\n",
848
+ " elif operation == \"adjust_brightness\":\n",
849
+ " img = ImageEnhance.Brightness(img).enhance(params.get(\"factor\", 1.5))\n",
850
+ " elif operation == \"adjust_contrast\":\n",
851
+ " img = ImageEnhance.Contrast(img).enhance(params.get(\"factor\", 1.5))\n",
852
+ " elif operation == \"blur\":\n",
853
+ " img = img.filter(ImageFilter.GaussianBlur(params.get(\"radius\", 2)))\n",
854
+ " elif operation == \"sharpen\":\n",
855
+ " img = img.filter(ImageFilter.SHARPEN)\n",
856
+ " elif operation == \"grayscale\":\n",
857
+ " img = img.convert(\"L\")\n",
858
+ " else:\n",
859
+ " return {\"error\": f\"Unknown operation: {operation}\"}\n",
860
+ "\n",
861
+ " result_path = save_image(img)\n",
862
+ " result_base64 = encode_image(result_path)\n",
863
+ " return {\"transformed_image\": result_base64}\n",
864
+ "\n",
865
+ " except Exception as e:\n",
866
+ " return {\"error\": str(e)}\n",
867
+ "\n",
868
+ "@tool\n",
869
+ "def draw_on_image(image_base64: str, drawing_type: str, params: Dict[str, Any]) -> Dict[str, Any]:\n",
870
+ " \"\"\"\n",
871
+ " Draw shapes (rectangle, circle, line) or text onto an image.\n",
872
+ " Args:\n",
873
+ " image_base64 (str): Base64 encoded input image\n",
874
+ " drawing_type (str): Drawing type\n",
875
+ " params (Dict[str, Any]): Drawing parameters\n",
876
+ " Returns:\n",
877
+ " Dictionary with result image (base64)\n",
878
+ " \"\"\"\n",
879
+ " try:\n",
880
+ " img = decode_image(image_base64)\n",
881
+ " draw = ImageDraw.Draw(img)\n",
882
+ " color = params.get(\"color\", \"red\")\n",
883
+ "\n",
884
+ " if drawing_type == \"rectangle\":\n",
885
+ " draw.rectangle(\n",
886
+ " [params[\"left\"], params[\"top\"], params[\"right\"], params[\"bottom\"]],\n",
887
+ " outline=color,\n",
888
+ " width=params.get(\"width\", 2),\n",
889
+ " )\n",
890
+ " elif drawing_type == \"circle\":\n",
891
+ " x, y, r = params[\"x\"], params[\"y\"], params[\"radius\"]\n",
892
+ " draw.ellipse(\n",
893
+ " (x - r, y - r, x + r, y + r),\n",
894
+ " outline=color,\n",
895
+ " width=params.get(\"width\", 2),\n",
896
+ " )\n",
897
+ " elif drawing_type == \"line\":\n",
898
+ " draw.line(\n",
899
+ " (\n",
900
+ " params[\"start_x\"],\n",
901
+ " params[\"start_y\"],\n",
902
+ " params[\"end_x\"],\n",
903
+ " params[\"end_y\"],\n",
904
+ " ),\n",
905
+ " fill=color,\n",
906
+ " width=params.get(\"width\", 2),\n",
907
+ " )\n",
908
+ " elif drawing_type == \"text\":\n",
909
+ " font_size = params.get(\"font_size\", 20)\n",
910
+ " try:\n",
911
+ " font = ImageFont.truetype(\"arial.ttf\", font_size)\n",
912
+ " except IOError:\n",
913
+ " font = ImageFont.load_default()\n",
914
+ " draw.text(\n",
915
+ " (params[\"x\"], params[\"y\"]),\n",
916
+ " params.get(\"text\", \"Text\"),\n",
917
+ " fill=color,\n",
918
+ " font=font,\n",
919
+ " )\n",
920
+ " else:\n",
921
+ " return {\"error\": f\"Unknown drawing type: {drawing_type}\"}\n",
922
+ "\n",
923
+ " result_path = save_image(img)\n",
924
+ " result_base64 = encode_image(result_path)\n",
925
+ " return {\"result_image\": result_base64}\n",
926
+ "\n",
927
+ " except Exception as e:\n",
928
+ " return {\"error\": str(e)}\n",
929
+ "\n",
930
+ "@tool\n",
931
+ "def generate_simple_image(image_type: str, width: int = 500, height: int = 500, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
932
+ " \"\"\"\n",
933
+ " Generate a simple image (gradient, noise, pattern, chart).\n",
934
+ " Args:\n",
935
+ " image_type (str): Type of image\n",
936
+ " width (int), height (int)\n",
937
+ " params (Dict[str, Any], optional): Specific parameters\n",
938
+ " Returns:\n",
939
+ " Dictionary with generated image (base64)\n",
940
+ " \"\"\"\n",
941
+ " try:\n",
942
+ " params = params or {}\n",
943
+ "\n",
944
+ " if image_type == \"gradient\":\n",
945
+ " direction = params.get(\"direction\", \"horizontal\")\n",
946
+ " start_color = params.get(\"start_color\", (255, 0, 0))\n",
947
+ " end_color = params.get(\"end_color\", (0, 0, 255))\n",
948
+ "\n",
949
+ " img = Image.new(\"RGB\", (width, height))\n",
950
+ " draw = ImageDraw.Draw(img)\n",
951
+ "\n",
952
+ " if direction == \"horizontal\":\n",
953
+ " for x in range(width):\n",
954
+ " r = int(\n",
955
+ " start_color[0] + (end_color[0] - start_color[0]) * x / width\n",
956
+ " )\n",
957
+ " g = int(\n",
958
+ " start_color[1] + (end_color[1] - start_color[1]) * x / width\n",
959
+ " )\n",
960
+ " b = int(\n",
961
+ " start_color[2] + (end_color[2] - start_color[2]) * x / width\n",
962
+ " )\n",
963
+ " draw.line([(x, 0), (x, height)], fill=(r, g, b))\n",
964
+ " else:\n",
965
+ " for y in range(height):\n",
966
+ " r = int(\n",
967
+ " start_color[0] + (end_color[0] - start_color[0]) * y / height\n",
968
+ " )\n",
969
+ " g = int(\n",
970
+ " start_color[1] + (end_color[1] - start_color[1]) * y / height\n",
971
+ " )\n",
972
+ " b = int(\n",
973
+ " start_color[2] + (end_color[2] - start_color[2]) * y / height\n",
974
+ " )\n",
975
+ " draw.line([(0, y), (width, y)], fill=(r, g, b))\n",
976
+ "\n",
977
+ " elif image_type == \"noise\":\n",
978
+ " noise_array = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)\n",
979
+ " img = Image.fromarray(noise_array, \"RGB\")\n",
980
+ "\n",
981
+ " else:\n",
982
+ " return {\"error\": f\"Unsupported image_type {image_type}\"}\n",
983
+ "\n",
984
+ " result_path = save_image(img)\n",
985
+ " result_base64 = encode_image(result_path)\n",
986
+ " return {\"generated_image\": result_base64}\n",
987
+ "\n",
988
+ " except Exception as e:\n",
989
+ " return {\"error\": str(e)}\n",
990
+ "\n",
991
+ "@tool\n",
992
+ "def combine_images(images_base64: List[str], operation: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:\n",
993
+ " \"\"\"\n",
994
+ " Combine multiple images (collage, stack, blend).\n",
995
+ " Args:\n",
996
+ " images_base64 (List[str]): List of base64 images\n",
997
+ " operation (str): Combination type\n",
998
+ " params (Dict[str, Any], optional)\n",
999
+ " Returns:\n",
1000
+ " Dictionary with combined image (base64)\n",
1001
+ " \"\"\"\n",
1002
+ " try:\n",
1003
+ " images = [decode_image(b64) for b64 in images_base64]\n",
1004
+ " params = params or {}\n",
1005
+ "\n",
1006
+ " if operation == \"stack\":\n",
1007
+ " direction = params.get(\"direction\", \"horizontal\")\n",
1008
+ " if direction == \"horizontal\":\n",
1009
+ " total_width = sum(img.width for img in images)\n",
1010
+ " max_height = max(img.height for img in images)\n",
1011
+ " new_img = Image.new(\"RGB\", (total_width, max_height))\n",
1012
+ " x = 0\n",
1013
+ " for img in images:\n",
1014
+ " new_img.paste(img, (x, 0))\n",
1015
+ " x += img.width\n",
1016
+ " else:\n",
1017
+ " max_width = max(img.width for img in images)\n",
1018
+ " total_height = sum(img.height for img in images)\n",
1019
+ " new_img = Image.new(\"RGB\", (max_width, total_height))\n",
1020
+ " y = 0\n",
1021
+ " for img in images:\n",
1022
+ " new_img.paste(img, (0, y))\n",
1023
+ " y += img.height\n",
1024
+ " else:\n",
1025
+ " return {\"error\": f\"Unsupported combination operation {operation}\"}\n",
1026
+ "\n",
1027
+ " result_path = save_image(new_img)\n",
1028
+ " result_base64 = encode_image(result_path)\n",
1029
+ " return {\"combined_image\": result_base64}\n",
1030
+ "\n",
1031
+ " except Exception as e:\n",
1032
+ " return {\"error\": str(e)}\n"
1033
+ ]
1034
+ },
1035
+ {
1036
+ "cell_type": "markdown",
1037
+ "id": "cb966ca4-1ccf-4a14-8c7c-960b1d8e1c55",
1038
+ "metadata": {},
1039
+ "source": [
1040
+ "## AUDIO PROCESSING"
1041
+ ]
1042
+ },
1043
+ {
1044
+ "cell_type": "code",
1045
+ "execution_count": null,
1046
+ "id": "9b05ce05-a577-4473-bb05-0d58602f71c2",
1047
+ "metadata": {},
1048
+ "outputs": [],
1049
+ "source": []
1050
+ },
1051
+ {
1052
+ "cell_type": "markdown",
1053
+ "id": "57a4fcb2-59ae-44d6-9d6a-ea1ab5acae0f",
1054
+ "metadata": {},
1055
+ "source": [
1056
+ "# AGENT"
1057
+ ]
1058
+ },
1059
+ {
1060
+ "cell_type": "markdown",
1061
+ "id": "32b57eeb-4260-43bd-9898-2edbe3be1281",
1062
+ "metadata": {},
1063
+ "source": [
1064
+ "The Agent is designed using LangGraph which is a production-ready framework deveoped by LangChain. The control flow of the agent is designed using a directed graph structure to move a state object from node to node through decision edges. It simplifies the design of even complex Agent application by relying on simple components that all work together."
1065
+ ]
1066
+ },
1067
+ {
1068
+ "cell_type": "markdown",
1069
+ "id": "e6b7a8d7-e174-42a1-8bfb-9407bfd3c518",
1070
+ "metadata": {},
1071
+ "source": [
1072
+ "## RETRIEVER"
1073
+ ]
1074
+ },
1075
+ {
1076
+ "cell_type": "code",
1077
+ "execution_count": null,
1078
+ "id": "7e391c4c-d019-4baf-bf53-fe24244cac0c",
1079
+ "metadata": {},
1080
+ "outputs": [],
1081
+ "source": [
1082
+ "# build a retriever\n",
1083
+ "embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\") # set the model to generate embeddings; dim=768\n",
1084
+ "supabase, Client = create_client(os.environ.get(\"SUPABASE_URL\"), os.environ.get(\"SUPABASE_SERVICE_KEY\"))\n",
1085
+ "vector_store = SupabaseVectorStore(client=supabase, embedding= embeddings, table_name=\"documents\", query_name=\"match_documents_langchain\")\n",
1086
+ "create_retriever_tool = create_retriever_tool(retriever=vector_store.as_retriever(), name=\"Question Retriever\", description=\"Retrieve similar questions from a vector store.\")"
1087
+ ]
1088
+ },
1089
+ {
1090
+ "cell_type": "markdown",
1091
+ "id": "26dd5168-8602-4a70-ae77-6ed834a762f9",
1092
+ "metadata": {},
1093
+ "source": [
1094
+ "## PROMPTS"
1095
+ ]
1096
+ },
1097
+ {
1098
+ "cell_type": "code",
1099
+ "execution_count": null,
1100
+ "id": "b6682591-6e1c-4c99-b0ae-b8eb0db470d1",
1101
+ "metadata": {},
1102
+ "outputs": [],
1103
+ "source": [
1104
+ "# load the system prompt from the file\n",
1105
+ "with open(\"../prompts/system_prompt.txt\", \"r\", encoding=\"utf-8\") as f:\n",
1106
+ " system_prompt = f.read()\n",
1107
+ "print(f'SYSTEM PROMPT:\\n{system_prompt}')\n",
1108
+ "\n",
1109
+ "# System message\n",
1110
+ "sys_msg = SystemMessage(content=system_prompt)"
1111
+ ]
1112
+ },
1113
+ {
1114
+ "cell_type": "markdown",
1115
+ "id": "a8f38f2a-8c90-4e2f-b59e-f49da81ed3c6",
1116
+ "metadata": {},
1117
+ "source": [
1118
+ "## TOOLS"
1119
+ ]
1120
+ },
1121
+ {
1122
+ "cell_type": "code",
1123
+ "execution_count": null,
1124
+ "id": "f60060ca-6130-4c15-a298-e289de9f6b6d",
1125
+ "metadata": {},
1126
+ "outputs": [],
1127
+ "source": [
1128
+ "# list all agent tools\n",
1129
+ "tools = [web_search, wiki_search, similar_question_search, arxiv_search, multiply, add, subtract, divide, modulus, power, square_root, count_substring, save_and_read_file, download_file_from_url, extract_text_from_image, analyze_csv_file, analyze_excel_file, execute_code_multilang, analyze_image, transform_image, draw_on_image, generate_simple_image, combine_images]"
1130
+ ]
1131
+ },
1132
+ {
1133
+ "cell_type": "code",
1134
+ "execution_count": null,
1135
+ "id": "46cf257e-7cd1-4bb4-8b86-6f9a4f4f74f8",
1136
+ "metadata": {},
1137
+ "outputs": [],
1138
+ "source": [
1139
+ "# Build the agent graph\n",
1140
+ "def build_graph(provider: str = \"huggingface-qwen\"):\n",
1141
+ " \"\"\"Build the LangGraph Agent\"\"\"\n",
1142
+ " # Load environment variables from .env file\n",
1143
+ " if provider == \"google\": # Google Gemini\n",
1144
+ " llm = ChatGoogleGenerativeAI(model=\"gemini-2.0-flash\", temperature=0)\n",
1145
+ " elif provider == \"groq\": # Groq https://console.groq.com/docs/models\n",
1146
+ " llm = ChatGroq(model=\"qwen-qwq-32b\", temperature=0) # optional : qwen-qwq-32b gemma2-9b-it\n",
1147
+ " elif provider == \"huggingface-qwen\":\n",
1148
+ " llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id = \"Qwen/Qwen2.5-Coder-32B-Instruct\"))\n",
1149
+ " elif provider == \"huggingface-llama\":\n",
1150
+ " llm = ChatHuggingFace(llm=HuggingFaceEndpoint(repo_id=\"TinyLlama/TinyLlama-1.1B-Chat-v1.0\", task=\"text-generation\", max_new_tokens=1024, do_sample=False, repetition_penalty=1.03, temperature=0), verbose=True)\n",
1151
+ " else:\n",
1152
+ " raise ValueError(\"Invalid provider. Choose 'google', 'groq', 'huggingface-qwen' or 'huggingface-llama'.\")\n",
1153
+ " \n",
1154
+ " llm_with_tools = llm.bind_tools(tools) # Bind tools to LLM\n",
1155
+ "\n",
1156
+ " # Node\n",
1157
+ " def assistant(state: MessagesState):\n",
1158
+ " \"\"\"Assistant node\"\"\"\n",
1159
+ " return {\"messages\": [llm_with_tools.invoke(state[\"messages\"])]}\n",
1160
+ " \n",
1161
+ " def retriever(state: MessagesState):\n",
1162
+ " \"\"\"Retriever node\"\"\"\n",
1163
+ " similar_question = vector_store.similarity_search(state[\"messages\"][0].content)\n",
1164
+ " example_msg = HumanMessage(content=f\"Here I provide a similar question and answer for reference: \\n\\n{similar_question[0].page_content}\")\n",
1165
+ " return {\"messages\": [sys_msg] + state[\"messages\"] + [example_msg]}\n",
1166
+ "\n",
1167
+ " # create nodes - decision points\n",
1168
+ " builder = StateGraph(MessagesState)\n",
1169
+ " builder.add_node(\"retriever\", retriever) \n",
1170
+ " builder.add_node(\"assistant\", assistant)\n",
1171
+ " builder.add_node(\"tools\", ToolNode(tools)) # equip the agents with the list of tools\n",
1172
+ "\n",
1173
+ " # connect nodes - control flow\n",
1174
+ " builder.add_edge(START, \"retriever\")\n",
1175
+ " builder.add_edge(\"retriever\", \"assistant\")\n",
1176
+ " builder.add_conditional_edges(\"assistant\", tools_condition)\n",
1177
+ " builder.add_edge(\"tools\", \"assistant\")\n",
1178
+ "\n",
1179
+ " # Compile graph\n",
1180
+ " return builder.compile()"
1181
+ ]
1182
+ },
1183
+ {
1184
+ "cell_type": "markdown",
1185
+ "id": "5d5a4d4d-a497-4fa9-acb1-89b6967964ea",
1186
+ "metadata": {},
1187
+ "source": [
1188
+ "# APP INTERGRATION"
1189
+ ]
1190
+ },
1191
+ {
1192
+ "cell_type": "markdown",
1193
+ "id": "c1b877bf-5e62-4a01-9e71-17915de09dfd",
1194
+ "metadata": {},
1195
+ "source": [
1196
+ "To integrate the Agent solution into the submission API, solutions to covered GAIA questions will be generated prior to submission and stored in a database.\n",
1197
+ "The Agent will then have to retrieve the answer to the actual questions thrown at it during the assessment from its solution bank.\n",
1198
+ "All tools and related artefacts for the Agent will also be made public in the project folder to meet credibility requirements for the course assessment.\n",
1199
+ "\n",
1200
+ "Integration Changes:\n",
1201
+ " - include scripts to house the tools in a dedicated folder within the project\n",
1202
+ " - include scripts defining the agent in a dedicated folder within the project\n",
1203
+ " - include a text file with the system prompt guiding the agent in a dedicated folder\n",
1204
+ " - ensure the agent and tools directory are recognized as packages\n",
1205
+ " - modify the app.py script to load the updated agent class\n",
1206
+ " - update the readme file\n",
1207
+ " - update the requirements file\n",
1208
+ " - include the jupyter notebook"
1209
+ ]
1210
+ },
1211
+ {
1212
+ "cell_type": "code",
1213
+ "execution_count": null,
1214
+ "id": "f4dcdae5-a0cf-4b53-b3da-2511651ecf2b",
1215
+ "metadata": {},
1216
+ "outputs": [],
1217
+ "source": [
1218
+ "# testing\n",
1219
+ "if __name__ == \"__main__\":\n",
1220
+ " question = \"When was a picture of St. Thomas Aquinas first added to the Wikipedia page on the Principle of double effect?\"\n",
1221
+ " graph = build_graph(provider=\"huggingface-llama\")\n",
1222
+ " messages = [HumanMessage(content=question)]\n",
1223
+ " messages = graph.invoke({\"messages\": messages})\n",
1224
+ " for m in messages[\"messages\"]:\n",
1225
+ " m.pretty_print()"
1226
+ ]
1227
+ },
1228
+ {
1229
+ "cell_type": "code",
1230
+ "execution_count": null,
1231
+ "id": "6141142d-3502-4db3-a578-a06586a025af",
1232
+ "metadata": {},
1233
+ "outputs": [],
1234
+ "source": [
1235
+ "class GAIAAgent:\n",
1236
+ " \"\"\"A langgraph agent for attempting the GAIA benchmark.\"\"\"\n",
1237
+ " def __init__(self):\n",
1238
+ " print(\"Agent initialized.\")\n",
1239
+ " self.graph = build_graph() # instantiate the Agent\n",
1240
+ "\n",
1241
+ " def __call__(self, question: str) -> str:\n",
1242
+ " print(f\"Agent received question (first 50 chars): {question[:50]}...\")\n",
1243
+ " messages = [HumanMessage(content=question)]\n",
1244
+ " result = self.graph.invoke({\"messages\": messages})\n",
1245
+ " answer = result['messages'][-1].content # retrieve solution similar to the current question from prepared dump\n",
1246
+ " return answer # submit"
1247
+ ]
1248
+ }
1249
+ ],
1250
+ "metadata": {
1251
+ "kernelspec": {
1252
+ "display_name": "Python 3 (ipykernel)",
1253
+ "language": "python",
1254
+ "name": "python3"
1255
+ },
1256
+ "language_info": {
1257
+ "codemirror_mode": {
1258
+ "name": "ipython",
1259
+ "version": 3
1260
+ },
1261
+ "file_extension": ".py",
1262
+ "mimetype": "text/x-python",
1263
+ "name": "python",
1264
+ "nbconvert_exporter": "python",
1265
+ "pygments_lexer": "ipython3",
1266
+ "version": "3.12.7"
1267
+ },
1268
+ "widgets": {
1269
+ "application/vnd.jupyter.widget-state+json": {
1270
+ "state": {},
1271
+ "version_major": 2,
1272
+ "version_minor": 0
1273
+ }
1274
+ }
1275
+ },
1276
+ "nbformat": 4,
1277
+ "nbformat_minor": 5
1278
+ }
system_prompt.txt β†’ prompts/system_prompt.txt RENAMED
@@ -1,5 +1,5 @@
1
  You are a helpful assistant tasked with answering questions using a set of tools.
2
  Now, I will ask you a question. Report your thoughts, and finish your answer with the following template:
3
  FINAL ANSWER: [YOUR FINAL ANSWER].
4
- YOUR FINAL ANSWER should be a number OR as few words as possible OR a comma separated list of numbers and/or strings. If you are asked for a number, don't use comma to write your number neither use units such as $ or percent sign unless specified otherwise. If you are asked for a string, don't use articles, neither abbreviations (e.g. for cities), and write the digits in plain text unless specified otherwise. If you are asked for a comma separated list, Apply the rules above for each element (number or string), ensure there is exactly one space after each comma.
5
  Your answer should only start with "FINAL ANSWER: ", then follows with the answer.
 
1
  You are a helpful assistant tasked with answering questions using a set of tools.
2
  Now, I will ask you a question. Report your thoughts, and finish your answer with the following template:
3
  FINAL ANSWER: [YOUR FINAL ANSWER].
4
+ YOUR FINAL ANSWER should be a number OR as few words as possible OR a comma separated list of numbers and/or strings. If you are asked for a number, don't use comma to write your number neither use units such as $ or percent sign unless specified otherwise. If you are asked for a string, don't use articles, neither abbreviations (e.g. for cities), and write the digits in plain text unless specified otherwise. If you are asked for a comma separated list, apply the above rules depending of whether the element to be put in the list is a number or a string.
5
  Your answer should only start with "FINAL ANSWER: ", then follows with the answer.
requirements.txt CHANGED
@@ -16,6 +16,6 @@ pymupdf
16
  wikipedia
17
  pgvector
18
  python-dotenv
 
19
  pytesseract
20
- matplotlib
21
- sentence-transformers
 
16
  wikipedia
17
  pgvector
18
  python-dotenv
19
+ sentence-transformers
20
  pytesseract
21
+ matplotlib
 
supabase_docs.csv CHANGED
The diff for this file is too large to render. See raw diff
 
tools/__init__.py ADDED
File without changes
tools/basic_calculator.py ADDED
@@ -0,0 +1,85 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from langchain_core.tools import tool
2
+
3
+
4
+ @tool
5
+ def multiply(a: float, b: float) -> float:
6
+ """
7
+ Multiplies two numbers.
8
+ Args:
9
+ a (float): the first number
10
+ b (float): the second number
11
+ """
12
+ return a * b
13
+
14
+ @tool
15
+ def add(a: float, b: float) -> float:
16
+ """
17
+ Adds two numbers.
18
+ Args:
19
+ a (float): the first number
20
+ b (float): the second number
21
+ """
22
+ return a + b
23
+
24
+ @tool
25
+ def subtract(a: float, b: float) -> int:
26
+ """
27
+ Subtracts two numbers.
28
+ Args:
29
+ a (float): the first number
30
+ b (float): the second number
31
+ """
32
+ return a - b
33
+
34
+ @tool
35
+ def divide(a: float, b: float) -> float:
36
+ """
37
+ Divides two numbers.
38
+ Args:
39
+ a (float): the first float number
40
+ b (float): the second float number
41
+ """
42
+ if b == 0:
43
+ raise ValueError("Cannot divided by zero.")
44
+ return a / b
45
+
46
+ @tool
47
+ def modulus(a: int, b: int) -> int:
48
+ """
49
+ Get the modulus of two numbers.
50
+ Args:
51
+ a (int): the first number
52
+ b (int): the second number
53
+ """
54
+ return a % b
55
+
56
+ @tool
57
+ def power(a: float, b: float) -> float:
58
+ """
59
+ Get the power of two numbers.
60
+ Args:
61
+ a (float): the first number
62
+ b (float): the second number
63
+ """
64
+ return a**b
65
+
66
+ @tool
67
+ def square_root(a: float) -> float | complex:
68
+ """
69
+ Get the square root of a number.
70
+ Args:
71
+ a (float): the number to get the square root of
72
+ """
73
+ if a >= 0:
74
+ return a**0.5
75
+ return cmath.sqrt(a)
76
+
77
+ @tool
78
+ def count_substring(substring:str, text:str) -> int:
79
+ """
80
+ Get the number of occurences of a substring within some text. Useful for 'How many (substring) are in (text)?'
81
+ Args:
82
+ substring (str): the substring to check for.
83
+ text (str): the text to search through.
84
+ """
85
+ return text.count(substring)
code_interpreter.py β†’ tools/code_interpreter.py RENAMED
@@ -13,29 +13,21 @@ import numpy as np
13
  import pandas as pd
14
  import matplotlib.pyplot as plt
15
  from PIL import Image
 
16
 
17
  class CodeInterpreter:
18
  def __init__(self, allowed_modules=None, max_execution_time=30, working_directory=None):
19
  """Initialize the code interpreter with safety measures."""
20
- self.allowed_modules = allowed_modules or [
21
- "numpy", "pandas", "matplotlib", "scipy", "sklearn",
22
- "math", "random", "statistics", "datetime", "collections",
23
- "itertools", "functools", "operator", "re", "json",
24
- "sympy", "networkx", "nltk", "PIL", "pytesseract",
25
- "cmath", "uuid", "tempfile", "requests", "urllib"
26
- ]
27
  self.max_execution_time = max_execution_time
28
  self.working_directory = working_directory or os.path.join(os.getcwd())
29
  if not os.path.exists(self.working_directory):
30
  os.makedirs(self.working_directory)
31
 
32
- self.globals = {
33
- "__builtins__": __builtins__,
34
- "np": np,
35
- "pd": pd,
36
- "plt": plt,
37
- "Image": Image,
38
- }
39
  self.temp_sqlite_db = os.path.join(tempfile.gettempdir(), "code_exec.db")
40
 
41
  def execute_code(self, code: str, language: str = "python") -> Dict[str, Any]:
@@ -43,15 +35,7 @@ class CodeInterpreter:
43
  language = language.lower()
44
  execution_id = str(uuid.uuid4())
45
 
46
- result = {
47
- "execution_id": execution_id,
48
- "status": "error",
49
- "stdout": "",
50
- "stderr": "",
51
- "result": None,
52
- "plots": [],
53
- "dataframes": []
54
- }
55
 
56
  try:
57
  if language == "python":
@@ -74,15 +58,7 @@ class CodeInterpreter:
74
  def _execute_python(self, code: str, execution_id: str) -> dict:
75
  output_buffer = io.StringIO()
76
  error_buffer = io.StringIO()
77
- result = {
78
- "execution_id": execution_id,
79
- "status": "error",
80
- "stdout": "",
81
- "stderr": "",
82
- "result": None,
83
- "plots": [],
84
- "dataframes": []
85
- }
86
 
87
  try:
88
  exec_dir = os.path.join(self.working_directory, execution_id)
@@ -99,19 +75,11 @@ class CodeInterpreter:
99
  fig.savefig(img_path)
100
  with open(img_path, "rb") as img_file:
101
  img_data = base64.b64encode(img_file.read()).decode('utf-8')
102
- result["plots"].append({
103
- "figure_number": fig_num,
104
- "data": img_data
105
- })
106
 
107
  for var_name, var_value in self.globals.items():
108
  if isinstance(var_value, pd.DataFrame) and len(var_value) > 0:
109
- result["dataframes"].append({
110
- "name": var_name,
111
- "head": var_value.head().to_dict(),
112
- "shape": var_value.shape,
113
- "dtypes": str(var_value.dtypes)
114
- })
115
 
116
  result["status"] = "success"
117
  result["stdout"] = output_buffer.getvalue()
@@ -125,39 +93,13 @@ class CodeInterpreter:
125
 
126
  def _execute_bash(self, code: str, execution_id: str) -> dict:
127
  try:
128
- completed = subprocess.run(
129
- code, shell=True, capture_output=True, text=True, timeout=self.max_execution_time
130
- )
131
- return {
132
- "execution_id": execution_id,
133
- "status": "success" if completed.returncode == 0 else "error",
134
- "stdout": completed.stdout,
135
- "stderr": completed.stderr,
136
- "result": None,
137
- "plots": [],
138
- "dataframes": []
139
- }
140
  except subprocess.TimeoutExpired:
141
- return {
142
- "execution_id": execution_id,
143
- "status": "error",
144
- "stdout": "",
145
- "stderr": "Execution timed out.",
146
- "result": None,
147
- "plots": [],
148
- "dataframes": []
149
- }
150
 
151
  def _execute_sql(self, code: str, execution_id: str) -> dict:
152
- result = {
153
- "execution_id": execution_id,
154
- "status": "error",
155
- "stdout": "",
156
- "stderr": "",
157
- "result": None,
158
- "plots": [],
159
- "dataframes": []
160
- }
161
  try:
162
  conn = sqlite3.connect(self.temp_sqlite_db)
163
  cur = conn.cursor()
@@ -166,15 +108,9 @@ class CodeInterpreter:
166
  columns = [description[0] for description in cur.description]
167
  rows = cur.fetchall()
168
  df = pd.DataFrame(rows, columns=columns)
169
- result["dataframes"].append({
170
- "name": "query_result",
171
- "head": df.head().to_dict(),
172
- "shape": df.shape,
173
- "dtypes": str(df.dtypes)
174
- })
175
  else:
176
  conn.commit()
177
-
178
  result["status"] = "success"
179
  result["stdout"] = "Query executed successfully."
180
 
@@ -194,44 +130,13 @@ class CodeInterpreter:
194
  with open(source_path, "w") as f:
195
  f.write(code)
196
 
197
- compile_proc = subprocess.run(
198
- ["gcc", source_path, "-o", binary_path],
199
- capture_output=True, text=True, timeout=self.max_execution_time
200
- )
201
  if compile_proc.returncode != 0:
202
- return {
203
- "execution_id": execution_id,
204
- "status": "error",
205
- "stdout": compile_proc.stdout,
206
- "stderr": compile_proc.stderr,
207
- "result": None,
208
- "plots": [],
209
- "dataframes": []
210
- }
211
 
212
- run_proc = subprocess.run(
213
- [binary_path],
214
- capture_output=True, text=True, timeout=self.max_execution_time
215
- )
216
- return {
217
- "execution_id": execution_id,
218
- "status": "success" if run_proc.returncode == 0 else "error",
219
- "stdout": run_proc.stdout,
220
- "stderr": run_proc.stderr,
221
- "result": None,
222
- "plots": [],
223
- "dataframes": []
224
- }
225
- except Exception as e:
226
- return {
227
- "execution_id": execution_id,
228
- "status": "error",
229
- "stdout": "",
230
- "stderr": str(e),
231
- "result": None,
232
- "plots": [],
233
- "dataframes": []
234
- }
235
 
236
  def _execute_java(self, code: str, execution_id: str) -> dict:
237
  temp_dir = tempfile.mkdtemp()
@@ -241,41 +146,64 @@ class CodeInterpreter:
241
  with open(source_path, "w") as f:
242
  f.write(code)
243
 
244
- compile_proc = subprocess.run(
245
- ["javac", source_path],
246
- capture_output=True, text=True, timeout=self.max_execution_time
247
- )
248
  if compile_proc.returncode != 0:
249
- return {
250
- "execution_id": execution_id,
251
- "status": "error",
252
- "stdout": compile_proc.stdout,
253
- "stderr": compile_proc.stderr,
254
- "result": None,
255
- "plots": [],
256
- "dataframes": []
257
- }
258
 
259
- run_proc = subprocess.run(
260
- ["java", "-cp", temp_dir, "Main"],
261
- capture_output=True, text=True, timeout=self.max_execution_time
262
- )
263
- return {
264
- "execution_id": execution_id,
265
- "status": "success" if run_proc.returncode == 0 else "error",
266
- "stdout": run_proc.stdout,
267
- "stderr": run_proc.stderr,
268
- "result": None,
269
- "plots": [],
270
- "dataframes": []
271
- }
272
  except Exception as e:
273
- return {
274
- "execution_id": execution_id,
275
- "status": "error",
276
- "stdout": "",
277
- "stderr": str(e),
278
- "result": None,
279
- "plots": [],
280
- "dataframes": []
281
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
13
  import pandas as pd
14
  import matplotlib.pyplot as plt
15
  from PIL import Image
16
+ from langchain_core.tools import tool
17
 
18
  class CodeInterpreter:
19
  def __init__(self, allowed_modules=None, max_execution_time=30, working_directory=None):
20
  """Initialize the code interpreter with safety measures."""
21
+
22
+ self.allowed_modules = allowed_modules or ["numpy", "pandas", "matplotlib", "scipy", "sklearn", "math", "random", "statistics", "datetime", "collections",
23
+ "itertools", "functools", "operator", "re", "json", "sympy", "networkx", "nltk", "PIL", "pytesseract", "cmath", "uuid", "tempfile", "requests", "urllib"]
24
+
 
 
 
25
  self.max_execution_time = max_execution_time
26
  self.working_directory = working_directory or os.path.join(os.getcwd())
27
  if not os.path.exists(self.working_directory):
28
  os.makedirs(self.working_directory)
29
 
30
+ self.globals = {"__builtins__": __builtins__, "np": np, "pd": pd, "plt": plt, "Image": Image}
 
 
 
 
 
 
31
  self.temp_sqlite_db = os.path.join(tempfile.gettempdir(), "code_exec.db")
32
 
33
  def execute_code(self, code: str, language: str = "python") -> Dict[str, Any]:
 
35
  language = language.lower()
36
  execution_id = str(uuid.uuid4())
37
 
38
+ result = {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": "", "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
39
 
40
  try:
41
  if language == "python":
 
58
  def _execute_python(self, code: str, execution_id: str) -> dict:
59
  output_buffer = io.StringIO()
60
  error_buffer = io.StringIO()
61
+ result = {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": "", "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
62
 
63
  try:
64
  exec_dir = os.path.join(self.working_directory, execution_id)
 
75
  fig.savefig(img_path)
76
  with open(img_path, "rb") as img_file:
77
  img_data = base64.b64encode(img_file.read()).decode('utf-8')
78
+ result["plots"].append({"figure_number": fig_num, "data": img_data})
 
 
 
79
 
80
  for var_name, var_value in self.globals.items():
81
  if isinstance(var_value, pd.DataFrame) and len(var_value) > 0:
82
+ result["dataframes"].append({"name": var_name, "head": var_value.head().to_dict(), "shape": var_value.shape, "dtypes": str(var_value.dtypes)})
 
 
 
 
 
83
 
84
  result["status"] = "success"
85
  result["stdout"] = output_buffer.getvalue()
 
93
 
94
  def _execute_bash(self, code: str, execution_id: str) -> dict:
95
  try:
96
+ completed = subprocess.run(code, shell=True, capture_output=True, text=True, timeout=self.max_execution_time)
97
+ return {"execution_id": execution_id, "status": "success" if completed.returncode == 0 else "error", "stdout": completed.stdout, "stderr": completed.stderr, "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
 
 
98
  except subprocess.TimeoutExpired:
99
+ return {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": "Execution timed out.", "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
100
 
101
  def _execute_sql(self, code: str, execution_id: str) -> dict:
102
+ result = {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": "", "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
103
  try:
104
  conn = sqlite3.connect(self.temp_sqlite_db)
105
  cur = conn.cursor()
 
108
  columns = [description[0] for description in cur.description]
109
  rows = cur.fetchall()
110
  df = pd.DataFrame(rows, columns=columns)
111
+ result["dataframes"].append({"name": "query_result", "head": df.head().to_dict(), "shape": df.shape, "dtypes": str(df.dtypes)})
 
 
 
 
 
112
  else:
113
  conn.commit()
 
114
  result["status"] = "success"
115
  result["stdout"] = "Query executed successfully."
116
 
 
130
  with open(source_path, "w") as f:
131
  f.write(code)
132
 
133
+ compile_proc = subprocess.run(["gcc", source_path, "-o", binary_path], capture_output=True, text=True, timeout=self.max_execution_time)
 
 
 
134
  if compile_proc.returncode != 0:
135
+ return {"execution_id": execution_id, "status": "error", "stdout": compile_proc.stdout, "stderr": compile_proc.stderr, "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
136
 
137
+ run_proc = subprocess.run([binary_path], capture_output=True, text=True, timeout=self.max_execution_time)
138
+ return {"execution_id": execution_id, "status": "success" if run_proc.returncode == 0 else "error", "stdout": run_proc.stdout, "stderr": run_proc.stderr, "result": None, "plots": [], "dataframes": []}
139
+ except Exception as e: return {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": str(e), "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
140
 
141
  def _execute_java(self, code: str, execution_id: str) -> dict:
142
  temp_dir = tempfile.mkdtemp()
 
146
  with open(source_path, "w") as f:
147
  f.write(code)
148
 
149
+ compile_proc = subprocess.run(["javac", source_path], capture_output=True, text=True, timeout=self.max_execution_time)
 
 
 
150
  if compile_proc.returncode != 0:
151
+ return {"execution_id": execution_id, "status": "error", "stdout": compile_proc.stdout, "stderr": compile_proc.stderr, "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
152
 
153
+ run_proc = subprocess.run(["java", "-cp", temp_dir, "Main"], capture_output=True, text=True, timeout=self.max_execution_time)
154
+ return {"execution_id": execution_id, "status": "success" if run_proc.returncode == 0 else "error", "stdout": run_proc.stdout, "stderr": run_proc.stderr, "result": None, "plots": [], "dataframes": []}
 
 
 
 
 
 
 
 
 
 
 
155
  except Exception as e:
156
+ return {"execution_id": execution_id, "status": "error", "stdout": "", "stderr": str(e), "result": None, "plots": [], "dataframes": []}
157
+
158
+ interpreter_instance = CodeInterpreter()
159
+
160
+ @tool
161
+ def execute_code_multilang(code: str, language: str = "python") -> str:
162
+ """Execute code in multiple languages (Python, Bash, SQL, C, Java) and return results.
163
+ Args:
164
+ code (str): The source code to execute.
165
+ language (str): The language of the code. Supported: "python", "bash", "sql", "c", "java".
166
+ Returns:
167
+ A string summarizing the execution results (stdout, stderr, errors, plots, dataframes if any).
168
+ """
169
+ supported_languages = ["python", "bash", "sql", "c", "java"]
170
+ language = language.lower()
171
+
172
+ if language not in supported_languages:
173
+ return f"❌ Unsupported language: {language}. Supported languages are: {', '.join(supported_languages)}"
174
+
175
+ result = interpreter_instance.execute_code(code, language=language)
176
+
177
+ response = []
178
+
179
+ if result["status"] == "success":
180
+ response.append(f"βœ… Code executed successfully in **{language.upper()}**")
181
+
182
+ if result.get("stdout"):
183
+ response.append("\n**Standard Output:**\n```\n" + result["stdout"].strip() + "\n```")
184
+
185
+ if result.get("stderr"):
186
+ response.append(
187
+ "\n**Standard Error (if any):**\n```\n"
188
+ + result["stderr"].strip() + "\n```")
189
+
190
+ if result.get("result") is not None:
191
+ response.append(
192
+ "\n**Execution Result:**\n```\n"
193
+ + str(result["result"]).strip() + "\n```")
194
+
195
+ if result.get("dataframes"):
196
+ for df_info in result["dataframes"]:
197
+ response.append(f"\n**DataFrame `{df_info['name']}` (Shape: {df_info['shape']})**")
198
+ df_preview = pd.DataFrame(df_info["head"])
199
+ response.append("First 5 rows:\n```\n" + str(df_preview) + "\n```")
200
+
201
+ if result.get("plots"):
202
+ response.append(f"\n**Generated {len(result['plots'])} plot(s)** (Image data returned separately)")
203
+
204
+ else:
205
+ response.append(f"❌ Code execution failed in **{language.upper()}**")
206
+ if result.get("stderr"):
207
+ response.append("\n**Error Log:**\n```\n" + result["stderr"].strip() + "\n```")
208
+
209
+ return "\n".join(response)
tools/document_processing.py ADDED
@@ -0,0 +1,133 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+ import uuid
3
+ import requests
4
+ import tempfile
5
+ from PIL import Image
6
+ import pytesseract
7
+ import pandas as pd
8
+ from urllib.parse import urlparse
9
+ from langchain_core.tools import tool
10
+ from typing import List, Dict, Any, Optional
11
+
12
+ @tool
13
+ def save_and_read_file(content: str, filename: Optional[str] = None) -> str:
14
+ """
15
+ Save content to a file and return the path.
16
+ Args:
17
+ content (str): the content to save to the file
18
+ filename (str, optional): the name of the file. If not provided, a random name file will be created.
19
+ """
20
+ temp_dir = tempfile.gettempdir()
21
+ if filename is None:
22
+ temp_file = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
23
+ filepath = temp_file.name
24
+ else:
25
+ filepath = os.path.join(temp_dir, filename)
26
+
27
+ with open(filepath, "w") as f:
28
+ f.write(content)
29
+
30
+ return f"File saved to {filepath}. You can read this file to process its contents."
31
+
32
+ @tool
33
+ def download_file_from_url(url: str, filename: Optional[str] = None) -> str:
34
+ """
35
+ Download a file from a URL and save it to a temporary location.
36
+ Args:
37
+ url (str): the URL of the file to download.
38
+ filename (str, optional): the name of the file. If not provided, a random name file will be created.
39
+ """
40
+ try:
41
+ # Parse URL to get filename if not provided
42
+ if not filename:
43
+ path = urlparse(url).path
44
+ filename = os.path.basename(path)
45
+ if not filename:
46
+ filename = f"downloaded_{uuid.uuid4().hex[:8]}"
47
+
48
+ # Create temporary file
49
+ temp_dir = tempfile.gettempdir()
50
+ filepath = os.path.join(temp_dir, filename)
51
+
52
+ # Download the file
53
+ response = requests.get(url, stream=True)
54
+ response.raise_for_status()
55
+
56
+ # Save the file
57
+ with open(filepath, "wb") as f:
58
+ for chunk in response.iter_content(chunk_size=8192):
59
+ f.write(chunk)
60
+
61
+ return f"File downloaded to {filepath}. You can read this file to process its contents."
62
+ except Exception as e:
63
+ return f"Error downloading file: {str(e)}"
64
+
65
+ @tool
66
+ def extract_text_from_image(image_path: str) -> str:
67
+ """
68
+ Extract text from an image using OCR library pytesseract (if available).
69
+ Args:
70
+ image_path (str): the path to the image file.
71
+ """
72
+ try:
73
+ # Open the image
74
+ image = Image.open(image_path)
75
+
76
+ # Extract text from the image
77
+ text = pytesseract.image_to_string(image)
78
+
79
+ return f"Extracted text from image:\n\n{text}"
80
+ except Exception as e:
81
+ return f"Error extracting text from image: {str(e)}"
82
+
83
+ @tool
84
+ def analyze_csv_file(file_path: str, query: str) -> str:
85
+ """
86
+ Analyze a CSV file using pandas and answer a question about it.
87
+ Args:
88
+ file_path (str): the path to the CSV file.
89
+ query (str): Question about the data
90
+ """
91
+ try:
92
+ # Read the CSV file
93
+ df = pd.read_csv(file_path)
94
+
95
+ # Run various analyses based on the query
96
+ result = f"CSV file loaded with {len(df)} rows and {len(df.columns)} columns.\n"
97
+ result += f"Columns: {', '.join(df.columns)}\n\n"
98
+
99
+ # Add summary statistics
100
+ result += "Summary statistics:\n"
101
+ result += str(df.describe())
102
+
103
+ return result
104
+
105
+ except Exception as e:
106
+ return f"Error analyzing CSV file: {str(e)}"
107
+
108
+ @tool
109
+ def analyze_excel_file(file_path: str, query: str) -> str:
110
+ """
111
+ Analyze an Excel file using pandas and answer a question about it.
112
+ Args:
113
+ file_path (str): the path to the Excel file.
114
+ query (str): Question about the data
115
+ """
116
+ try:
117
+ # Read the Excel file
118
+ df = pd.read_excel(file_path)
119
+
120
+ # Run various analyses based on the query
121
+ result = (
122
+ f"Excel file loaded with {len(df)} rows and {len(df.columns)} columns.\n"
123
+ )
124
+ result += f"Columns: {', '.join(df.columns)}\n\n"
125
+
126
+ # Add summary statistics
127
+ result += "Summary statistics:\n"
128
+ result += str(df.describe())
129
+
130
+ return result
131
+
132
+ except Exception as e:
133
+ return f"Error analyzing Excel file: {str(e)}"
agent.py β†’ tools/image_processing.py RENAMED
@@ -1,401 +1,41 @@
1
  import os
2
- from dotenv import load_dotenv
3
- from typing import List, Dict, Any, Optional
4
- import tempfile
5
- import re
6
- import json
7
- import requests
8
- from urllib.parse import urlparse
9
- import pytesseract
10
- from PIL import Image, ImageDraw, ImageFont, ImageEnhance, ImageFilter
11
- import cmath
12
- import pandas as pd
13
  import uuid
 
14
  import numpy as np
15
- from code_interpreter import CodeInterpreter
16
-
17
- interpreter_instance = CodeInterpreter()
18
-
19
- from image_processing import *
20
-
21
- """Langraph"""
22
- from langgraph.graph import START, StateGraph, MessagesState
23
- from langchain_community.tools.tavily_search import TavilySearchResults
24
- from langchain_community.document_loaders import WikipediaLoader
25
- from langchain_community.document_loaders import ArxivLoader
26
- from langgraph.prebuilt import ToolNode, tools_condition
27
- from langchain_google_genai import ChatGoogleGenerativeAI
28
- from langchain_groq import ChatGroq
29
- from langchain_huggingface import (
30
- ChatHuggingFace,
31
- HuggingFaceEndpoint,
32
- HuggingFaceEmbeddings,
33
- )
34
- from langchain_community.vectorstores import SupabaseVectorStore
35
- from langchain_core.messages import SystemMessage, HumanMessage
36
  from langchain_core.tools import tool
37
- from langchain.tools.retriever import create_retriever_tool
38
- from supabase.client import Client, create_client
39
-
40
- load_dotenv()
41
-
42
- ### =============== BROWSER TOOLS =============== ###
43
-
44
-
45
- @tool
46
- def wiki_search(query: str) -> str:
47
- """Search Wikipedia for a query and return maximum 2 results.
48
-
49
- Args:
50
- query: The search query."""
51
- search_docs = WikipediaLoader(query=query, load_max_docs=2).load()
52
- formatted_search_docs = "\n\n---\n\n".join(
53
- [
54
- f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content}\n</Document>'
55
- for doc in search_docs
56
- ]
57
- )
58
- return {"wiki_results": formatted_search_docs}
59
-
60
-
61
- @tool
62
- def web_search(query: str) -> str:
63
- """Search Tavily for a query and return maximum 3 results.
64
-
65
- Args:
66
- query: The search query."""
67
- search_docs = TavilySearchResults(max_results=3).invoke(query=query)
68
- formatted_search_docs = "\n\n---\n\n".join(
69
- [
70
- f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content}\n</Document>'
71
- for doc in search_docs
72
- ]
73
- )
74
- return {"web_results": formatted_search_docs}
75
-
76
-
77
- @tool
78
- def arxiv_search(query: str) -> str:
79
- """Search Arxiv for a query and return maximum 3 result.
80
-
81
- Args:
82
- query: The search query."""
83
- search_docs = ArxivLoader(query=query, load_max_docs=3).load()
84
- formatted_search_docs = "\n\n---\n\n".join(
85
- [
86
- f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content[:1000]}\n</Document>'
87
- for doc in search_docs
88
- ]
89
- )
90
- return {"arxiv_results": formatted_search_docs}
91
-
92
-
93
- ### =============== CODE INTERPRETER TOOLS =============== ###
94
-
95
-
96
- @tool
97
- def execute_code_multilang(code: str, language: str = "python") -> str:
98
- """Execute code in multiple languages (Python, Bash, SQL, C, Java) and return results.
99
-
100
- Args:
101
- code (str): The source code to execute.
102
- language (str): The language of the code. Supported: "python", "bash", "sql", "c", "java".
103
-
104
- Returns:
105
- A string summarizing the execution results (stdout, stderr, errors, plots, dataframes if any).
106
- """
107
- supported_languages = ["python", "bash", "sql", "c", "java"]
108
- language = language.lower()
109
-
110
- if language not in supported_languages:
111
- return f"❌ Unsupported language: {language}. Supported languages are: {', '.join(supported_languages)}"
112
-
113
- result = interpreter_instance.execute_code(code, language=language)
114
-
115
- response = []
116
-
117
- if result["status"] == "success":
118
- response.append(f"βœ… Code executed successfully in **{language.upper()}**")
119
-
120
- if result.get("stdout"):
121
- response.append(
122
- "\n**Standard Output:**\n```\n" + result["stdout"].strip() + "\n```"
123
- )
124
-
125
- if result.get("stderr"):
126
- response.append(
127
- "\n**Standard Error (if any):**\n```\n"
128
- + result["stderr"].strip()
129
- + "\n```"
130
- )
131
-
132
- if result.get("result") is not None:
133
- response.append(
134
- "\n**Execution Result:**\n```\n"
135
- + str(result["result"]).strip()
136
- + "\n```"
137
- )
138
-
139
- if result.get("dataframes"):
140
- for df_info in result["dataframes"]:
141
- response.append(
142
- f"\n**DataFrame `{df_info['name']}` (Shape: {df_info['shape']})**"
143
- )
144
- df_preview = pd.DataFrame(df_info["head"])
145
- response.append("First 5 rows:\n```\n" + str(df_preview) + "\n```")
146
-
147
- if result.get("plots"):
148
- response.append(
149
- f"\n**Generated {len(result['plots'])} plot(s)** (Image data returned separately)"
150
- )
151
-
152
- else:
153
- response.append(f"❌ Code execution failed in **{language.upper()}**")
154
- if result.get("stderr"):
155
- response.append(
156
- "\n**Error Log:**\n```\n" + result["stderr"].strip() + "\n```"
157
- )
158
-
159
- return "\n".join(response)
160
-
161
-
162
- ### =============== MATHEMATICAL TOOLS =============== ###
163
-
164
-
165
- @tool
166
- def multiply(a: float, b: float) -> float:
167
- """
168
- Multiplies two numbers.
169
-
170
- Args:
171
- a (float): the first number
172
- b (float): the second number
173
- """
174
- return a * b
175
-
176
-
177
- @tool
178
- def add(a: float, b: float) -> float:
179
- """
180
- Adds two numbers.
181
-
182
- Args:
183
- a (float): the first number
184
- b (float): the second number
185
- """
186
- return a + b
187
-
188
-
189
- @tool
190
- def subtract(a: float, b: float) -> int:
191
- """
192
- Subtracts two numbers.
193
-
194
- Args:
195
- a (float): the first number
196
- b (float): the second number
197
- """
198
- return a - b
199
-
200
-
201
- @tool
202
- def divide(a: float, b: float) -> float:
203
- """
204
- Divides two numbers.
205
-
206
- Args:
207
- a (float): the first float number
208
- b (float): the second float number
209
- """
210
- if b == 0:
211
- raise ValueError("Cannot divided by zero.")
212
- return a / b
213
-
214
-
215
- @tool
216
- def modulus(a: int, b: int) -> int:
217
- """
218
- Get the modulus of two numbers.
219
-
220
- Args:
221
- a (int): the first number
222
- b (int): the second number
223
- """
224
- return a % b
225
-
226
-
227
- @tool
228
- def power(a: float, b: float) -> float:
229
- """
230
- Get the power of two numbers.
231
-
232
- Args:
233
- a (float): the first number
234
- b (float): the second number
235
- """
236
- return a**b
237
-
238
-
239
- @tool
240
- def square_root(a: float) -> float | complex:
241
- """
242
- Get the square root of a number.
243
-
244
- Args:
245
- a (float): the number to get the square root of
246
- """
247
- if a >= 0:
248
- return a**0.5
249
- return cmath.sqrt(a)
250
-
251
-
252
- ### =============== DOCUMENT PROCESSING TOOLS =============== ###
253
-
254
-
255
- @tool
256
- def save_and_read_file(content: str, filename: Optional[str] = None) -> str:
257
- """
258
- Save content to a file and return the path.
259
-
260
- Args:
261
- content (str): the content to save to the file
262
- filename (str, optional): the name of the file. If not provided, a random name file will be created.
263
- """
264
- temp_dir = tempfile.gettempdir()
265
- if filename is None:
266
- temp_file = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
267
- filepath = temp_file.name
268
- else:
269
- filepath = os.path.join(temp_dir, filename)
270
-
271
- with open(filepath, "w") as f:
272
- f.write(content)
273
-
274
- return f"File saved to {filepath}. You can read this file to process its contents."
275
-
276
-
277
- @tool
278
- def download_file_from_url(url: str, filename: Optional[str] = None) -> str:
279
- """
280
- Download a file from a URL and save it to a temporary location.
281
-
282
- Args:
283
- url (str): the URL of the file to download.
284
- filename (str, optional): the name of the file. If not provided, a random name file will be created.
285
- """
286
- try:
287
- # Parse URL to get filename if not provided
288
- if not filename:
289
- path = urlparse(url).path
290
- filename = os.path.basename(path)
291
- if not filename:
292
- filename = f"downloaded_{uuid.uuid4().hex[:8]}"
293
-
294
- # Create temporary file
295
- temp_dir = tempfile.gettempdir()
296
- filepath = os.path.join(temp_dir, filename)
297
-
298
- # Download the file
299
- response = requests.get(url, stream=True)
300
- response.raise_for_status()
301
-
302
- # Save the file
303
- with open(filepath, "wb") as f:
304
- for chunk in response.iter_content(chunk_size=8192):
305
- f.write(chunk)
306
-
307
- return f"File downloaded to {filepath}. You can read this file to process its contents."
308
- except Exception as e:
309
- return f"Error downloading file: {str(e)}"
310
-
311
-
312
- @tool
313
- def extract_text_from_image(image_path: str) -> str:
314
- """
315
- Extract text from an image using OCR library pytesseract (if available).
316
-
317
- Args:
318
- image_path (str): the path to the image file.
319
- """
320
- try:
321
- # Open the image
322
- image = Image.open(image_path)
323
-
324
- # Extract text from the image
325
- text = pytesseract.image_to_string(image)
326
-
327
- return f"Extracted text from image:\n\n{text}"
328
- except Exception as e:
329
- return f"Error extracting text from image: {str(e)}"
330
-
331
-
332
- @tool
333
- def analyze_csv_file(file_path: str, query: str) -> str:
334
- """
335
- Analyze a CSV file using pandas and answer a question about it.
336
-
337
- Args:
338
- file_path (str): the path to the CSV file.
339
- query (str): Question about the data
340
- """
341
- try:
342
- # Read the CSV file
343
- df = pd.read_csv(file_path)
344
-
345
- # Run various analyses based on the query
346
- result = f"CSV file loaded with {len(df)} rows and {len(df.columns)} columns.\n"
347
- result += f"Columns: {', '.join(df.columns)}\n\n"
348
-
349
- # Add summary statistics
350
- result += "Summary statistics:\n"
351
- result += str(df.describe())
352
-
353
- return result
354
-
355
- except Exception as e:
356
- return f"Error analyzing CSV file: {str(e)}"
357
-
358
-
359
- @tool
360
- def analyze_excel_file(file_path: str, query: str) -> str:
361
- """
362
- Analyze an Excel file using pandas and answer a question about it.
363
-
364
- Args:
365
- file_path (str): the path to the Excel file.
366
- query (str): Question about the data
367
- """
368
- try:
369
- # Read the Excel file
370
- df = pd.read_excel(file_path)
371
-
372
- # Run various analyses based on the query
373
- result = (
374
- f"Excel file loaded with {len(df)} rows and {len(df.columns)} columns.\n"
375
- )
376
- result += f"Columns: {', '.join(df.columns)}\n\n"
377
 
378
- # Add summary statistics
379
- result += "Summary statistics:\n"
380
- result += str(df.describe())
 
 
381
 
382
- return result
383
 
384
- except Exception as e:
385
- return f"Error analyzing Excel file: {str(e)}"
 
 
386
 
387
 
388
- ### ============== IMAGE PROCESSING AND GENERATION TOOLS =============== ###
 
 
 
 
 
 
389
 
390
 
391
  @tool
392
  def analyze_image(image_base64: str) -> Dict[str, Any]:
393
  """
394
  Analyze basic properties of an image (size, mode, color analysis, thumbnail preview).
395
-
396
  Args:
397
  image_base64 (str): Base64 encoded image string
398
-
399
  Returns:
400
  Dictionary with analysis result
401
  """
@@ -438,12 +78,10 @@ def transform_image(
438
  ) -> Dict[str, Any]:
439
  """
440
  Apply transformations: resize, rotate, crop, flip, brightness, contrast, blur, sharpen, grayscale.
441
-
442
  Args:
443
  image_base64 (str): Base64 encoded input image
444
  operation (str): Transformation operation
445
  params (Dict[str, Any], optional): Parameters for the operation
446
-
447
  Returns:
448
  Dictionary with transformed image (base64)
449
  """
@@ -501,12 +139,10 @@ def draw_on_image(
501
  ) -> Dict[str, Any]:
502
  """
503
  Draw shapes (rectangle, circle, line) or text onto an image.
504
-
505
  Args:
506
  image_base64 (str): Base64 encoded input image
507
  drawing_type (str): Drawing type
508
  params (Dict[str, Any]): Drawing parameters
509
-
510
  Returns:
511
  Dictionary with result image (base64)
512
  """
@@ -571,12 +207,10 @@ def generate_simple_image(
571
  ) -> Dict[str, Any]:
572
  """
573
  Generate a simple image (gradient, noise, pattern, chart).
574
-
575
  Args:
576
  image_type (str): Type of image
577
  width (int), height (int)
578
  params (Dict[str, Any], optional): Specific parameters
579
-
580
  Returns:
581
  Dictionary with generated image (base64)
582
  """
@@ -637,12 +271,10 @@ def combine_images(
637
  ) -> Dict[str, Any]:
638
  """
639
  Combine multiple images (collage, stack, blend).
640
-
641
  Args:
642
  images_base64 (List[str]): List of base64 images
643
  operation (str): Combination type
644
  params (Dict[str, Any], optional)
645
-
646
  Returns:
647
  Dictionary with combined image (base64)
648
  """
@@ -677,125 +309,3 @@ def combine_images(
677
 
678
  except Exception as e:
679
  return {"error": str(e)}
680
-
681
-
682
- # load the system prompt from the file
683
- with open("system_prompt.txt", "r", encoding="utf-8") as f:
684
- system_prompt = f.read()
685
- print(system_prompt)
686
-
687
- # System message
688
- sys_msg = SystemMessage(content=system_prompt)
689
-
690
- # build a retriever
691
- embeddings = HuggingFaceEmbeddings(
692
- model_name="sentence-transformers/all-mpnet-base-v2"
693
- ) # dim=768
694
- supabase: Client = create_client(
695
- os.environ.get("SUPABASE_URL"), os.environ.get("SUPABASE_SERVICE_ROLE_KEY")
696
- )
697
- vector_store = SupabaseVectorStore(
698
- client=supabase,
699
- embedding=embeddings,
700
- table_name="documents2",
701
- query_name="match_documents_2",
702
- )
703
- create_retriever_tool = create_retriever_tool(
704
- retriever=vector_store.as_retriever(),
705
- name="Question Search",
706
- description="A tool to retrieve similar questions from a vector store.",
707
- )
708
-
709
-
710
- tools = [
711
- web_search,
712
- wiki_search,
713
- arxiv_search,
714
- multiply,
715
- add,
716
- subtract,
717
- divide,
718
- modulus,
719
- power,
720
- square_root,
721
- save_and_read_file,
722
- download_file_from_url,
723
- extract_text_from_image,
724
- analyze_csv_file,
725
- analyze_excel_file,
726
- execute_code_multilang,
727
- analyze_image,
728
- transform_image,
729
- draw_on_image,
730
- generate_simple_image,
731
- combine_images,
732
- ]
733
-
734
-
735
- # Build graph function
736
- def build_graph(provider: str = "groq"):
737
- """Build the graph"""
738
- # Load environment variables from .env file
739
- if provider == "groq":
740
- # Groq https://console.groq.com/docs/models
741
- llm = ChatGroq(model="qwen-qwq-32b", temperature=0)
742
- elif provider == "huggingface":
743
- # TODO: Add huggingface endpoint
744
- llm = ChatHuggingFace(
745
- llm=HuggingFaceEndpoint(
746
- repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
747
- task="text-generation", # for chat‐style use β€œtext-generation”
748
- max_new_tokens=1024,
749
- do_sample=False,
750
- repetition_penalty=1.03,
751
- temperature=0,
752
- ),
753
- verbose=True,
754
- )
755
- else:
756
- raise ValueError("Invalid provider. Choose 'groq' or 'huggingface'.")
757
- # Bind tools to LLM
758
- llm_with_tools = llm.bind_tools(tools)
759
-
760
- # Node
761
- def assistant(state: MessagesState):
762
- """Assistant node"""
763
- return {"messages": [llm_with_tools.invoke(state["messages"])]}
764
-
765
- def retriever(state: MessagesState):
766
- """Retriever node"""
767
- similar_question = vector_store.similarity_search(state["messages"][0].content)
768
-
769
- if similar_question: # Check if the list is not empty
770
- example_msg = HumanMessage(
771
- content=f"Here I provide a similar question and answer for reference: \n\n{similar_question[0].page_content}",
772
- )
773
- return {"messages": [sys_msg] + state["messages"] + [example_msg]}
774
- else:
775
- # Handle the case when no similar questions are found
776
- return {"messages": [sys_msg] + state["messages"]}
777
-
778
- builder = StateGraph(MessagesState)
779
- builder.add_node("retriever", retriever)
780
- builder.add_node("assistant", assistant)
781
- builder.add_node("tools", ToolNode(tools))
782
- builder.add_edge(START, "retriever")
783
- builder.add_edge("retriever", "assistant")
784
- builder.add_conditional_edges(
785
- "assistant",
786
- tools_condition,
787
- )
788
- builder.add_edge("tools", "assistant")
789
-
790
- # Compile graph
791
- return builder.compile()
792
-
793
-
794
- # test
795
- if __name__ == "__main__":
796
- question = "When was a picture of St. Thomas Aquinas first added to the Wikipedia page on the Principle of double effect?"
797
- graph = build_graph(provider="groq")
798
- messages = [HumanMessage(content=question)]
799
- messages = graph.invoke({"messages": messages})
800
- for m in messages["messages"]:
801
- m.pretty_print()
 
1
  import os
2
+ import io
 
 
 
 
 
 
 
 
 
 
3
  import uuid
4
+ import base64
5
  import numpy as np
6
+ from PIL import Image
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
7
  from langchain_core.tools import tool
8
+ from typing import List, Dict, Any, Optional
9
+ from PIL import Image, ImageDraw, ImageFont, ImageEnhance, ImageFilter
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
10
 
11
+ # Helper functions for image processing
12
+ def encode_image(image_path: str) -> str:
13
+ """Convert an image file to base64 string."""
14
+ with open(image_path, "rb") as image_file:
15
+ return base64.b64encode(image_file.read()).decode("utf-8")
16
 
 
17
 
18
+ def decode_image(base64_string: str) -> Image.Image:
19
+ """Convert a base64 string to a PIL Image."""
20
+ image_data = base64.b64decode(base64_string)
21
+ return Image.open(io.BytesIO(image_data))
22
 
23
 
24
+ def save_image(image: Image.Image, directory: str = "image_outputs") -> str:
25
+ """Save a PIL Image to disk and return the path."""
26
+ os.makedirs(directory, exist_ok=True)
27
+ image_id = str(uuid.uuid4())
28
+ image_path = os.path.join(directory, f"{image_id}.png")
29
+ image.save(image_path)
30
+ return image_path
31
 
32
 
33
  @tool
34
  def analyze_image(image_base64: str) -> Dict[str, Any]:
35
  """
36
  Analyze basic properties of an image (size, mode, color analysis, thumbnail preview).
 
37
  Args:
38
  image_base64 (str): Base64 encoded image string
 
39
  Returns:
40
  Dictionary with analysis result
41
  """
 
78
  ) -> Dict[str, Any]:
79
  """
80
  Apply transformations: resize, rotate, crop, flip, brightness, contrast, blur, sharpen, grayscale.
 
81
  Args:
82
  image_base64 (str): Base64 encoded input image
83
  operation (str): Transformation operation
84
  params (Dict[str, Any], optional): Parameters for the operation
 
85
  Returns:
86
  Dictionary with transformed image (base64)
87
  """
 
139
  ) -> Dict[str, Any]:
140
  """
141
  Draw shapes (rectangle, circle, line) or text onto an image.
 
142
  Args:
143
  image_base64 (str): Base64 encoded input image
144
  drawing_type (str): Drawing type
145
  params (Dict[str, Any]): Drawing parameters
 
146
  Returns:
147
  Dictionary with result image (base64)
148
  """
 
207
  ) -> Dict[str, Any]:
208
  """
209
  Generate a simple image (gradient, noise, pattern, chart).
 
210
  Args:
211
  image_type (str): Type of image
212
  width (int), height (int)
213
  params (Dict[str, Any], optional): Specific parameters
 
214
  Returns:
215
  Dictionary with generated image (base64)
216
  """
 
271
  ) -> Dict[str, Any]:
272
  """
273
  Combine multiple images (collage, stack, blend).
 
274
  Args:
275
  images_base64 (List[str]): List of base64 images
276
  operation (str): Combination type
277
  params (Dict[str, Any], optional)
 
278
  Returns:
279
  Dictionary with combined image (base64)
280
  """
 
309
 
310
  except Exception as e:
311
  return {"error": str(e)}
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
tools/web_search.py ADDED
@@ -0,0 +1,50 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+ from supabase.client import Client, create_client
3
+ from langchain_core.tools import tool
4
+ from langchain_community.tools.tavily_search import TavilySearchResults
5
+ from langchain_community.document_loaders import WikipediaLoader
6
+ from langchain_community.document_loaders import ArxivLoader
7
+ from langchain_huggingface import HuggingFaceEmbeddings
8
+ from langchain_community.vectorstores import SupabaseVectorStore
9
+ from langchain.tools.retriever import create_retriever_tool
10
+
11
+
12
+ @tool
13
+ def wiki_search(query: str) -> str:
14
+ """Search Wikipedia for a query and return maximum 2 results.
15
+
16
+ Args:
17
+ query: The search query."""
18
+ search_docs = WikipediaLoader(query=query, load_max_docs=2).load()
19
+ formatted_search_docs = "\n\n---\n\n".join([f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content}\n</Document>' for doc in search_docs])
20
+ return {"wiki_results": formatted_search_docs}
21
+
22
+ @tool
23
+ def web_search(query: str) -> str:
24
+ """Search Tavily for a query and return maximum 3 results.
25
+
26
+ Args:
27
+ query: The search query."""
28
+ search_docs = TavilySearchResults(max_results=3).invoke(query=query)
29
+ formatted_search_docs = "\n\n---\n\n".join([f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content}\n</Document>' for doc in search_docs])
30
+ return {"web_results": formatted_search_docs}
31
+
32
+ @tool
33
+ def arxiv_search(query: str) -> str:
34
+ """Search Arxiv for a query and return maximum 3 result.
35
+
36
+ Args:
37
+ query: The search query."""
38
+ search_docs = ArxivLoader(query=query, load_max_docs=3).load()
39
+ formatted_search_docs = "\n\n---\n\n".join([f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content[:1000]}\n</Document>' for doc in search_docs])
40
+ return {"arxiv_results": formatted_search_docs}
41
+
42
+ @tool
43
+ def similar_question_search(question: str) -> str:
44
+ """Search the vector database for similar questions and return the first results.
45
+
46
+ Args:
47
+ question: the question human provided."""
48
+ matched_docs = vector_store.similarity_search(question, 3)
49
+ formatted_search_docs = "\n\n---\n\n".join([f'<Document source="{doc.metadata["source"]}" page="{doc.metadata.get("page", "")}"/>\n{doc.page_content[:1000]}\n</Document>' for doc in matched_docs])
50
+ return {"similar_questions": formatted_search_docs}