# Graph Structure

## Overview

This document describes the LangGraph structure for the agent system.

## Graph Flow

```
START → Agent Node → [Tool Node] → Agent Node → END
             ↓
            END
```

The agent can either:

1. Respond directly and end
2. Call tools, process the results, and then respond

## Node Descriptions

### Agent Node

- **Purpose:** Process user input and generate responses
- **Input:** User message and conversation history
- **Output:** Agent response or tool calls
- **Logic:**
  1. Receive user input
  2. Load the agent definition from markdown
  3. Apply the system prompt and behavior guidelines
  4. Generate a response using the LLM with tool access
  5. Either return the response or request tool execution

### Tool Node

- **Purpose:** Execute tools requested by the agent
- **Input:** Tool calls from the agent
- **Output:** Tool execution results
- **Available Tools:**
  - `summarize_text`: Summarizes long text into concise summaries

## State Schema

The graph maintains the following state:

```python
{
    "messages": List[BaseMessage],  # Conversation history
    "agent_name": str,              # Name of the active agent
    "metadata": dict                # Additional context/metadata
}
```

## Conditional Edges

### should_continue

Routes the flow based on the agent's decision:

- **To "tools":** if the agent requests tool execution
- **To "end":** if the agent provides a final response

This enables the agent to:

1. Use tools when needed for tasks like summarization
2. Respond directly to simple queries
3. Chain multiple tool calls if necessary

## Checkpointing

The graph supports checkpointing for:

- Conversation persistence
- State recovery
- Debugging and replay

## Extension Points

To extend this graph:

1. Add new nodes for additional processing steps
2. Implement conditional edges for routing logic
3. Add tool nodes for external integrations
4. Create sub-graphs for complex workflows
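The state schema above can be declared as a `TypedDict`, which is how LangGraph state is commonly typed. A minimal, dependency-free sketch (plain dicts stand in for `BaseMessage` here so the example runs with the standard library alone):

```python
from typing import List, TypedDict


class AgentState(TypedDict):
    """Shared state passed between graph nodes."""

    messages: List[dict]  # Conversation history (dicts stand in for BaseMessage)
    agent_name: str       # Name of the active agent
    metadata: dict        # Additional context/metadata


# Example initial state for a new conversation
initial_state: AgentState = {
    "messages": [{"role": "user", "content": "Please summarize this article."}],
    "agent_name": "summarizer",
    "metadata": {},
}
```

In a real graph, each node receives this state, returns an update, and LangGraph merges the update back into the shared state.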
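The `should_continue` routing described above can be sketched as a plain function over the state. This is an illustrative stand-in (the dict-based message shape mirrors, but is not, LangChain's `AIMessage`): an agent message that carries tool calls routes to `"tools"`, anything else ends the run.

```python
def should_continue(state: dict) -> str:
    """Route after the agent node: run tools if the last message
    requested any, otherwise finish the graph run."""
    last_message = state["messages"][-1]
    # An agent message carries a non-empty "tool_calls" list when it
    # wants tools executed; a final answer carries only "content".
    if last_message.get("tool_calls"):
        return "tools"
    return "end"
```

LangGraph wires a function like this in via `add_conditional_edges`, mapping the returned labels to the tool node and to `END` respectively.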
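Putting the pieces together, the overall flow (START → Agent → [Tool] → Agent → END) can be simulated without LangGraph itself. Everything below is an invented stand-in for illustration: the stub agent requests summarization only for long inputs, and `summarize_text` just truncates rather than calling an LLM.

```python
def summarize_text(text: str, max_words: int = 10) -> str:
    """Stub tool: keep only the first max_words words."""
    words = text.split()
    summary = " ".join(words[:max_words])
    return summary + ("..." if len(words) > max_words else "")


def agent_node(state: dict) -> dict:
    """Stub agent: request the tool for long user input, else answer directly."""
    last = state["messages"][-1]
    if last["role"] == "user" and len(last["content"].split()) > 10:
        state["messages"].append({
            "role": "assistant",
            "tool_calls": [{"name": "summarize_text", "args": {"text": last["content"]}}],
        })
    else:
        state["messages"].append({"role": "assistant", "content": f"Answer: {last.get('content', '')}"})
    return state


def tool_node(state: dict) -> dict:
    """Execute each requested tool call and append its result to the history."""
    for call in state["messages"][-1]["tool_calls"]:
        if call["name"] == "summarize_text":
            state["messages"].append({"role": "tool", "content": summarize_text(**call["args"])})
    return state


def run_graph(state: dict) -> dict:
    """START → Agent → [Tool → Agent]* → END, using should_continue-style routing."""
    state = agent_node(state)
    while state["messages"][-1].get("tool_calls"):
        state = tool_node(state)
        state = agent_node(state)
    return state
```

Short queries take the direct Agent → END path; long ones take the tool detour before the agent produces its final answer, matching the two branches in the flow diagram.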