## LangGraph Agent Chat UI: Your Gateway to Agent Interaction

The Agent Chat UI is a React/Vite application that provides a clean, chat-based interface for interacting with your LangGraph agents. Here's why it's a valuable tool:
* **Easy Connection:** Connect to local or deployed LangGraph agents with a simple URL and graph ID.
* **Intuitive Chat:** Interact naturally with your agents, sending and receiving messages in a familiar chat format.
* **Visualize Agent Actions:** See tool calls and their results rendered directly in the UI.
* **Human-in-the-Loop Made Easy:** Seamlessly integrate human input using LangGraph's `interrupt` feature. The UI handles the presentation and interaction, allowing for approvals, edits, and responses.
* **Explore Execution Paths:** Use the UI to travel through time, inspect checkpoints, and fork conversations, all powered by LangGraph's state management.
* **Debug and Understand:** Inspect the full state of your LangGraph thread at any point.
## Get Started with the Agent Chat UI (and LangGraph!)

You have several options to start using the UI:

### 1. Try the Deployed Version (No Setup Required!)

* **Visit:** [agentchat.vercel.app](https://agentchat.vercel.app/)
* **Connect:** Enter your LangGraph deployment URL and graph ID (the `path` you set with `langserve.add_routes`). If using a production deployment, also include your LangSmith API key.
* **Chat!** You're ready to interact with your agent.
### 2. Run Locally (for Development and Customization)

* **Option A: Clone the Repository:**

  ```bash
  git clone https://github.com/langchain-ai/agent-chat-ui.git
  cd agent-chat-ui
  pnpm install # Or npm install/yarn install
  pnpm dev # Or npm run dev/yarn dev
  ```

* **Option B: Quickstart with `npx`:**

  ```bash
  npx create-agent-chat-app
  cd agent-chat-app
  pnpm install # Or npm install/yarn install
  pnpm dev # Or npm run dev/yarn dev
  ```

Open your browser to `http://localhost:5173` (or the port indicated in your terminal).
# LangGraph Agent Chat UI

This project provides a simple, intuitive user interface (UI) for interacting with LangGraph agents. It's built with React and Vite, offering a responsive chat-like experience for testing and demonstrating your LangGraph deployments. It's designed to work seamlessly with LangGraph's core concepts, including checkpoints, thread management, and human-in-the-loop capabilities.
## Features

* **Easy Connection:** Connect to both local and production LangGraph deployments by simply providing the deployment URL and graph ID (the path used when defining the graph).
* **Chat Interface:** Interact with your agents through a familiar chat interface, sending and receiving messages in real time. The UI manages the conversation thread, automatically using checkpoints for persistence.
* **Tool Call Rendering:** The UI automatically renders tool calls and their results, making it easy to visualize the agent's actions. This is compatible with LangGraph's [tool calling and function calling capabilities](https://python.langchain.com/docs/guides/tools/custom_tools).
* **Human-in-the-Loop Support:** Seamlessly integrate human intervention using LangGraph's `interrupt` function. The UI presents a dedicated interface for reviewing, editing, and responding to interrupt requests (e.g., for approval or modification of agent actions), following the standardized schema.
* **Thread History:** View and navigate through past chat threads, enabling you to review previous interactions. This leverages LangGraph's checkpointing for persistent conversation history.
* **Time Travel and Forking:** Leverage LangGraph's powerful state management features, including [checkpointing](https://python.langchain.com/docs/modules/agents/concepts#checkpointing) and thread manipulation. Run the graph from specific checkpoints, explore different execution paths, and edit previous messages.
* **State Inspection:** Examine the current state of your LangGraph thread for debugging and understanding the agent's internal workings. This allows you to inspect the full state object managed by LangGraph (see the sketch after this list).
* **Multiple Deployment Options:**
    * **Deployed Site:** Use the hosted version at [agentchat.vercel.app](https://agentchat.vercel.app/).
    * **Local Development:** Clone the repository and run it locally for development and customization.
    * **Quick Setup:** Use `npx create-agent-chat-app` for a fast, streamlined setup.
* **LangSmith API Key:** When using a production deployment, you must provide a LangSmith API key.
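If you're curious what the UI builds on, here is a hedged sketch of the LangGraph primitives behind time travel and state inspection. It assumes a compiled graph `app` (like the examples later in this document) run with a checkpointer; `example-thread` is an illustrative thread ID:

```python
# A sketch, not the UI's actual implementation: a compiled graph exposes
# the thread state and checkpoint history that the UI surfaces.
config = {"configurable": {"thread_id": "example-thread"}}

# Inspect the current state of the thread
snapshot = app.get_state(config)
print(snapshot.values)

# Walk back through every checkpoint (most recent first) -- the basis
# for time travel and forking
for checkpoint in app.get_state_history(config):
    print(checkpoint.config["configurable"]["checkpoint_id"], checkpoint.values)
```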
## Getting Started

There are three main ways to use the Agent Chat UI:

### 1. Using the Deployed Site (Easiest)

1. **Navigate:** Go to [agentchat.vercel.app](https://agentchat.vercel.app/).
2. **Enter Details:**
    * **Deployment URL:** The URL of your LangGraph deployment (e.g., `http://localhost:2024` for a local deployment using LangServe, or the URL provided by LangSmith for a production deployment).
    * **Assistant / Graph ID:** The path of the graph you want to interact with (e.g., `chat`, `email_agent`). This is defined when adding routes with `add_routes(..., path="/your_path")`.
    * **LangSmith API Key** (Production Deployments Only): If you are connecting to a deployment hosted on LangSmith, you will need to provide your LangSmith API key for authentication. *This is NOT required for local LangGraph servers.* The key is stored locally in your browser's storage.
3. **Click "Continue":** You'll be taken to the chat interface, ready to interact with your agent.
### 2. Local Development (Full Control)

1. **Clone the Repository:**

   ```bash
   git clone https://github.com/langchain-ai/agent-chat-ui.git
   cd agent-chat-ui
   ```

2. **Install Dependencies:**

   ```bash
   pnpm install # Or npm install, or yarn install
   ```

3. **Start the Development Server:**

   ```bash
   pnpm dev # Or npm run dev, or yarn dev
   ```

4. **Open in Browser:** The application will typically be available at `http://localhost:5173` (the port may vary; check your terminal output). Follow the instructions in "Using the Deployed Site" to connect to your LangGraph.
### 3. Quick Setup with `npx create-agent-chat-app`

This method creates a new project directory with the Agent Chat UI already set up.

1. **Run the Command:**

   ```bash
   npx create-agent-chat-app
   ```

2. **Follow Prompts:** You'll be prompted for a project name (default is `agent-chat-app`).
3. **Navigate to Project Directory:**

   ```bash
   cd agent-chat-app
   ```

4. **Install and Run:**

   ```bash
   pnpm install # Or npm install, or yarn install
   pnpm dev # Or npm run dev, or yarn dev
   ```

5. **Open in Browser:** The application will be available at `http://localhost:5173`. Follow the instructions in "Using the Deployed Site" to connect.
## LangGraph Setup (Prerequisites)

Before using the Agent Chat UI, you need a running LangGraph agent served via LangServe. Below are examples using both a simple agent and an agent with human-in-the-loop.

### Basic LangGraph Example (Python)
```python
# agent.py (Example LangGraph agent - Python)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor  # early-langgraph API
from langchain_core.tools import tool

# FastAPI and LangServe for serving the graph
from fastapi import FastAPI
from langserve import add_routes


@tool
def get_weather(city: str):
    """Gets the weather for a specified city."""
    if city.lower() == "new york":
        return "The weather in New York is nice today with a high of 75F."
    else:
        return "The weather for that city is not supported."


# Define the tools
tools = [get_weather]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0).bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}


agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: [],  # Start with an empty scratchpad
    }
    | prompt
    | model
)

# Compile the agent into an executable LangGraph
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and langserve
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount the graph at the /chat endpoint via LangServe
add_routes(
    fastapi_app,
    app,
    path="/chat",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages for your tools).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/chat`, and the graph ID to enter in the UI is `chat`. (A quick smoke test follows below.)
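Independent of the UI, you can sanity-check the running endpoint from Python using langserve's `RemoteRunnable` client. A minimal sketch, assuming the server above is running:

```python
from langchain_core.messages import HumanMessage
from langserve import RemoteRunnable

# Point the client at the path registered with add_routes
remote = RemoteRunnable("http://localhost:2024/chat")

# The graph expects a "messages" key, just like the UI sends
result = remote.invoke({"messages": [HumanMessage(content="What's the weather in New York?")]})
print(result)
```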
### LangGraph with Human-in-the-Loop Example (Python)

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor  # early-langgraph API
from langgraph.types import interrupt  # `interrupt` lives in langgraph.types
from langchain_core.tools import tool
from fastapi import FastAPI
from langserve import add_routes
@tool
def write_email(subject: str, body: str, to: str):
    """Drafts an email with a specified subject, body, and recipient."""
    print(f"Writing email with subject '{subject}' to '{to}'")  # Debugging
    return f"Draft email to {to} with subject {subject} sent."


tools = [write_email]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that drafts emails."),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0, model="gpt-4-turbo-preview").bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}
def handle_interrupt(message):
    """Handles human-in-the-loop interruptions before the email tool runs."""
    print("---INTERRUPT---")  # Debugging
    # The model's output arrives here directly; tool calls live on
    # `AIMessage.tool_calls`, a list of dicts with "name", "args", and "id".
    if isinstance(message, AIMessage) and message.tool_calls:
        for tool_call in message.tool_calls:
            if tool_call["name"] == "write_email":
                tool_args = tool_call["args"]
                # Construct the human interrupt request
                interrupt_data = {
                    "type": "interrupt",
                    "args": {
                        "type": "response",
                        "studio": {  # optional
                            "subject": tool_args["subject"],
                            "body": tool_args["body"],
                            "to": tool_args["to"],
                        },
                        "description": "Response Instruction: \n\n- **Response**: Any response submitted will be passed to an LLM to rewrite the email. It can rewrite the email body, subject, or recipient.\n\n- **Edit or Accept**: Editing/Accepting the email.",
                    },
                }
                # Pause execution and wait for the human. `interrupt` takes a
                # single value and returns whatever the human submits; it
                # requires the graph to run with a checkpointer.
                human_response = interrupt(interrupt_data)
                print(f"Human responded: {human_response}")  # Debugging
    return message
agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
    }
    | prompt
    | model
    | handle_interrupt  # Add the interrupt handler
)

# Compile the agent into an executable LangGraph
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and langserve
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount the graph at the /email_agent endpoint via LangServe
add_routes(
    fastapi_app,
    app,
    path="/email_agent",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages for your tools).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/email_agent`, and the graph ID to enter in the UI is `email_agent`.
## Key Concepts (LangGraph Integration)

* **Messages Key:** The Agent Chat UI expects your LangGraph state to include a `messages` key, which holds a list of `langchain_core.messages.BaseMessage` instances (e.g., `HumanMessage`, `AIMessage`, `SystemMessage`, `ToolMessage`). This is standard practice in LangChain and LangGraph for conversational agents.
* **Checkpoints:** The UI automatically utilizes LangGraph's checkpointing mechanism to save and restore the conversation state. This ensures that you can resume conversations and explore different branches without losing progress.
* **`add_routes` and `path`:** The `path` argument in `add_routes` (from `langserve`) determines the "Graph ID" that you'll enter in the UI. This is crucial for the UI to connect to the correct LangGraph endpoint.
* **Tool Calling:** If you use `bind_tools` with your LLM, tool calls and tool results will be rendered in the UI, with clear labels showing the function call and the response (see the sketch after this list).
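To make the expected shape concrete, here is a minimal sketch (with illustrative values) of a `messages` state after one tool-calling turn; this is what the UI renders as chat bubbles and tool-call cards:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# Illustrative state after one tool-calling turn
state = {
    "messages": [
        HumanMessage(content="What's the weather in New York?"),
        # The model requests a tool; the UI renders this as a tool call
        AIMessage(
            content="",
            tool_calls=[{"name": "get_weather", "args": {"city": "New York"}, "id": "call_1"}],
        ),
        # The tool's result, linked back to the call by tool_call_id
        ToolMessage(
            content="The weather in New York is nice today with a high of 75F.",
            tool_call_id="call_1",
        ),
        AIMessage(content="It's a nice day in New York, with a high of 75F."),
    ]
}
```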
## Human-in-the-Loop Details

The Agent Chat UI supports human-in-the-loop interactions using the standard LangGraph interrupt schema. Here's how it works:

1. **Interrupt Schema:** Your LangGraph agent should call the `interrupt` function (from `langgraph.types`) with a specific schema to pause execution and request human input (a sketch of a full payload follows this list). The schema should include:
    * `type`: `interrupt`.
    * `args`: A dictionary containing information about the interruption. This is where you provide the data the human needs to review (e.g., a draft email, a proposed action). It can contain:
        * `type`: One of `"response"`, `"accept"`, or `"ignore"`, indicating the kind of human interaction expected.
        * `args`: Further arguments specific to the interrupt type. For instance, if the interrupt type is `response`, the `args` could contain a message to show the user.
        * `studio`: *Optional.* The data for the human to review; in the email example above, the `subject`, `body`, and `to` fields of the draft.
        * `description`: *Optional.* A prompt shown to the user describing the fields they need to complete.
    * `name` (optional): A name for the interrupt.
    * `id` (optional): A unique identifier for the interrupt.
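Here is a sketch of a complete payload following this schema, reusing the email example (all field values are illustrative):

```python
interrupt_data = {
    "type": "interrupt",
    "args": {
        "type": "response",  # expected interaction: "response", "accept", or "ignore"
        "args": {"instruction": "Reply with rewrite instructions, or accept as-is."},
        "studio": {  # optional: the data the human reviews
            "subject": "Quarterly update",
            "body": "Hi team, here is the quarterly update...",
            "to": "team@example.com",
        },
        "description": "Edit or accept the draft email, or respond with rewrite instructions.",
    },
    "name": "email_review",  # optional
    "id": "interrupt-001",   # optional
}
```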
2. **UI Rendering:** When the Agent Chat UI detects an interrupt with this schema, it will automatically render a user-friendly interface for human interaction. This interface allows the user to:
    * **Inspect:** View the data provided in the `args` of the interrupt (e.g., the content of a draft email).
    * **Edit:** Modify the data (if the interrupt schema allows for it).
    * **Respond:** Provide a response (if the interrupt type is `"response"`).
    * **Accept/Reject:** Approve or reject the proposed action (if the interrupt type is `"accept"`).
    * **Ignore:** Ignore the interrupt (if the interrupt type is `"ignore"`).
3. **Resuming Execution:** After the human interacts with the interrupt, the UI sends the response back to the LangGraph via LangServe, and execution resumes; a sketch of what that resumption looks like programmatically is below.
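The UI performs this step for you. For completeness, here is a hedged sketch of resuming an interrupted run in recent langgraph versions, assuming a compiled graph `app` with a checkpointer and the thread ID of the interrupted conversation:

```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "example-thread"}}

# The value passed to `resume` becomes the return value of the paused
# `interrupt(...)` call inside the graph
result = app.invoke(Command(resume="Make the tone more formal."), config=config)
```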