Dataset columns (Hugging Face dataset-viewer schema): url: string (62–63 chars) | repository_url: string (1 class) | labels_url: string (76–77) | comments_url: string (71–72) | events_url: string (69–70) | html_url: string (52–53) | id: int64 (2.4B–2.87B) | node_id: string (18) | number: int64 (953–3.56k) | title: string (5–300) | user: dict | labels: list (0–2) | state: string (2 classes) | locked: bool (2 classes) | assignee: dict | assignees: list (0–1) | milestone: null | comments: dict | created_at: timestamp[s] | updated_at: timestamp[s] | closed_at: timestamp[s] | author_association: string (3 classes) | sub_issues_summary: dict | active_lock_reason: null | draft: bool (0 classes) | pull_request: dict | body: string (101–43.1k) | closed_by: dict | reactions: dict | timeline_url: string (71–72) | performed_via_github_app: null | state_reason: string (4 classes) | is_pull_request: bool (1 class)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/events
|
https://github.com/langchain-ai/langgraph/issues/3557
| 2,873,182,151
|
I_kwDOKFU0lc6rQUfH
| 3,557
|
langgraph-checkpoint-postgres issue with version update
|
{
"login": "saurabhlalsaxena",
"id": 17300846,
"node_id": "MDQ6VXNlcjE3MzAwODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/17300846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabhlalsaxena",
"html_url": "https://github.com/saurabhlalsaxena",
"followers_url": "https://api.github.com/users/saurabhlalsaxena/followers",
"following_url": "https://api.github.com/users/saurabhlalsaxena/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabhlalsaxena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabhlalsaxena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabhlalsaxena/subscriptions",
"organizations_url": "https://api.github.com/users/saurabhlalsaxena/orgs",
"repos_url": "https://api.github.com/users/saurabhlalsaxena/repos",
"events_url": "https://api.github.com/users/saurabhlalsaxena/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabhlalsaxena/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-23T13:07:57
| 2025-02-23T14:34:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
# pip install -U langgraph-checkpoint-postgres
from psycopg.rows import dict_row
from psycopg_pool import ConnectionPool

from langgraph.checkpoint.postgres import PostgresSaver

DB_URI = ""  # Use an existing postgres checkpointer DB

connection_kwargs = {
    "autocommit": True,
    "prepare_threshold": 0,
    "row_factory": dict_row,
}
pool = ConnectionPool(
    conninfo=DB_URI,
    max_size=20,
    kwargs=connection_kwargs,
)
checkpointer = PostgresSaver(pool)

thread_id = "1"
config = {"configurable": {"thread_id": thread_id}}
checkpoint = checkpointer.get(config)
```
### Error Message and Stack Trace (if applicable)
```shell
/usr/local/lib/python3.11/dist-packages/langgraph/checkpoint/base/__init__.py in get(self, config)
234 Optional[Checkpoint]: The requested checkpoint, or None if not found.
235 """
--> 236 if value := self.get_tuple(config):
237 return value.checkpoint
238
/usr/local/lib/python3.11/dist-packages/langgraph/checkpoint/postgres/__init__.py in get_tuple(self, config)
218
219 with self._cursor() as cur:
--> 220 cur.execute(
221 self.SELECT_SQL + where,
222 args,
/usr/local/lib/python3.11/dist-packages/psycopg/cursor.py in execute(self, query, params, prepare, binary)
95 )
96 except e._NO_TRACEBACK as ex:
---> 97 raise ex.with_traceback(None)
98 return self
99
UndefinedColumn: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
```
### Description
I hit the following error with langgraph-checkpoint-postgres:
```error
UndefinedColumn: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
```
Downgrading langgraph-checkpoint-postgres to an older version (2.0.8) makes it work again.
**I.e. we are able to connect to the DB with langgraph-checkpoint-postgres == 2.0.8.**
So the regression comes from a recent upgrade of the library.
We have this code running in production, so this will become a major problem for us.
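Given the observation that 2.0.8 works, a temporary mitigation is to pin the package until the regression is fixed (the version number is taken from this report; whether newer releases simply need a schema migration step is left open here):

```
langgraph-checkpoint-postgres==2.0.8
```

Pinning this in `requirements.txt` (or the equivalent constraint in your dependency manager) keeps a production deployment from silently picking up the newer release.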
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.37
> langchain: 0.3.19
> langchain_community: 0.3.18
> langsmith: 0.3.8
> langchain_cli: 0.0.35
> langchain_openai: 0.3.6
> langchain_text_splitters: 0.3.6
> langgraph_sdk: 0.1.53
> langserve: 0.3.1
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> fastapi: 0.115.8
> gitpython: 3.1.44
> gritql: 0.1.5
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.35: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.37: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain-xai;: Installed. No version info available.
> langchain<1.0.0,>=0.3.19: Installed. No version info available.
> langserve[all]: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> openai<2.0.0,>=1.58.1: Installed. No version info available.
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pytest: 8.3.4
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> rich: 13.9.4
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> sse-starlette: 1.8.2
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken<1,>=0.7: Installed. No version info available.
> tomlkit: 0.13.2
> typer[all]: Installed. No version info available.
> typing-extensions>=4.7: Installed. No version info available.
> uvicorn: 0.34.0
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/events
|
https://github.com/langchain-ai/langgraph/issues/3556
| 2,872,863,878
|
I_kwDOKFU0lc6rPGyG
| 3,556
|
Tavily Search Error with async/await in LangGraph
|
{
"login": "kissycn",
"id": 89451328,
"node_id": "MDQ6VXNlcjg5NDUxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/89451328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kissycn",
"html_url": "https://github.com/kissycn",
"followers_url": "https://api.github.com/users/kissycn/followers",
"following_url": "https://api.github.com/users/kissycn/following{/other_user}",
"gists_url": "https://api.github.com/users/kissycn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kissycn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kissycn/subscriptions",
"organizations_url": "https://api.github.com/users/kissycn/orgs",
"repos_url": "https://api.github.com/users/kissycn/repos",
"events_url": "https://api.github.com/users/kissycn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kissycn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-23T02:19:28
| 2025-02-23T13:35:14
| 2025-02-23T13:35:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import os
import asyncio
from typing import Annotated
from typing_extensions import TypedDict
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage, AnyMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
# from langchain_community.tools import SearchAPIResults  # only needed for the commented-out tool below
from langchain_community.tools import TavilySearchResults
from langchain_openai import ChatOpenAI

os.environ['TAVILY_API_KEY'] = 'tvly-xxx'

class State(TypedDict):
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o-mini")
tool = TavilySearchResults(max_results=1, include_answer=False, include_raw_content=False)
# tool = SearchAPIResults(engine="google")
tools = [tool]

# System prompt
system_prompt = """You are a helpful assistant."""
# system_prompt = "You are Xiaoming. If a user asks who you are, answer: Xiaoming"

llm_with_tools = llm.bind_tools(tools)

async def chatbot(state: State):
    response = await llm_with_tools.ainvoke(state["messages"])
    return {"messages": [response]}

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
searchbot_graph = graph_builder.compile()

user_input = "what is langgraph"

# Create the initial message list containing the system prompt and the user input
initial_messages = [
    SystemMessage(content=system_prompt),
    HumanMessage(content=user_input)
]

async def main():
    # Use the message list that includes the system prompt
    async for chunk in searchbot_graph.astream(
        input={"messages": initial_messages},
        stream_mode="messages"
    ):
        print(chunk)
        print("\n\n")

if __name__ == "__main__":
    asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```shell
(ToolMessage(content="ClientConnectorCertificateError(ConnectionKey(host='api.tavily.com', port=443, is_ssl=True, ssl=True, proxy=None, proxy_auth=None, proxy_headers_hash=None), SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)'))", name='tavily_search_results_json', id='e76f58ec-4299-4adb-9488-c6e03804fe59', tool_call_id='call_Mqle439YOLwiLT3i9TzwQhy8', artifact={}), {'langgraph_step': 2, 'langgraph_node': 'tools', 'langgraph_triggers': ['branch:chatbot:tools_condition:tools'], 'langgraph_path': ('__pregel_pull', 'tools'), 'langgraph_checkpoint_ns': 'tools:2743d809-8b93-3c97-bdb2-11d04607977d'})
```
### Description
When using LangGraph with the async/await pattern and the Tavily Search tool, the code fails with an SSL certificate verification error. The same code works fine with the synchronous (non-async) implementation.
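The sync/async split is consistent with a trust-store difference: synchronous tools typically go through `requests`, which ships certifi's CA bundle, while the async path goes through `aiohttp`, which verifies peers with a default `ssl` context backed by the platform's OpenSSL trust store. A minimal stdlib sketch of the context such clients create (the certifi workaround in the comment is an assumption, not a confirmed fix for this report):

```python
import ssl

# aiohttp-style clients verify peers with a default context like this one,
# backed by the platform/OpenSSL trust store rather than certifi's bundle.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate verification is on
print(ctx.check_hostname)                    # hostname checking is on too

# Hypothetical workaround if the platform store lacks the issuer certificate:
# point OpenSSL at certifi's bundle before starting the process, e.g.
#   export SSL_CERT_FILE="$(python -m certifi)"
```

If the export above makes the async run succeed, the environment's trust store, not LangGraph, is the culprit.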
### System Info
Langgraph version: 0.2.69
langchain-community: 0.3.17
|
{
"login": "kissycn",
"id": 89451328,
"node_id": "MDQ6VXNlcjg5NDUxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/89451328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kissycn",
"html_url": "https://github.com/kissycn",
"followers_url": "https://api.github.com/users/kissycn/followers",
"following_url": "https://api.github.com/users/kissycn/following{/other_user}",
"gists_url": "https://api.github.com/users/kissycn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kissycn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kissycn/subscriptions",
"organizations_url": "https://api.github.com/users/kissycn/orgs",
"repos_url": "https://api.github.com/users/kissycn/repos",
"events_url": "https://api.github.com/users/kissycn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kissycn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/events
|
https://github.com/langchain-ai/langgraph/issues/3543
| 2,867,904,556
|
I_kwDOKFU0lc6q8MAs
| 3,543
|
graph restart bug
|
{
"login": "andyzhou1982",
"id": 51428368,
"node_id": "MDQ6VXNlcjUxNDI4MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/51428368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyzhou1982",
"html_url": "https://github.com/andyzhou1982",
"followers_url": "https://api.github.com/users/andyzhou1982/followers",
"following_url": "https://api.github.com/users/andyzhou1982/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzhou1982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyzhou1982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzhou1982/subscriptions",
"organizations_url": "https://api.github.com/users/andyzhou1982/orgs",
"repos_url": "https://api.github.com/users/andyzhou1982/repos",
"events_url": "https://api.github.com/users/andyzhou1982/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyzhou1982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-21T04:06:21
| 2025-02-21T14:48:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
def _init_graph(self) -> CompiledStateGraph:
    graph_builder = StateGraph(MessagesState)
    tool_node = ToolNode(tools=self.tools)
    graph_builder.add_node("reply", _reply)
    graph_builder.add_node("route", _route)
    graph_builder.add_node("complete", _complete)
    graph_builder.add_node("approve", _approve)
    graph_builder.add_node("tools", tool_node)
    graph_builder.add_edge(START, "reply")
    graph_builder.add_edge("reply", "route")
    graph_builder.add_edge("complete", "reply")
    graph_builder.add_edge("tools", "reply")
    graph = graph_builder.compile(checkpointer=self.checkpointer, interrupt_before=["complete", "approve"])
    print("==========PreBookConsultant Graph==========")
    graph.get_graph().print_ascii()
    return graph

def _route(self, state: MessagesState) -> Command[Literal["complete", "approve", "tools", "__end__"]]:
    ......
    if valid_messages:
        return Command(
            update={"messages": [AIMessage(content="\n".join(valid_messages), name=self.__class__.__name__)]},
            goto="complete"
        )
    elif inquiry_messages:
        self.cache_message = last_message
        return Command(
            update={"messages": [AIMessage(content="\n".join(inquiry_messages), name=self.__class__.__name__)]},
            goto="approve"
        )
    else:
        return Command(
            goto="tools"
        )

def _complete(self, state: MessagesState) -> MessagesState:
    return state

else:
    return Command(
        goto=END
    )
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I designed a graph and set `interrupt_before=["complete"]`. When the `_route` node finishes, the graph is interrupted and `snapshot.next` is indeed `complete`. I then call `graph.invoke(input=None, config=config)` to resume the graph, but instead of entering the `complete` node it finishes directly.
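As a toy model (plain Python, not LangGraph internals), this is the contract the report expects from `interrupt_before`: execution pauses before the named node, and resuming from the saved position runs that node next rather than ending the run. The function and node names here are illustrative assumptions:

```python
def run(nodes, state, interrupt_before, resume_from=0):
    """Run nodes in order; pause *before* any node named in interrupt_before.

    Returns (state, next_index): next_index is the paused-at node's position,
    or None when the run completed.  Toy model only, not LangGraph's code.
    """
    for i in range(resume_from, len(nodes)):
        name, fn = nodes[i]
        # Pause before an interrupt node, unless we are resuming exactly at it.
        if name in interrupt_before and i != resume_from:
            return state, i
        state = fn(state)
    return state, None

nodes = [
    ("route", lambda s: s + ["routed"]),
    ("complete", lambda s: s + ["completed"]),
]
state, nxt = run(nodes, [], {"complete"})
print(state, nxt)   # ['routed'] 1 -- paused before "complete"

# Resuming should *enter* "complete" (the step the report says is skipped):
state, nxt = run(nodes, state, {"complete"}, resume_from=nxt)
print(state, nxt)   # ['routed', 'completed'] None
```

In the reported behavior the second call behaves as if `resume_from` pointed past the interrupted node, ending the run without executing it.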
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26100
> Python Version: 3.11.11 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:34:19) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.11
> langchain_experimental: 0.3.4
> langchain_milvus: 0.1.8
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.53
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pymilvus: 2.5.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/events
|
https://github.com/langchain-ai/langgraph/issues/3538
| 2,867,286,088
|
I_kwDOKFU0lc6q51BI
| 3,538
|
ToolNode not working. TypeError: Tool search returned unexpected type: <class 'str'>
|
{
"login": "henryclw",
"id": 97804910,
"node_id": "U_kgDOBdRibg",
"avatar_url": "https://avatars.githubusercontent.com/u/97804910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/henryclw",
"html_url": "https://github.com/henryclw",
"followers_url": "https://api.github.com/users/henryclw/followers",
"following_url": "https://api.github.com/users/henryclw/following{/other_user}",
"gists_url": "https://api.github.com/users/henryclw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/henryclw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/henryclw/subscriptions",
"organizations_url": "https://api.github.com/users/henryclw/orgs",
"repos_url": "https://api.github.com/users/henryclw/repos",
"events_url": "https://api.github.com/users/henryclw/events{/privacy}",
"received_events_url": "https://api.github.com/users/henryclw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-20T21:15:02
| 2025-02-21T03:26:02
| 2025-02-21T03:14:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from typing import Literal

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define the tools for the agent to use
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder, but don't tell the LLM that...
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0).bind_tools(tools)

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state['messages']
    last_message = messages[-1]
    # If the LLM makes a tool call, then we route to the "tools" node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, we stop (reply to the user)
    return END

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state['messages']
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

# Define a new graph
workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", 'agent')

# Initialize memory to persist state between graph runs
checkpointer = MemorySaver()

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable.
# Note that we're (optionally) passing the memory when compiling the graph
app = workflow.compile(checkpointer=checkpointer)

# Use the agent
final_state = app.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "main.py", line 81, in <module>
final_state = app.invoke(
^^^^^^^^^^^
File "langgraph\pregel\__init__.py", line 2142, in invoke
for chunk in self.stream(
File "langgraph\pregel\__init__.py", line 1797, in stream
for _ in runner.tick(
File "langgraph\pregel\runner.py", line 230, in tick
run_with_retry(
File "langgraph\pregel\retry.py", line 40, in run_with_retry
return task.proc.invoke(task.input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\utils\runnable.py", line 546, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\utils\runnable.py", line 310, in invoke
ret = context.run(self.func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\prebuilt\tool_node.py", line 238, in _func
outputs = [
^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langchain_core\runnables\config.py", line 527, in _wrapped_fn
return contexts.pop().run(fn, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\prebuilt\tool_node.py", line 347, in _run_one
raise TypeError(
TypeError: Tool search returned unexpected type: <class 'str'>
During task with name 'tools' and id '680e4f05-4aca-9c1d-d026-ac815f3b08c9'
```
### Description
I was running the **Low-level implementation** example from the README.md at https://github.com/langchain-ai/langgraph
I copy-pasted the example code from the README, changing only the LLM to a local one; the local LLM itself is confirmed to be working fine.
The ToolNode should work; instead, it throws a `TypeError: Tool search returned unexpected type: <class 'str'>`.
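Stripped of the `@tool` decorator and any LangChain machinery, the tool body from the example is a plain function returning `str`, which suggests the failure is in how ToolNode interprets the return value (one possibility is a version mismatch between langgraph and langchain-core), not in the tool logic itself:

```python
# The tool body from the example above, as plain Python (no @tool decorator):
def search(query: str) -> str:
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

result = search("what is the weather in sf")
print(type(result).__name__)  # str -- a perfectly ordinary tool return type
print(result)                 # It's 60 degrees and foggy.
```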
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.35
> langchain: 0.3.19
> langchain_community: 0.3.17
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_aws: 0.2.12
> langchain_fireworks: 0.2.7
> langchain_google_genai: 2.0.8
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.6
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> anthropic: 0.45.2
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> boto3: 1.36.19
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> defusedxml: 0.7.1
> filetype: 1.2.0
> fireworks-ai: 0.15.12
> google-generativeai: 0.8.4
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.35: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain-xai;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy: 1.26.4
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> ollama: 0.4.7
> openai: 1.62.0
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken: 0.8.0
> typing-extensions>=4.7: Installed. No version info available.
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/events
|
https://github.com/langchain-ai/langgraph/issues/3537
| 2,867,234,202
|
I_kwDOKFU0lc6q5oWa
| 3,537
|
bad repo - On the Edge of AI Adaptability
|
{
"login": "jgwill",
"id": 23141173,
"node_id": "MDQ6VXNlcjIzMTQxMTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/23141173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jgwill",
"html_url": "https://github.com/jgwill",
"followers_url": "https://api.github.com/users/jgwill/followers",
"following_url": "https://api.github.com/users/jgwill/following{/other_user}",
"gists_url": "https://api.github.com/users/jgwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jgwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jgwill/subscriptions",
"organizations_url": "https://api.github.com/users/jgwill/orgs",
"repos_url": "https://api.github.com/users/jgwill/repos",
"events_url": "https://api.github.com/users/jgwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/jgwill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-20T20:47:22
| 2025-02-20T21:00:06
| 2025-02-20T20:49:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
oops bad repo
----
Though I feel like keeping a part of that mistakenly posted issue here; love all your work, guys!
Title:
On the Edge of AI Adaptability: A Thought Experiment
Body:
Some of you may have seen an earlier issue here that has since been removed. While the details are no longer available, the core idea remains: How can AI move beyond rigid execution and into the realm of adaptive intelligence?
At its heart, the discussion was about AI workflows that don’t just execute but evolve, responding dynamically to uncertainty, pauses, and improvisational shifts. Instead of thinking in terms of predefined logic, we started exploring a more organic approach—one where AI processes are not just structured but also aware of their own narrative flow.
This isn’t the last time we’ll be thinking along these lines. If you’re intrigued, if this sparks something, let’s keep the conversation going. What does it mean for AI to be not just functional, but emergent?
|
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3537/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/events
|
https://github.com/langchain-ai/langgraph/issues/3535
| 2,867,083,236
|
I_kwDOKFU0lc6q5Dfk
| 3,535
|
[Urgent!] Docker build doesn't create new images
|
{
"login": "nikita-wayhq",
"id": 196185042,
"node_id": "U_kgDOC7GL0g",
"avatar_url": "https://avatars.githubusercontent.com/u/196185042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita-wayhq",
"html_url": "https://github.com/nikita-wayhq",
"followers_url": "https://api.github.com/users/nikita-wayhq/followers",
"following_url": "https://api.github.com/users/nikita-wayhq/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-wayhq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita-wayhq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-wayhq/subscriptions",
"organizations_url": "https://api.github.com/users/nikita-wayhq/orgs",
"repos_url": "https://api.github.com/users/nikita-wayhq/repos",
"events_url": "https://api.github.com/users/nikita-wayhq/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita-wayhq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-20T19:29:10
| 2025-02-20T21:41:07
| 2025-02-20T21:41:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
Again, we have problems with Docker build - our images don't launch.
We have a go-live tomorrow morning and cannot deploy our main component: the agent.
This is very frustrating. And it already happened to us the second time within the last week:
https://github.com/langchain-ai/langgraph/issues/3382
```
### Error Message and Stack Trace (if applicable)
```shell
Defaulted container "support-agent" out of: support-agent, redis, cloud-sql-proxy (init)
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve
await self._serve(sockets)
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve
config.load()
File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/api/langgraph_api/server.py", line 31, in <module>
ImportError: cannot import name 'configure_loopback_transports' from 'langgraph_sdk.client' (/usr/local/lib/python3.11/site-packages/langgraph_sdk/client.py)
```
### Description
We have already removed langgraph-api and langgraph-checkpoint, as support suggested on the previous two occasions we hit this issue.
### System Info
```
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:57 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T6041
> Python Version: 3.11.5 (main, Feb 5 2025, 16:10:32) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.2.11
> langchain_anthropic: 0.3.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> anthropic: 0.45.2
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.27
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
```
|
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/events
|
https://github.com/langchain-ai/langgraph/issues/3532
| 2,866,915,274
|
I_kwDOKFU0lc6q4afK
| 3,532
|
Adding conditional edge from the entry point node does not seem to pass the Overall state to the validation function
|
{
"login": "sand-heap",
"id": 156697292,
"node_id": "U_kgDOCVcCzA",
"avatar_url": "https://avatars.githubusercontent.com/u/156697292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sand-heap",
"html_url": "https://github.com/sand-heap",
"followers_url": "https://api.github.com/users/sand-heap/followers",
"following_url": "https://api.github.com/users/sand-heap/following{/other_user}",
"gists_url": "https://api.github.com/users/sand-heap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sand-heap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sand-heap/subscriptions",
"organizations_url": "https://api.github.com/users/sand-heap/orgs",
"repos_url": "https://api.github.com/users/sand-heap/repos",
"events_url": "https://api.github.com/users/sand-heap/events{/privacy}",
"received_events_url": "https://api.github.com/users/sand-heap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292607,
"node_id": "LA_kwDOKFU0lc8AAAABW2Sv_w",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-20T18:06:22
| 2025-02-21T14:49:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3387
<div type='discussions-op-text'>
<sup>Originally posted by **sand-heap** February 11, 2025</sup>
Pretty much what the title says. I can explain further with code
```
...
@dataclass
class InputState:
    some_field: int

class OverAllState(MessagesState):
    some_field: int

def validate_transition(state: OverAllState) -> str:
    if state.get("some_field") == 1:
        return END
    else:
        return "other_node"

def some_node(input: InputState) -> OverAllState:
    return {"some_field": input.some_field + 1}

workflow = StateGraph(OverAllState, input=InputState)
workflow.set_entry_point("node")
workflow.add_node("node", some_node)
workflow.add_node("other_node", other_node)
workflow.add_conditional_edges("node", validate_transition, ...)
...
# add end
...
```
I would expect `validate_transition` to receive the `OverAllState`, but for some reason it gets the `InputState`?!
However, if I add a `mock_node` between `node` and `other_node`, make the edge from `mock_node` conditional instead, and have a sequential flow from `node` -> `mock_node`, the behaviour is as expected.
can you please help? ++ @vbarda @hinthornw </div>
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/events
|
https://github.com/langchain-ai/langgraph/issues/3515
| 2,864,323,755
|
I_kwDOKFU0lc6quhyr
| 3,515
|
Lang Graph Nodes Experience High Latency Unrelated to LLM Execution
|
{
"login": "joshuamoreno1",
"id": 1558591,
"node_id": "MDQ6VXNlcjE1NTg1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1558591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshuamoreno1",
"html_url": "https://github.com/joshuamoreno1",
"followers_url": "https://api.github.com/users/joshuamoreno1/followers",
"following_url": "https://api.github.com/users/joshuamoreno1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshuamoreno1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshuamoreno1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshuamoreno1/subscriptions",
"organizations_url": "https://api.github.com/users/joshuamoreno1/orgs",
"repos_url": "https://api.github.com/users/joshuamoreno1/repos",
"events_url": "https://api.github.com/users/joshuamoreno1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshuamoreno1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-19T20:20:59
| 2025-02-19T21:33:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
We use langchain default observability.
```
### Error Message and Stack Trace (if applicable)
```shell
We don't have an error, the traces are our only evidence.
```
### Description
Hi there,
We're experiencing significant latency in our agent. When analyzing the trace in LangSmith, we notice that LLM operations execute relatively quickly, but the node exhibits unexpectedly high latency with no clear cause. We suspect that the issue might be related to the LangSmith integration, but we have no way to confirm it.
As shown in the attached image, there is a delay of over one second where seemingly nothing is happening.
Any insights or suggestions would be greatly appreciated.
<img width="390" alt="Image" src="https://github.com/user-attachments/assets/e6446d3e-f882-4aa5-b304-7162a96a04eb" />
### System Info
System Information
------------------
> OS: Ubuntu
> OS Version: Ubuntu 22.04.5 LTS jammy
> Python Version: 3.12.8 (main, Dec 3 2024, 18:42:41) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_anthropic: 0.3.1
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.5
> langserve: 0.3.1
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sse-starlette: 2.2.1
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/events
|
https://github.com/langchain-ai/langgraph/issues/3483
| 2,861,532,958
|
I_kwDOKFU0lc6qj4ce
| 3,483
|
Hidden State not Appearing in Conditional Edge
|
{
"login": "GiulioCMSanto",
"id": 32403782,
"node_id": "MDQ6VXNlcjMyNDAzNzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/32403782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GiulioCMSanto",
"html_url": "https://github.com/GiulioCMSanto",
"followers_url": "https://api.github.com/users/GiulioCMSanto/followers",
"following_url": "https://api.github.com/users/GiulioCMSanto/following{/other_user}",
"gists_url": "https://api.github.com/users/GiulioCMSanto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GiulioCMSanto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GiulioCMSanto/subscriptions",
"organizations_url": "https://api.github.com/users/GiulioCMSanto/orgs",
"repos_url": "https://api.github.com/users/GiulioCMSanto/repos",
"events_url": "https://api.github.com/users/GiulioCMSanto/events{/privacy}",
"received_events_url": "https://api.github.com/users/GiulioCMSanto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-18T20:06:57
| 2025-02-18T20:22:35
| 2025-02-18T20:22:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3348
<div type='discussions-op-text'>
<sup>Originally posted by **GiulioCMSanto** February 7, 2025</sup>
Hello!! Before opening an Issue, I would like to start a discussion on something that looks like a bug! 😄
**The Issue**: when using a `conditional_edge`, the hidden state is ignored.
### Reproducing the Error
**Imports**
```
import uuid
from pydantic import BaseModel
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
```
**States**
```
class InputType(BaseModel):
input: str
class OutputType(BaseModel):
output: str
```
**Nodes**
```
def node_1(state: InputType) -> MessagesState :
print(f"Node 1 input: {state}")
return {"messages": ["out_node_1"]}
def node_2(state: MessagesState) -> OutputType:
print(f"Node 2 input: {state}")
return OutputType(output="out_node_2")
def router(state: MessagesState) -> str:
print(f"Router input: {state}")
return "node_2"
```
**Notice:** **MessagesState** is a **Hidden State** here.
**The Graph**
```
builder = StateGraph(input=InputType, output=OutputType)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)
builder.add_edge(START, "node_1")
builder.add_conditional_edges(
"node_1",
router,
["node_2"]
)
builder.add_edge("node_2", END)
graph = builder.compile(checkpointer=MemorySaver())
config = {
"configurable": {
"thread_id": uuid.uuid4(),
}
}
graph.invoke(
InputType(input="Hi"),
config,
stream_mode="values"
)
```
#### Outcomes
```
Node 1 input: input='Hi'
Router input: input='Hi'
Node 2 input: {'messages': [HumanMessage(content='out_node_1', additional_kwargs={}, response_metadata={}, id='7c1b5467-d55a-4af7-9888-0bff210ca726')]}
```
### Comments
Notice that `Router input: input='Hi'` and not `{'messages': [HumanMessage(content='out_node_1', additional_kwargs={}, response_metadata={}, id='7c1b5467-d55a-4af7-9888-0bff210ca726')]}`. Node 2 actually has access to the Hidden State, but **the Conditional Edge does not**.
### My Setup
```
Python Version: 3.9
langgraph==0.2.61
pydantic==2.10.4
```
I have also tried with
```
Python Version: 3.12
langgraph==0.2.70
pydantic==2.10.6
```
### Similar Discussion
I found a somewhat similar discussion here: https://github.com/langchain-ai/langgraph/discussions/2197</div>
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/events
|
https://github.com/langchain-ai/langgraph/issues/3464
| 2,856,294,820
|
I_kwDOKFU0lc6qP5mk
| 3,464
|
[Urgent] LangGraph docker image fails to start
|
{
"login": "nikita-wayhq",
"id": 196185042,
"node_id": "U_kgDOC7GL0g",
"avatar_url": "https://avatars.githubusercontent.com/u/196185042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita-wayhq",
"html_url": "https://github.com/nikita-wayhq",
"followers_url": "https://api.github.com/users/nikita-wayhq/followers",
"following_url": "https://api.github.com/users/nikita-wayhq/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-wayhq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita-wayhq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-wayhq/subscriptions",
"organizations_url": "https://api.github.com/users/nikita-wayhq/orgs",
"repos_url": "https://api.github.com/users/nikita-wayhq/repos",
"events_url": "https://api.github.com/users/nikita-wayhq/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita-wayhq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-16T19:44:34
| 2025-02-18T09:48:26
| 2025-02-16T20:42:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
Pretty much any code. The error is on your level.
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1082, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve
await self._serve(sockets)
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve
config.load()
File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/api/langgraph_api/server.py", line 16, in <module>
File "/api/langgraph_api/api/__init__.py", line 8, in <module>
File "/api/langgraph_api/api/meta.py", line 6, in <module>
File "/api/langgraph_api/queue.py", line 20, in <module>
'Exception raised by Queue.get(block=0)/get_nowait().'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/api/langgraph_api/stream.py", line 35, in <module>
File "/usr/local/lib/python3.11/site-packages/langgraph_storage/store.py", line 8, in <module>
from langgraph.store.postgres.aio import AsyncPostgresStore, PostgresIndexConfig
File "/usr/local/lib/python3.11/site-packages/langgraph/store/postgres/__init__.py", line 1, in <module>
from langgraph.store.postgres.aio import AsyncPostgresStore
File "/usr/local/lib/python3.11/site-packages/langgraph/store/postgres/aio.py", line 12, in <module>
from langgraph.checkpoint.postgres import _ainternal
File "/usr/local/lib/python3.11/site-packages/langgraph/checkpoint/postgres/__init__.py", line 12, in <module>
from langgraph.checkpoint.base import (
ImportError: cannot import name 'get_checkpoint_metadata' from 'langgraph.checkpoint.base' (/usr/local/lib/python3.11/site-packages/langgraph/checkpoint/base/__init__.py)
```
### Description
After re-building the image today, I get this error.
We need to go live tomorrow morning and this blocks everything.
How can we build Docker images without relying on a black-box dependency that has broken our production **several** times over the last few days?
Is there a way to build without this dependency?
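One mitigation — a sketch, not an official recommendation — is to freeze every langgraph-related package to the versions of a last-known-good build, so rebuilds stop pulling moving dependencies. The version numbers below are placeholders to replace with `pip freeze` output from a working image:

```
# regenerate from a known-good environment: pip freeze | grep -i langgraph
langgraph==<known-good-version>
langgraph-checkpoint==<known-good-version>
langgraph-checkpoint-postgres==<known-good-version>
```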
### System Info
Note: the command output is done on a mac machine, while the error is inside docker.
```
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:57 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T6041
> Python Version: 3.11.5 (main, Feb 5 2025, 16:10:32) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.2.11
> langchain_anthropic: 0.3.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> anthropic: 0.45.2
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.27
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
```
|
{
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/events
|
https://github.com/langchain-ai/langgraph/issues/3460
| 2,855,783,185
|
I_kwDOKFU0lc6qN8sR
| 3,460
|
Reducers for other state members apart from messages
|
{
"login": "binarybeastt",
"id": 53526506,
"node_id": "MDQ6VXNlcjUzNTI2NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/53526506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binarybeastt",
"html_url": "https://github.com/binarybeastt",
"followers_url": "https://api.github.com/users/binarybeastt/followers",
"following_url": "https://api.github.com/users/binarybeastt/following{/other_user}",
"gists_url": "https://api.github.com/users/binarybeastt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binarybeastt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binarybeastt/subscriptions",
"organizations_url": "https://api.github.com/users/binarybeastt/orgs",
"repos_url": "https://api.github.com/users/binarybeastt/repos",
"events_url": "https://api.github.com/users/binarybeastt/events{/privacy}",
"received_events_url": "https://api.github.com/users/binarybeastt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-16T00:13:29
| 2025-02-17T18:54:54
| 2025-02-17T18:54:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3459
<div type='discussions-op-text'>
<sup>Originally posted by **binarybeastt** February 16, 2025</sup>
`from langgraph.graph.message import add_messages` helps append messages to the message list. How can we replicate this behavior for other types that are not messages, such as documents, files, or images?</div>
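The `add_messages` behavior asked about here generalizes: LangGraph reads a reducer from the `Annotated` metadata of a state channel, so any channel (documents, files, images) can use `operator.add` or a custom function. Below is a minimal pure-Python sketch of the pattern — the `State` schema mirrors what would be passed to `StateGraph`, and `merge_unique` is a hypothetical custom reducer, not a LangGraph built-in:

```python
import operator
from typing import Annotated, TypedDict


class State(TypedDict):
    # any channel can declare a reducer, not just messages:
    # operator.add concatenates the old and new lists on every update
    documents: Annotated[list, operator.add]


def merge_unique(existing: list, new: list) -> list:
    """Custom reducer: append new items while skipping duplicates."""
    return existing + [d for d in new if d not in existing]


# simulate two successive node updates being reduced, as LangGraph would do
state = {"documents": ["a.pdf"]}
update = {"documents": ["b.pdf", "a.pdf"]}
state["documents"] = merge_unique(state["documents"], update["documents"])
print(state["documents"])  # → ['a.pdf', 'b.pdf']
```

In a real graph, declaring `documents: Annotated[list, merge_unique]` in the state schema makes LangGraph apply the reducer automatically whenever a node returns a `documents` update.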
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/events
|
https://github.com/langchain-ai/langgraph/issues/3457
| 2,855,748,193
|
I_kwDOKFU0lc6qN0Jh
| 3,457
|
exec format error during langgraph docker deployment
|
{
"login": "mingxuan-he",
"id": 60416206,
"node_id": "MDQ6VXNlcjYwNDE2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/60416206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingxuan-he",
"html_url": "https://github.com/mingxuan-he",
"followers_url": "https://api.github.com/users/mingxuan-he/followers",
"following_url": "https://api.github.com/users/mingxuan-he/following{/other_user}",
"gists_url": "https://api.github.com/users/mingxuan-he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingxuan-he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingxuan-he/subscriptions",
"organizations_url": "https://api.github.com/users/mingxuan-he/orgs",
"repos_url": "https://api.github.com/users/mingxuan-he/repos",
"events_url": "https://api.github.com/users/mingxuan-he/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingxuan-he/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-15T22:28:15
| 2025-02-18T05:08:01
| 2025-02-18T05:08:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
docker pull $DOCKERHUB_USERNAME/my-langgraph-server:latest
docker run -it --rm $DOCKERHUB_USERNAME/my-langgraph-server:latest
```
### Error Message and Stack Trace (if applicable)
```shell
exec /storage/entrypoint.sh: exec format error
exec /storage/entrypoint.sh: exec format error
exec /storage/entrypoint.sh: exec format error
```
### Description
I'm using langgraph-cli in GitHub Actions to build the Docker image automatically with `langgraph build`. I'm following the instructions (and using the docker-compose file) [from the docs here](https://langchain-ai.github.io/langgraph/how-tos/deploy-self-hosted/#using-docker-compose). Local testing with Docker Desktop works. However, the built container keeps emitting `exec format error` when deployed on my Ubuntu server.
Debugging with Sonnet suggests the architectures are incompatible. I'm wondering if there is a way to pass build arguments to `langgraph build`. Any help appreciated!
.github/workflows/main.yml:
```
name: Build Docker Image
on:
push:
branches: [main]
jobs:
build-push:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Build Docker image with Langgraph
run: |
pip install -U langgraph-cli
cd yuichan
langgraph build -t my-langgraph-server
- name: Login to Docker Registry
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push Docker image
run: |
docker tag my-langgraph-server ${{ secrets.DOCKERHUB_USERNAME }}/my-langgraph-server:latest
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/my-langgraph-server:latest
```
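A likely cause is that the GitHub-hosted `ubuntu-latest` runner builds an amd64 image while the Oracle VPS is arm64, so the kernel rejects the entrypoint with `exec format error`. One hedged workaround is to enable emulated arm64 builds on the runner before invoking `langgraph build` — a sketch of extra workflow steps (the action versions are assumptions to verify):

```yaml
# extra steps to place before "Build Docker image with Langgraph"
- name: Set up QEMU (emulate arm64 on the amd64 runner)
  uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
```

Whether `langgraph build` picks up the buildx builder, or accepts a target-platform flag (recent `langgraph-cli` releases document a `--platform` option, e.g. `langgraph build --platform linux/arm64 -t my-langgraph-server`), should be checked against the CLI docs for the installed version.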
langgraph.json:
```
{
"dependencies": ["."],
"graphs":{
"yui": "./yuichan.py:graph"
},
"env": ".env",
"python_version": "3.12"
}
```
### System Info
- Ubuntu 24.04
- arm64 VPS on Oracle Cloud
- Docker version 27.4.1
requirements.txt:
langchain-core
langchain[openai,anthropic,groq,google-genai]
langchain-community
langchain-experimental
langchain-google-community[gmail]
langchainhub
langsmith
langgraph
langgraph-sdk
langgraph-checkpoint-postgres
pydantic
|
{
"login": "mingxuan-he",
"id": 60416206,
"node_id": "MDQ6VXNlcjYwNDE2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/60416206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingxuan-he",
"html_url": "https://github.com/mingxuan-he",
"followers_url": "https://api.github.com/users/mingxuan-he/followers",
"following_url": "https://api.github.com/users/mingxuan-he/following{/other_user}",
"gists_url": "https://api.github.com/users/mingxuan-he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingxuan-he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingxuan-he/subscriptions",
"organizations_url": "https://api.github.com/users/mingxuan-he/orgs",
"repos_url": "https://api.github.com/users/mingxuan-he/repos",
"events_url": "https://api.github.com/users/mingxuan-he/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingxuan-he/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/events
|
https://github.com/langchain-ai/langgraph/issues/3452
| 2,855,329,344
|
I_kwDOKFU0lc6qMN5A
| 3,452
|
WeChat discussion group
|
{
"login": "RobinYang11",
"id": 20513480,
"node_id": "MDQ6VXNlcjIwNTEzNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/20513480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobinYang11",
"html_url": "https://github.com/RobinYang11",
"followers_url": "https://api.github.com/users/RobinYang11/followers",
"following_url": "https://api.github.com/users/RobinYang11/following{/other_user}",
"gists_url": "https://api.github.com/users/RobinYang11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobinYang11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobinYang11/subscriptions",
"organizations_url": "https://api.github.com/users/RobinYang11/orgs",
"repos_url": "https://api.github.com/users/RobinYang11/repos",
"events_url": "https://api.github.com/users/RobinYang11/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobinYang11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-15T08:23:52
| 2025-02-15T21:08:43
| 2025-02-15T21:08:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
<img width="718" alt="Image" src="https://github.com/user-attachments/assets/9a045798-ea10-4c20-952c-c42b5eae070d" />
### Idea or request for content:
_No response_
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/events
|
https://github.com/langchain-ai/langgraph/issues/3442
| 2,854,167,894
|
I_kwDOKFU0lc6qHyVW
| 3,442
|
How to use 'artifact' attribute in ToolMessage to output both text and image?
|
{
"login": "Cybertyann",
"id": 17784405,
"node_id": "MDQ6VXNlcjE3Nzg0NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/17784405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cybertyann",
"html_url": "https://github.com/Cybertyann",
"followers_url": "https://api.github.com/users/Cybertyann/followers",
"following_url": "https://api.github.com/users/Cybertyann/following{/other_user}",
"gists_url": "https://api.github.com/users/Cybertyann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cybertyann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cybertyann/subscriptions",
"organizations_url": "https://api.github.com/users/Cybertyann/orgs",
"repos_url": "https://api.github.com/users/Cybertyann/repos",
"events_url": "https://api.github.com/users/Cybertyann/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cybertyann/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| true
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-14T16:18:56
| 2025-02-14T17:16:27
| 2025-02-14T17:16:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
In https://python.langchain.com/api_reference/core/messages/langchain_core.messages.tool.ToolMessage.html, the docs introduce the concept of an 'artifact' and give a simple example:

```python
from langchain_core.messages import ToolMessage

tool_output = {
    "stdout": "From the graph we can see that the correlation between x and y is ...",
    "stderr": None,
    "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
}

ToolMessage(
    content=tool_output["stdout"],
    artifact=tool_output,
    tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL',
)
```

If the tool function called by the LLM returns an output like the `tool_output` above, can the agent automatically return a ToolMessage containing an 'artifact'? And how can the 'artifact' attribute of ToolMessage be used to output both text and an image? There are few examples of how to get multimodal output. Thank you very much.
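Below is a pure-Python sketch of the pattern the docs describe (no LangChain imports, so names like `analyze_plot` are illustrative): the tool returns a `(content, artifact)` pair, the text half goes to the model as `ToolMessage.content`, and the artifact rides alongside for the application to render. In LangChain this is typically opted into with `@tool(response_format="content_and_artifact")` — verify that parameter against the current docs:

```python
def analyze_plot() -> tuple[str, dict]:
    """Illustrative tool: returns text for the model plus an image artifact."""
    content = "From the graph we can see that the correlation between x and y is ..."
    artifact = {"type": "image", "base64_data": "/9j/4gIcSU..."}
    return content, artifact


def to_tool_message(result: tuple[str, dict], tool_call_id: str) -> dict:
    # mirrors ToolMessage(content=..., artifact=..., tool_call_id=...):
    # only `content` is sent to the LLM; `artifact` stays application-side
    content, artifact = result
    return {"content": content, "artifact": artifact, "tool_call_id": tool_call_id}


msg = to_tool_message(analyze_plot(), tool_call_id="call_123")
print(msg["artifact"]["type"])  # → image
```

The application layer (not the model) is then responsible for decoding and displaying the image artifact, which is why multimodal output examples are rare: the artifact never enters the LLM prompt.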
### Idea or request for content:
_No response_
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/events
|
https://github.com/langchain-ai/langgraph/issues/3441
| 2,853,780,944
|
I_kwDOKFU0lc6qGT3Q
| 3,441
|
Checkpointer attempts to serialize RunnableConfig -- Object of type Foobar is not JSON serializable
|
{
"login": "aguadoenzo",
"id": 11411723,
"node_id": "MDQ6VXNlcjExNDExNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11411723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aguadoenzo",
"html_url": "https://github.com/aguadoenzo",
"followers_url": "https://api.github.com/users/aguadoenzo/followers",
"following_url": "https://api.github.com/users/aguadoenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/aguadoenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aguadoenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aguadoenzo/subscriptions",
"organizations_url": "https://api.github.com/users/aguadoenzo/orgs",
"repos_url": "https://api.github.com/users/aguadoenzo/repos",
"events_url": "https://api.github.com/users/aguadoenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/aguadoenzo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-14T13:50:34
| 2025-02-16T08:41:45
| 2025-02-14T17:22:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from typing import TypedDict
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.constants import START
from langgraph.graph import StateGraph
class State(TypedDict):
foo: str
class Foobar:
"""
Non-JSON serializable class
"""
hello: str
def node_a(state: State, config: RunnableConfig):
return {"foo": "bar"}
with SqliteSaver.from_conn_string(":memory:") as memory:
workflow_builder = StateGraph(State)
workflow_builder.add_node(node_a)
workflow_builder.add_edge(START, "node_a")
graph = workflow_builder.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "1", "foobar": Foobar()}}
for s in graph.stream({"foo": ""}, config):
print(s)
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/Users/enzoaguado/Code/redacted/repro.py", line 33, in <module>
for s in graph.stream({"foo": ""}, config):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1668, in stream
with SyncPregelLoop(
^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 959, in __exit__
return self.stack.__exit__(exc_type, exc_value, traceback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 610, in __exit__
raise exc_details[1]
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 595, in __exit__
if cb(*exc_details):
^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/executor.py", line 120, in __exit__
task.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 884, in _checkpointer_put_after_previous
prev.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/executor.py", line 83, in done
task.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 886, in _checkpointer_put_after_previous
cast(BaseCheckpointSaver, self.checkpointer).put(
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/sqlite/__init__.py", line 400, in put
serialized_metadata = self.jsonplus_serde.dumps(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 182, in dumps
return json.dumps(obj, default=self._default, ensure_ascii=False).encode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 132, in _default
raise TypeError(
TypeError: Object of type Foobar is not JSON serializable
```
### Description
According to https://github.com/langchain-ai/langgraph/issues/2135#issuecomment-2426958645, the correct way to pass around objects that are not JSON serializable is to put them in `RunnableConfig` instead of the graph state.
However, even when doing that, an exception is still thrown.
What is the correct way to pass objects around at runtime without storing them in the checkpointer?
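One workaround — an assumption on my part, not an official recommendation — is to bind the non-serializable object to the node with `functools.partial` (or a closure) at graph-construction time, so it never travels through `configurable` and the checkpoint serializer never sees it:

```python
from functools import partial


class Foobar:
    """Non-JSON-serializable runtime dependency."""
    hello = "world"


def node_a(state: dict, *, dep: Foobar) -> dict:
    # `dep` arrives via the partial binding, not via RunnableConfig,
    # so it is never written to the checkpointer
    return {"foo": dep.hello}


bound_node = partial(node_a, dep=Foobar())
print(bound_node({"foo": ""}))  # → {'foo': 'world'}
```

With the reproduction above, `workflow_builder.add_node("node_a", bound_node)` would register the bound callable in place of the bare function, and the `configurable` dict can then carry only JSON-serializable values like `thread_id`.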
### System Info
langgraph = "^0.2.60"
langgraph-checkpoint-sqlite = "^2.0.4"
langgraph-checkpoint-postgres = "^2.0.13"
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/events
|
https://github.com/langchain-ai/langgraph/issues/3557
| 2,873,182,151
|
I_kwDOKFU0lc6rQUfH
| 3,557
|
langgraph-checkpoint-postgres issue with version update
|
{
"login": "saurabhlalsaxena",
"id": 17300846,
"node_id": "MDQ6VXNlcjE3MzAwODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/17300846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabhlalsaxena",
"html_url": "https://github.com/saurabhlalsaxena",
"followers_url": "https://api.github.com/users/saurabhlalsaxena/followers",
"following_url": "https://api.github.com/users/saurabhlalsaxena/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabhlalsaxena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabhlalsaxena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabhlalsaxena/subscriptions",
"organizations_url": "https://api.github.com/users/saurabhlalsaxena/orgs",
"repos_url": "https://api.github.com/users/saurabhlalsaxena/repos",
"events_url": "https://api.github.com/users/saurabhlalsaxena/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabhlalsaxena/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-23T13:07:57
| 2025-02-23T14:34:50
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# pip install -U langgraph-checkpoint-postgres
DB_URI = ""  # use an existing Postgres checkpointer DB

from psycopg import Connection
from psycopg_pool import ConnectionPool
from psycopg.rows import dict_row
from langgraph.checkpoint.postgres import PostgresSaver

connection_kwargs = {
    "autocommit": True,
    "prepare_threshold": 0,
    "row_factory": dict_row,
}
pool = ConnectionPool(
conninfo=DB_URI,
max_size=20,
kwargs=connection_kwargs
)
checkpointer = PostgresSaver(pool)
thread_id = "1"
config = {"configurable": {"thread_id": thread_id}}
checkpoint = checkpointer.get(config)
```
### Error Message and Stack Trace (if applicable)
```shell
/usr/local/lib/python3.11/dist-packages/langgraph/checkpoint/base/__init__.py in get(self, config)
234 Optional[Checkpoint]: The requested checkpoint, or None if not found.
235 """
--> 236 if value := self.get_tuple(config):
237 return value.checkpoint
238
/usr/local/lib/python3.11/dist-packages/langgraph/checkpoint/postgres/__init__.py in get_tuple(self, config)
218
219 with self._cursor() as cur:
--> 220 cur.execute(
221 self.SELECT_SQL + where,
222 args,
/usr/local/lib/python3.11/dist-packages/psycopg/cursor.py in execute(self, query, params, prepare, binary)
95 )
96 except e._NO_TRACEBACK as ex:
---> 97 raise ex.with_traceback(None)
98 return self
99
UndefinedColumn: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
```
### Description
Faced the following error with langgraph-checkpoint-postgres:
```error
UndefinedColumn: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
```
Downgrading langgraph-checkpoint-postgres to an older version (2.0.8) works fine.
**I.e. we are able to connect to the DB with langgraph-checkpoint-postgres == 2.0.8.**
This is a regression introduced by a recent upgrade of the library.
We have code running in production, so this will become a major problem for us.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.37
> langchain: 0.3.19
> langchain_community: 0.3.18
> langsmith: 0.3.8
> langchain_cli: 0.0.35
> langchain_openai: 0.3.6
> langchain_text_splitters: 0.3.6
> langgraph_sdk: 0.1.53
> langserve: 0.3.1
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> fastapi: 0.115.8
> gitpython: 3.1.44
> gritql: 0.1.5
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.35: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.37: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain-xai;: Installed. No version info available.
> langchain<1.0.0,>=0.3.19: Installed. No version info available.
> langserve[all]: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> openai<2.0.0,>=1.58.1: Installed. No version info available.
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pytest: 8.3.4
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> rich: 13.9.4
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> sse-starlette: 1.8.2
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken<1,>=0.7: Installed. No version info available.
> tomlkit: 0.13.2
> typer[all]: Installed. No version info available.
> typing-extensions>=4.7: Installed. No version info available.
> uvicorn: 0.34.0
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3557/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/events
|
https://github.com/langchain-ai/langgraph/issues/3556
| 2,872,863,878
|
I_kwDOKFU0lc6rPGyG
| 3,556
|
Tavily Search Error with async/await in LangGraph
|
{
"login": "kissycn",
"id": 89451328,
"node_id": "MDQ6VXNlcjg5NDUxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/89451328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kissycn",
"html_url": "https://github.com/kissycn",
"followers_url": "https://api.github.com/users/kissycn/followers",
"following_url": "https://api.github.com/users/kissycn/following{/other_user}",
"gists_url": "https://api.github.com/users/kissycn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kissycn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kissycn/subscriptions",
"organizations_url": "https://api.github.com/users/kissycn/orgs",
"repos_url": "https://api.github.com/users/kissycn/repos",
"events_url": "https://api.github.com/users/kissycn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kissycn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-23T02:19:28
| 2025-02-23T13:35:14
| 2025-02-23T13:35:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import os
import asyncio
from typing import Annotated
from typing_extensions import TypedDict
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage, AnyMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_community.tools import SearchAPIResults
from langchain_community.tools import TavilySearchResults
from langchain_openai import ChatOpenAI
os.environ['TAVILY_API_KEY'] = 'tvly-xxx'
class State(TypedDict):
messages: Annotated[list, add_messages]
llm = ChatOpenAI(model="gpt-4o-mini")
tool = TavilySearchResults(max_results=1, include_answer=False, include_raw_content=False)
#tool = SearchAPIResults(engine="google")
tools = [tool]
# System prompt
system_prompt = """You are a helpful assistant."""
#system_prompt = "You are Xiaoming. If a user asks who you are, reply: Xiaoming"
llm_with_tools = llm.bind_tools(tools)
async def chatbot(state: State):
response = await llm_with_tools.ainvoke(state["messages"])
return {"messages": [response]}
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
searchbot_graph = graph_builder.compile()
user_input = "what is langgraph"
# Build the initial message list, including the system prompt and user input
initial_messages = [
SystemMessage(content=system_prompt),
HumanMessage(content=user_input)
]
async def main():
# Use the message list that includes the system prompt
async for chunk in searchbot_graph.astream(
input={"messages": initial_messages},
stream_mode="messages"
):
print(chunk)
print("\n\n")
if __name__ == "__main__":
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```shell
(ToolMessage(content="ClientConnectorCertificateError(ConnectionKey(host='api.tavily.com', port=443, is_ssl=True, ssl=True, proxy=None, proxy_auth=None, proxy_headers_hash=None), SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)'))", name='tavily_search_results_json', id='e76f58ec-4299-4adb-9488-c6e03804fe59', tool_call_id='call_Mqle439YOLwiLT3i9TzwQhy8', artifact={}), {'langgraph_step': 2, 'langgraph_node': 'tools', 'langgraph_triggers': ['branch:chatbot:tools_condition:tools'], 'langgraph_path': ('__pregel_pull', 'tools'), 'langgraph_checkpoint_ns': 'tools:2743d809-8b93-3c97-bdb2-11d04607977d'})
```
### Description
When using LangGraph with the async/await pattern and the Tavily Search tool, the code fails with an SSL certificate verification error. The same code works fine with the synchronous implementation.
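The async path goes through aiohttp, which can resolve CA certificates differently from the synchronous (requests-based) path. A hedged workaround sketch follows; using certifi as the CA source is an assumption on my part, though it ships with most LangChain installs:

```python
# Workaround sketch: point Python's SSL machinery at certifi's CA bundle so
# the aiohttp-based async client finds a usable issuer chain. Run this
# before building the graph / making async Tavily calls.
import os

import certifi

os.environ["SSL_CERT_FILE"] = certifi.where()
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
```

If the error persists, it may instead be a local proxy or corporate-CA issue rather than anything LangGraph-specific.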
### System Info
Langgraph Version:0.2.69
langchain-community:0.3.17
|
{
"login": "kissycn",
"id": 89451328,
"node_id": "MDQ6VXNlcjg5NDUxMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/89451328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kissycn",
"html_url": "https://github.com/kissycn",
"followers_url": "https://api.github.com/users/kissycn/followers",
"following_url": "https://api.github.com/users/kissycn/following{/other_user}",
"gists_url": "https://api.github.com/users/kissycn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kissycn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kissycn/subscriptions",
"organizations_url": "https://api.github.com/users/kissycn/orgs",
"repos_url": "https://api.github.com/users/kissycn/repos",
"events_url": "https://api.github.com/users/kissycn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kissycn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3556/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/events
|
https://github.com/langchain-ai/langgraph/issues/3543
| 2,867,904,556
|
I_kwDOKFU0lc6q8MAs
| 3,543
|
graph restart bug
|
{
"login": "andyzhou1982",
"id": 51428368,
"node_id": "MDQ6VXNlcjUxNDI4MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/51428368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyzhou1982",
"html_url": "https://github.com/andyzhou1982",
"followers_url": "https://api.github.com/users/andyzhou1982/followers",
"following_url": "https://api.github.com/users/andyzhou1982/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzhou1982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyzhou1982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzhou1982/subscriptions",
"organizations_url": "https://api.github.com/users/andyzhou1982/orgs",
"repos_url": "https://api.github.com/users/andyzhou1982/repos",
"events_url": "https://api.github.com/users/andyzhou1982/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyzhou1982/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-21T04:06:21
| 2025-02-21T14:48:56
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
def _init_graph(self)->CompiledStateGraph:
graph_builder = StateGraph(MessagesState)
tool_node = ToolNode(tools=self.tools)
graph_builder.add_node("reply", _reply)
graph_builder.add_node("route", _route)
graph_builder.add_node("complete", _complete)
graph_builder.add_node("approve", _approve)
graph_builder.add_node("tools", tool_node)
graph_builder.add_edge(START, "reply")
graph_builder.add_edge("reply", "route")
graph_builder.add_edge("complete", "reply")
graph_builder.add_edge("tools", "reply")
graph = graph_builder.compile(checkpointer=self.checkpointer,interrupt_before=["complete","approve"])
print(f"==========PreBookConsultant Graph==========")
graph.get_graph().print_ascii()
return graph
def _route(self,state: MessagesState) -> Command[Literal["complete", "approve", "tools", "__end__"]]:
......
if valid_messages:
return Command(
update={"messages": [AIMessage(content="\n".join(valid_messages),name=self.__class__.__name__)]},
goto="complete"
)
elif inquiry_messages:
self.cache_message = last_message
return Command(
update={"messages": [AIMessage(content="\n".join(inquiry_messages),name=self.__class__.__name__)]},
goto="approve"
)
else:
return Command(
goto="tools"
)
def _complete(self,state: MessagesState) -> MessagesState:
return state
else:
return Command(
goto=END
)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I designed a graph and set interrupt_before=["complete"]. When the "_route" node finishes, the graph is interrupted and "snapshot.next" is indeed "complete". I then call "graph.invoke(input=None, config=config)" to resume the graph, but instead of entering the "complete" node it finishes directly.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26100
> Python Version: 3.11.11 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:34:19) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.11
> langchain_experimental: 0.3.4
> langchain_milvus: 0.1.8
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.53
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pymilvus: 2.5.4
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3543/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/events
|
https://github.com/langchain-ai/langgraph/issues/3538
| 2,867,286,088
|
I_kwDOKFU0lc6q51BI
| 3,538
|
ToolNode not working. TypeError: Tool search returned unexpected type: <class 'str'>
|
{
"login": "henryclw",
"id": 97804910,
"node_id": "U_kgDOBdRibg",
"avatar_url": "https://avatars.githubusercontent.com/u/97804910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/henryclw",
"html_url": "https://github.com/henryclw",
"followers_url": "https://api.github.com/users/henryclw/followers",
"following_url": "https://api.github.com/users/henryclw/following{/other_user}",
"gists_url": "https://api.github.com/users/henryclw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/henryclw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/henryclw/subscriptions",
"organizations_url": "https://api.github.com/users/henryclw/orgs",
"repos_url": "https://api.github.com/users/henryclw/repos",
"events_url": "https://api.github.com/users/henryclw/events{/privacy}",
"received_events_url": "https://api.github.com/users/henryclw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/issues/comments#list-issue-comments",
"message": "Not Found",
"status": "404"
}
| 2025-02-20T21:15:02
| 2025-02-21T03:26:02
| 2025-02-21T03:14:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from typing import Literal
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode
# Define the tools for the agent to use
@tool
def search(query: str):
"""Call to surf the web."""
# This is a placeholder, but don't tell the LLM that...
if "sf" in query.lower() or "san francisco" in query.lower():
return "It's 60 degrees and foggy."
return "It's 90 degrees and sunny."
tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0).bind_tools(tools)
# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
messages = state['messages']
last_message = messages[-1]
# If the LLM makes a tool call, then we route to the "tools" node
if last_message.tool_calls:
return "tools"
# Otherwise, we stop (reply to the user)
return END
# Define the function that calls the model
def call_model(state: MessagesState):
messages = state['messages']
response = model.invoke(messages)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
# First, we define the start node. We use `agent`.
# This means these are the edges taken after the `agent` node is called.
"agent",
# Next, we pass in the function that will determine which node is called next.
should_continue,
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", 'agent')
# Initialize memory to persist state between graph runs
checkpointer = MemorySaver()
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable.
# Note that we're (optionally) passing the memory when compiling the graph
app = workflow.compile(checkpointer=checkpointer)
# Use the agent
final_state = app.invoke(
{"messages": [{"role": "user", "content": "what is the weather in sf"}]},
config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "main.py", line 81, in <module>
final_state = app.invoke(
^^^^^^^^^^^
File "langgraph\pregel\__init__.py", line 2142, in invoke
for chunk in self.stream(
File "langgraph\pregel\__init__.py", line 1797, in stream
for _ in runner.tick(
File "langgraph\pregel\runner.py", line 230, in tick
run_with_retry(
File "langgraph\pregel\retry.py", line 40, in run_with_retry
return task.proc.invoke(task.input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\utils\runnable.py", line 546, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\utils\runnable.py", line 310, in invoke
ret = context.run(self.func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\prebuilt\tool_node.py", line 238, in _func
outputs = [
^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\FullStack\python\conda\envs\py311psa\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langchain_core\runnables\config.py", line 527, in _wrapped_fn
return contexts.pop().run(fn, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "langgraph\prebuilt\tool_node.py", line 347, in _run_one
raise TypeError(
TypeError: Tool search returned unexpected type: <class 'str'>
During task with name 'tools' and id '680e4f05-4aca-9c1d-d026-ac815f3b08c9'
```
### Description
I was running the **Low-level implementation** example from the README.md at https://github.com/langchain-ai/langgraph
I copied and pasted the example code from the README, changing only the LLM to a local one; the LLM itself is confirmed to be working.
The ToolNode should work, but instead it throws `TypeError: Tool search returned unexpected type: <class 'str'>`.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.35
> langchain: 0.3.19
> langchain_community: 0.3.17
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_aws: 0.2.12
> langchain_fireworks: 0.2.7
> langchain_google_genai: 2.0.8
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.6
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> anthropic: 0.45.2
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> boto3: 1.36.19
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> defusedxml: 0.7.1
> filetype: 1.2.0
> fireworks-ai: 0.15.12
> google-generativeai: 0.8.4
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.35: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain-xai;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy: 1.26.4
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> ollama: 0.4.7
> openai: 1.62.0
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken: 0.8.0
> typing-extensions>=4.7: Installed. No version info available.
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3538/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/events
|
https://github.com/langchain-ai/langgraph/issues/3537
| 2,867,234,202
|
I_kwDOKFU0lc6q5oWa
| 3,537
|
bad repo - On the Edge of AI Adaptability
|
{
"login": "jgwill",
"id": 23141173,
"node_id": "MDQ6VXNlcjIzMTQxMTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/23141173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jgwill",
"html_url": "https://github.com/jgwill",
"followers_url": "https://api.github.com/users/jgwill/followers",
"following_url": "https://api.github.com/users/jgwill/following{/other_user}",
"gists_url": "https://api.github.com/users/jgwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jgwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jgwill/subscriptions",
"organizations_url": "https://api.github.com/users/jgwill/orgs",
"repos_url": "https://api.github.com/users/jgwill/repos",
"events_url": "https://api.github.com/users/jgwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/jgwill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-20T20:47:22
| 2025-02-20T21:00:06
| 2025-02-20T20:49:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Oops, wrong repo.
----
Though I feel like keeping a part of that mistaken issue here. Love all your work, folks.
Title:
On the Edge of AI Adaptability: A Thought Experiment
Body:
Some of you may have seen an earlier issue here that has since been removed. While the details are no longer available, the core idea remains: How can AI move beyond rigid execution and into the realm of adaptive intelligence?
At its heart, the discussion was about AI workflows that don’t just execute but evolve, responding dynamically to uncertainty, pauses, and improvisational shifts. Instead of thinking in terms of predefined logic, we started exploring a more organic approach—one where AI processes are not just structured but also aware of their own narrative flow.
This isn’t the last time we’ll be thinking along these lines. If you’re intrigued, if this sparks something, let’s keep the conversation going. What does it mean for AI to be not just functional, but emergent?
|
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3537/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3537/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/events
|
https://github.com/langchain-ai/langgraph/issues/3535
| 2,867,083,236
|
I_kwDOKFU0lc6q5Dfk
| 3,535
|
[Urgent!] Docker build doesn't create new images
|
{
"login": "nikita-wayhq",
"id": 196185042,
"node_id": "U_kgDOC7GL0g",
"avatar_url": "https://avatars.githubusercontent.com/u/196185042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita-wayhq",
"html_url": "https://github.com/nikita-wayhq",
"followers_url": "https://api.github.com/users/nikita-wayhq/followers",
"following_url": "https://api.github.com/users/nikita-wayhq/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-wayhq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita-wayhq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-wayhq/subscriptions",
"organizations_url": "https://api.github.com/users/nikita-wayhq/orgs",
"repos_url": "https://api.github.com/users/nikita-wayhq/repos",
"events_url": "https://api.github.com/users/nikita-wayhq/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita-wayhq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-20T19:29:10
| 2025-02-20T21:41:07
| 2025-02-20T21:41:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
Again, we have problems with the Docker build - our images don't launch.
We have a go-live tomorrow morning and cannot deploy our main component: the agent.
This is very frustrating, and it has already happened to us for the second time within the last week:
https://github.com/langchain-ai/langgraph/issues/3382
```
### Error Message and Stack Trace (if applicable)
```shell
Defaulted container "support-agent" out of: support-agent, redis, cloud-sql-proxy (init)
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve
await self._serve(sockets)
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve
config.load()
File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/api/langgraph_api/server.py", line 31, in <module>
ImportError: cannot import name 'configure_loopback_transports' from 'langgraph_sdk.client' (/usr/local/lib/python3.11/site-packages/langgraph_sdk/client.py)
```
### Description
We already removed langgraph-api and langgraph-checkpoint, as support suggested, during the previous two times we had this issue.
### System Info
```
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:57 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T6041
> Python Version: 3.11.5 (main, Feb 5 2025, 16:10:32) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.2.11
> langchain_anthropic: 0.3.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> anthropic: 0.45.2
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.27
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
```
|
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3535/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/events
|
https://github.com/langchain-ai/langgraph/issues/3532
| 2,866,915,274
|
I_kwDOKFU0lc6q4afK
| 3,532
|
Adding conditional edge from the entry point node does not seem to pass the Overall state to the validation function
|
{
"login": "sand-heap",
"id": 156697292,
"node_id": "U_kgDOCVcCzA",
"avatar_url": "https://avatars.githubusercontent.com/u/156697292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sand-heap",
"html_url": "https://github.com/sand-heap",
"followers_url": "https://api.github.com/users/sand-heap/followers",
"following_url": "https://api.github.com/users/sand-heap/following{/other_user}",
"gists_url": "https://api.github.com/users/sand-heap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sand-heap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sand-heap/subscriptions",
"organizations_url": "https://api.github.com/users/sand-heap/orgs",
"repos_url": "https://api.github.com/users/sand-heap/repos",
"events_url": "https://api.github.com/users/sand-heap/events{/privacy}",
"received_events_url": "https://api.github.com/users/sand-heap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292607,
"node_id": "LA_kwDOKFU0lc8AAAABW2Sv_w",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-20T18:06:22
| 2025-02-21T14:49:06
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3387
<div type='discussions-op-text'>
<sup>Originally posted by **sand-heap** February 11, 2025</sup>
Pretty much what the title says. I can explain further with code:
```
...
@dataclass
class InputState:
    some_field: int

class OverallState(MessagesState):
    some_field: int

def validate_transition(state: OverallState) -> str:
    if state.get("some_field") == 1:
        return END
    return "other_node"

def some_node(input: InputState) -> OverallState:
    return {"some_field": input.some_field + 1}

workflow = StateGraph(OverallState, input=InputState)
workflow.set_entry_point("node")
workflow.add_node("node", some_node)
workflow.add_node("other_node", other_node)
workflow.add_conditional_edges("node", validate_transition, ...)
...
# add end
...
```
I would expect `validate_transition` to receive the `OverallState`, but for some reason it gets the `InputState`?!
However, if I add a `mock_node` between `node` and `other_node`, make the conditional edge start from `mock_node` instead, and keep a sequential edge from `node` -> `mock_node`, the behaviour is as expected.
Can you please help? ++ @vbarda @hinthornw </div>
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3532/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/events
|
https://github.com/langchain-ai/langgraph/issues/3515
| 2,864,323,755
|
I_kwDOKFU0lc6quhyr
| 3,515
|
Lang Graph Nodes Experience High Latency Unrelated to LLM Execution
|
{
"login": "joshuamoreno1",
"id": 1558591,
"node_id": "MDQ6VXNlcjE1NTg1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1558591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshuamoreno1",
"html_url": "https://github.com/joshuamoreno1",
"followers_url": "https://api.github.com/users/joshuamoreno1/followers",
"following_url": "https://api.github.com/users/joshuamoreno1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshuamoreno1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshuamoreno1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshuamoreno1/subscriptions",
"organizations_url": "https://api.github.com/users/joshuamoreno1/orgs",
"repos_url": "https://api.github.com/users/joshuamoreno1/repos",
"events_url": "https://api.github.com/users/joshuamoreno1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshuamoreno1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-19T20:20:59
| 2025-02-19T21:33:18
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
We use langchain default observability.
```
### Error Message and Stack Trace (if applicable)
```shell
We don't have an error, the traces are our only evidence.
```
### Description
Hi there,
We're experiencing significant latency in our agent. When analyzing the trace in LangSmith, we notice that LLM operations execute relatively quickly, but the node exhibits unexpectedly high latency with no clear cause. We suspect that the issue might be related to the LangSmith integration, but we have no way to confirm it.
As shown in the attached image, there is a delay of over one second where seemingly nothing is happening.
Any insights or suggestions would be greatly appreciated.
<img width="390" alt="Image" src="https://github.com/user-attachments/assets/e6446d3e-f882-4aa5-b304-7162a96a04eb" />
### System Info
System Information
------------------
> OS: Ubuntu
> OS Version: Ubuntu 22.04.5 LTS jammy
> Python Version: 3.12.8 (main, Dec 3 2024, 18:42:41) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_anthropic: 0.3.1
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.5
> langserve: 0.3.1
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sse-starlette: 2.2.1
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3515/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/events
|
https://github.com/langchain-ai/langgraph/issues/3483
| 2,861,532,958
|
I_kwDOKFU0lc6qj4ce
| 3,483
|
Hidden State not Appearing in Conditional Edge
|
{
"login": "GiulioCMSanto",
"id": 32403782,
"node_id": "MDQ6VXNlcjMyNDAzNzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/32403782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GiulioCMSanto",
"html_url": "https://github.com/GiulioCMSanto",
"followers_url": "https://api.github.com/users/GiulioCMSanto/followers",
"following_url": "https://api.github.com/users/GiulioCMSanto/following{/other_user}",
"gists_url": "https://api.github.com/users/GiulioCMSanto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GiulioCMSanto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GiulioCMSanto/subscriptions",
"organizations_url": "https://api.github.com/users/GiulioCMSanto/orgs",
"repos_url": "https://api.github.com/users/GiulioCMSanto/repos",
"events_url": "https://api.github.com/users/GiulioCMSanto/events{/privacy}",
"received_events_url": "https://api.github.com/users/GiulioCMSanto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-18T20:06:57
| 2025-02-18T20:22:35
| 2025-02-18T20:22:32
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3348
<div type='discussions-op-text'>
<sup>Originally posted by **GiulioCMSanto** February 7, 2025</sup>
Hello!! Before opening an Issue, I would like to start a discussion on something that looks like a bug! 😄
**The Issue**: when using a `conditional_edge`, the hidden state is ignored.
### Reproducing the Error
**Imports**
```
import uuid
from pydantic import BaseModel
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
```
**States**
```
class InputType(BaseModel):
input: str
class OutputType(BaseModel):
output: str
```
**Nodes**
```
def node_1(state: InputType) -> MessagesState:
print(f"Node 1 input: {state}")
return {"messages": ["out_node_1"]}
def node_2(state: MessagesState) -> OutputType:
print(f"Node 2 input: {state}")
return OutputType(output="out_node_2")
def router(state: MessagesState) -> str:
print(f"Router input: {state}")
return "node_2"
```
**Notice:** **MessagesState** is a **Hidden State** here.
**The Graph**
```
builder = StateGraph(input=InputType, output=OutputType)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)
builder.add_edge(START, "node_1")
builder.add_conditional_edges(
"node_1",
router,
["node_2"]
)
builder.add_edge("node_2", END)
graph = builder.compile(checkpointer=MemorySaver())
config = {
"configurable": {
"thread_id": uuid.uuid4(),
}
}
graph.invoke(
InputType(input="Hi"),
config,
stream_mode="values"
)
```
#### Outcomes
```
Node 1 input: input='Hi'
Router input: input='Hi'
Node 2 input: {'messages': [HumanMessage(content='out_node_1', additional_kwargs={}, response_metadata={}, id='7c1b5467-d55a-4af7-9888-0bff210ca726')]}
```
### Comments
Notice that the router prints `Router input: input='Hi'` and not `{'messages': [HumanMessage(content='out_node_1', additional_kwargs={}, response_metadata={}, id='7c1b5467-d55a-4af7-9888-0bff210ca726')]}`. Node 2 actually has access to the hidden state, but **not the conditional edge**.
### My Setup
```
Python Version: 3.9
langgraph==0.2.61
pydantic==2.10.4
```
I have also tried with
```
Python Version: 3.12
langgraph==0.2.70
pydantic==2.10.6
```
### Similar Discussion
I found a somewhat similar discussion here: https://github.com/langchain-ai/langgraph/discussions/2197</div>
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3483/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/events
|
https://github.com/langchain-ai/langgraph/issues/3464
| 2,856,294,820
|
I_kwDOKFU0lc6qP5mk
| 3,464
|
[Urgent] LangGraph docker image fails to start
|
{
"login": "nikita-wayhq",
"id": 196185042,
"node_id": "U_kgDOC7GL0g",
"avatar_url": "https://avatars.githubusercontent.com/u/196185042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikita-wayhq",
"html_url": "https://github.com/nikita-wayhq",
"followers_url": "https://api.github.com/users/nikita-wayhq/followers",
"following_url": "https://api.github.com/users/nikita-wayhq/following{/other_user}",
"gists_url": "https://api.github.com/users/nikita-wayhq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikita-wayhq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikita-wayhq/subscriptions",
"organizations_url": "https://api.github.com/users/nikita-wayhq/orgs",
"repos_url": "https://api.github.com/users/nikita-wayhq/repos",
"events_url": "https://api.github.com/users/nikita-wayhq/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikita-wayhq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-16T19:44:34
| 2025-02-18T09:48:26
| 2025-02-16T20:42:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
Pretty much any code. The error is on your side.
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1082, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
run(
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
server.run()
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve
await self._serve(sockets)
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve
config.load()
File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/api/langgraph_api/server.py", line 16, in <module>
File "/api/langgraph_api/api/__init__.py", line 8, in <module>
File "/api/langgraph_api/api/meta.py", line 6, in <module>
File "/api/langgraph_api/queue.py", line 20, in <module>
'Exception raised by Queue.get(block=0)/get_nowait().'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/api/langgraph_api/stream.py", line 35, in <module>
File "/usr/local/lib/python3.11/site-packages/langgraph_storage/store.py", line 8, in <module>
from langgraph.store.postgres.aio import AsyncPostgresStore, PostgresIndexConfig
File "/usr/local/lib/python3.11/site-packages/langgraph/store/postgres/__init__.py", line 1, in <module>
from langgraph.store.postgres.aio import AsyncPostgresStore
File "/usr/local/lib/python3.11/site-packages/langgraph/store/postgres/aio.py", line 12, in <module>
from langgraph.checkpoint.postgres import _ainternal
File "/usr/local/lib/python3.11/site-packages/langgraph/checkpoint/postgres/__init__.py", line 12, in <module>
from langgraph.checkpoint.base import (
ImportError: cannot import name 'get_checkpoint_metadata' from 'langgraph.checkpoint.base' (/usr/local/lib/python3.11/site-packages/langgraph/checkpoint/base/__init__.py)
```
### Description
After re-building the image today, I get this error.
We need to go live tomorrow morning, and this blocks everything.
How can we build Docker images without relying on a black-box dependency that has broken our production **several** times over the last few days?
Is there a way to build without this dependency?
### System Info
Note: the command output is done on a mac machine, while the error is inside docker.
```
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:57 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T6041
> Python Version: 3.11.5 (main, Feb 5 2025, 16:10:32) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.2.11
> langchain_anthropic: 0.3.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.12
> anthropic: 0.45.2
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.27
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
```
|
{
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3464/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/events
|
https://github.com/langchain-ai/langgraph/issues/3460
| 2,855,783,185
|
I_kwDOKFU0lc6qN8sR
| 3,460
|
Reducers for other state members apart from messages
|
{
"login": "binarybeastt",
"id": 53526506,
"node_id": "MDQ6VXNlcjUzNTI2NTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/53526506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binarybeastt",
"html_url": "https://github.com/binarybeastt",
"followers_url": "https://api.github.com/users/binarybeastt/followers",
"following_url": "https://api.github.com/users/binarybeastt/following{/other_user}",
"gists_url": "https://api.github.com/users/binarybeastt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binarybeastt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binarybeastt/subscriptions",
"organizations_url": "https://api.github.com/users/binarybeastt/orgs",
"repos_url": "https://api.github.com/users/binarybeastt/repos",
"events_url": "https://api.github.com/users/binarybeastt/events{/privacy}",
"received_events_url": "https://api.github.com/users/binarybeastt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-16T00:13:29
| 2025-02-17T18:54:54
| 2025-02-17T18:54:53
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3459
<div type='discussions-op-text'>
<sup>Originally posted by **binarybeastt** February 16, 2025</sup>
`from langgraph.graph.message import add_messages` helps to append messages to the message list; how can we replicate this behavior for other types that are not messages, such as documents, files, or images?</div>
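`add_messages` is essentially a reducer attached to the `messages` key via `typing.Annotated`, and any binary `(current, update) -> merged` function can play the same role for other state members. A minimal sketch (the `merge_documents` reducer and the key names are illustrative, not LangGraph built-ins):

```python
from operator import add
from typing import Annotated, TypedDict


def merge_documents(existing: list[str], new: list[str]) -> list[str]:
    """Append only documents not already present (a deduplicating reducer)."""
    return existing + [d for d in new if d not in existing]


class State(TypedDict):
    # Plain append: same idea as add_messages, but for arbitrary items.
    files: Annotated[list[str], add]
    # Custom merge logic for a different state member.
    documents: Annotated[list[str], merge_documents]


# LangGraph calls the reducer with (current_value, node_update); the same
# function can be exercised directly:
merged = merge_documents(["a.pdf"], ["a.pdf", "b.png"])
```

Nodes then return partial updates such as `{"documents": ["b.png"]}` and the annotated reducer, rather than plain replacement, decides how the update is folded into the state.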
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3460/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/events
|
https://github.com/langchain-ai/langgraph/issues/3457
| 2,855,748,193
|
I_kwDOKFU0lc6qN0Jh
| 3,457
|
exec format error during langgraph docker deployment
|
{
"login": "mingxuan-he",
"id": 60416206,
"node_id": "MDQ6VXNlcjYwNDE2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/60416206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingxuan-he",
"html_url": "https://github.com/mingxuan-he",
"followers_url": "https://api.github.com/users/mingxuan-he/followers",
"following_url": "https://api.github.com/users/mingxuan-he/following{/other_user}",
"gists_url": "https://api.github.com/users/mingxuan-he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingxuan-he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingxuan-he/subscriptions",
"organizations_url": "https://api.github.com/users/mingxuan-he/orgs",
"repos_url": "https://api.github.com/users/mingxuan-he/repos",
"events_url": "https://api.github.com/users/mingxuan-he/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingxuan-he/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-15T22:28:15
| 2025-02-18T05:08:01
| 2025-02-18T05:08:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
docker pull $DOCKERHUB_USERNAME/my-langgraph-server:latest
docker run -it --rm $DOCKERHUB_USERNAME/my-langgraph-server:latest
```
### Error Message and Stack Trace (if applicable)
```shell
exec /storage/entrypoint.sh: exec format error
exec /storage/entrypoint.sh: exec format error
exec /storage/entrypoint.sh: exec format error
```
### Description
I'm using langgraph-cli in GitHub Actions to build the Docker image automatically with `langgraph build`, following the instructions (and using the docker-compose file) [from the docs here](https://langchain-ai.github.io/langgraph/how-tos/deploy-self-hosted/#using-docker-compose). Local testing with Docker Desktop works. However, the built container keeps emitting `exec format error` when deployed on my Ubuntu server.
Debugging with Sonnet suggests it's an issue with incompatible architectures. I'm wondering if there is a way to pass build arguments to `langgraph build`. Any help appreciated!
.github/workflows/main.yml:
```
name: Build Docker Image
on:
push:
branches: [main]
jobs:
build-push:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Build Docker image with Langgraph
run: |
pip install -U langgraph-cli
cd yuichan
langgraph build -t my-langgraph-server
- name: Login to Docker Registry
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push Docker image
run: |
docker tag my-langgraph-server ${{ secrets.DOCKERHUB_USERNAME }}/my-langgraph-server:latest
docker push ${{ secrets.DOCKERHUB_USERNAME }}/yui-langgraph-server:latest
```
langgraph.json:
```
{
"dependencies": ["."],
"graphs":{
"yui": "./yuichan.py:graph"
},
"env": ".env",
"python_version": "3.12"
}
```
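Since `ubuntu-latest` runners are amd64 while the Oracle VPS is arm64, the `exec format error` is consistent with an architecture mismatch. A hedged sketch of workflow steps that cross-build for arm64 (step names and tags are illustrative; whether your langgraph-cli version accepts `--platform` should be verified, and `langgraph dockerfile` can otherwise emit a Dockerfile for a plain Buildx build):

```yaml
- name: Set up QEMU (emulate arm64 on the amd64 runner)
  uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
- name: Build arm64 image
  run: |
    pip install -U langgraph-cli
    cd yuichan
    # If your langgraph-cli supports it:
    langgraph build -t my-langgraph-server --platform linux/arm64
    # Otherwise, generate a Dockerfile and build it directly:
    # langgraph dockerfile Dockerfile
    # docker buildx build --platform linux/arm64 -t my-langgraph-server --load .
```

Either way, the resulting image's architecture can be checked with `docker image inspect --format '{{.Architecture}}'` before pushing.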
### System Info
- Ubuntu 24.04
- arm64 VPS on Oracle Cloud
- Docker version 27.4.1
requirements.txt:
langchain-core
langchain[openai,anthropic,groq,google-genai]
langchain-community
langchain-experimental
langchain-google-community[gmail]
langchainhub
langsmith
langgraph
langgraph-sdk
langgraph-checkpoint-postgres
pydantic
|
{
"login": "mingxuan-he",
"id": 60416206,
"node_id": "MDQ6VXNlcjYwNDE2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/60416206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingxuan-he",
"html_url": "https://github.com/mingxuan-he",
"followers_url": "https://api.github.com/users/mingxuan-he/followers",
"following_url": "https://api.github.com/users/mingxuan-he/following{/other_user}",
"gists_url": "https://api.github.com/users/mingxuan-he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingxuan-he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingxuan-he/subscriptions",
"organizations_url": "https://api.github.com/users/mingxuan-he/orgs",
"repos_url": "https://api.github.com/users/mingxuan-he/repos",
"events_url": "https://api.github.com/users/mingxuan-he/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingxuan-he/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3457/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/events
|
https://github.com/langchain-ai/langgraph/issues/3452
| 2,855,329,344
|
I_kwDOKFU0lc6qMN5A
| 3,452
|
WeChat discussion group (微信交流群)
|
{
"login": "RobinYang11",
"id": 20513480,
"node_id": "MDQ6VXNlcjIwNTEzNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/20513480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobinYang11",
"html_url": "https://github.com/RobinYang11",
"followers_url": "https://api.github.com/users/RobinYang11/followers",
"following_url": "https://api.github.com/users/RobinYang11/following{/other_user}",
"gists_url": "https://api.github.com/users/RobinYang11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobinYang11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobinYang11/subscriptions",
"organizations_url": "https://api.github.com/users/RobinYang11/orgs",
"repos_url": "https://api.github.com/users/RobinYang11/repos",
"events_url": "https://api.github.com/users/RobinYang11/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobinYang11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-15T08:23:52
| 2025-02-15T21:08:43
| 2025-02-15T21:08:42
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
<img width="718" alt="Image" src="https://github.com/user-attachments/assets/9a045798-ea10-4c20-952c-c42b5eae070d" />
### Idea or request for content:
_No response_
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3452/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/events
|
https://github.com/langchain-ai/langgraph/issues/3442
| 2,854,167,894
|
I_kwDOKFU0lc6qHyVW
| 3,442
|
How to use 'artifact' attribute in ToolMessage to output both text and image?
|
{
"login": "Cybertyann",
"id": 17784405,
"node_id": "MDQ6VXNlcjE3Nzg0NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/17784405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cybertyann",
"html_url": "https://github.com/Cybertyann",
"followers_url": "https://api.github.com/users/Cybertyann/followers",
"following_url": "https://api.github.com/users/Cybertyann/following{/other_user}",
"gists_url": "https://api.github.com/users/Cybertyann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cybertyann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cybertyann/subscriptions",
"organizations_url": "https://api.github.com/users/Cybertyann/orgs",
"repos_url": "https://api.github.com/users/Cybertyann/repos",
"events_url": "https://api.github.com/users/Cybertyann/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cybertyann/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| true
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-14T16:18:56
| 2025-02-14T17:16:27
| 2025-02-14T17:16:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
In https://python.langchain.com/api_reference/core/messages/langchain_core.messages.tool.ToolMessage.html, the docs introduce the concept of 'artifact' and give a simple example:

```python
from langchain_core.messages import ToolMessage

tool_output = {
    "stdout": "From the graph we can see that the correlation between x and y is ...",
    "stderr": None,
    "artifacts": {"type": "image", "base64_data": "/9j/4gIcSU..."},
}

ToolMessage(
    content=tool_output["stdout"],
    artifact=tool_output,
    tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL',
)
```

If the tool function called by the LLM returns an output like the `tool_output` dict above, can the agent automatically return a ToolMessage containing an 'artifact'? Or how can the 'artifact' attribute in ToolMessage be used to output both text and image? There are few examples of how to get multimodal output. Thank you very much.
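One pattern that may help, hedged because it assumes the `response_format="content_and_artifact"` option of `langchain_core.tools.tool`: the tool returns a `(content, artifact)` tuple, and the framework places the first element in `ToolMessage.content` and the second in `ToolMessage.artifact`. The sketch below shows only the tuple convention in plain Python; the function name and payload are illustrative:

```python
# Plain-Python sketch of the (content, artifact) tuple convention. With
# LangChain installed, the tool would be declared roughly as:
#
#     from langchain_core.tools import tool
#
#     @tool(response_format="content_and_artifact")
#     def plot_correlation() -> tuple[str, dict]: ...
#
# and the agent would then build the ToolMessage (content + artifact) for you.
def plot_correlation() -> tuple[str, dict]:
    """Return model-readable text plus a structured artifact (e.g. an image)."""
    content = "From the graph we can see that the correlation between x and y is ..."
    artifact = {"type": "image", "base64_data": "/9j/4gIcSU..."}
    return content, artifact


content, artifact = plot_correlation()
# content is sent to the model; artifact stays available for a UI or file sink.
```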
### Idea or request for content:
_No response_
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3442/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/events
|
https://github.com/langchain-ai/langgraph/issues/3441
| 2,853,780,944
|
I_kwDOKFU0lc6qGT3Q
| 3,441
|
Checkpointer attempts to serialize RunnableConfig -- Object of type Foobar is not JSON serializable
|
{
"login": "aguadoenzo",
"id": 11411723,
"node_id": "MDQ6VXNlcjExNDExNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11411723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aguadoenzo",
"html_url": "https://github.com/aguadoenzo",
"followers_url": "https://api.github.com/users/aguadoenzo/followers",
"following_url": "https://api.github.com/users/aguadoenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/aguadoenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aguadoenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aguadoenzo/subscriptions",
"organizations_url": "https://api.github.com/users/aguadoenzo/orgs",
"repos_url": "https://api.github.com/users/aguadoenzo/repos",
"events_url": "https://api.github.com/users/aguadoenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/aguadoenzo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-14T13:50:34
| 2025-02-16T08:41:45
| 2025-02-14T17:22:28
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from typing import TypedDict
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.constants import START
from langgraph.graph import StateGraph
class State(TypedDict):
foo: str
class Foobar:
"""
Non-JSON serializable class
"""
hello: str
def node_a(state: State, config: RunnableConfig):
return {"foo": "bar"}
with SqliteSaver.from_conn_string(":memory:") as memory:
workflow_builder = StateGraph(State)
workflow_builder.add_node(node_a)
workflow_builder.add_edge(START, "node_a")
graph = workflow_builder.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "1", "foobar": Foobar()}}
for s in graph.stream({"foo": ""}, config):
print(s)
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/Users/enzoaguado/Code/redacted/repro.py", line 33, in <module>
for s in graph.stream({"foo": ""}, config):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1668, in stream
with SyncPregelLoop(
^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 959, in __exit__
return self.stack.__exit__(exc_type, exc_value, traceback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 610, in __exit__
raise exc_details[1]
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 595, in __exit__
if cb(*exc_details):
^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/executor.py", line 120, in __exit__
task.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 884, in _checkpointer_put_after_previous
prev.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/executor.py", line 83, in done
task.result()
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 886, in _checkpointer_put_after_previous
cast(BaseCheckpointSaver, self.checkpointer).put(
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/sqlite/__init__.py", line 400, in put
serialized_metadata = self.jsonplus_serde.dumps(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 182, in dumps
return json.dumps(obj, default=self._default, ensure_ascii=False).encode(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.9/Frameworks/Python.framework/Versions/3.12/lib/python3.12/json/encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "/Users/enzoaguado/Code/redacted/.venv/lib/python3.12/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 132, in _default
raise TypeError(
TypeError: Object of type Foobar is not JSON serializable
```
### Description
According to https://github.com/langchain-ai/langgraph/issues/2135#issuecomment-2426958645, the correct way to pass around objects that are not JSON serializable is to put them in `RunnableConfig` instead of the graph state.
However, even when doing that, an exception is still thrown.
What is the correct way to pass objects around at runtime without storing them in the checkpointer?
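A hedged workaround sketch (not an official LangGraph API): keep the non-serializable object in a process-local registry and pass only a JSON-serializable lookup key through `config["configurable"]`; the registry and the `foobar_key` name here are illustrative.

```python
import json

# Runtime-only objects live here, outside of anything the checkpointer sees.
_RUNTIME_OBJECTS: dict[str, object] = {}


def register(key: str, obj: object) -> str:
    """Store a runtime-only object and return the key to put in the config."""
    _RUNTIME_OBJECTS[key] = obj
    return key


def resolve(config: dict) -> object:
    """Inside a node: look the object back up from the key in the config."""
    return _RUNTIME_OBJECTS[config["configurable"]["foobar_key"]]


class Foobar:
    hello = "world"


register("fb-1", Foobar())
config = {"configurable": {"thread_id": "1", "foobar_key": "fb-1"}}
json.dumps(config)  # succeeds: nothing non-serializable reaches the checkpointer
obj = resolve(config)
```

The trade-off is that the object only exists in the current process, so a resumed thread must re-register it before the graph runs again.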
### System Info
langgraph = "^0.2.60"
langgraph-checkpoint-sqlite = "^2.0.4"
langgraph-checkpoint-postgres = "^2.0.13"
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3441/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3421
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3421/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3421/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3421/events
|
https://github.com/langchain-ai/langgraph/issues/3421
| 2,851,641,369
|
I_kwDOKFU0lc6p-JgZ
| 3,421
|
LangGraph custom auth not working for self-hosted deployment without version update
|
{
"login": "BobMerkus",
"id": 23738320,
"node_id": "MDQ6VXNlcjIzNzM4MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/23738320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobMerkus",
"html_url": "https://github.com/BobMerkus",
"followers_url": "https://api.github.com/users/BobMerkus/followers",
"following_url": "https://api.github.com/users/BobMerkus/following{/other_user}",
"gists_url": "https://api.github.com/users/BobMerkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobMerkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobMerkus/subscriptions",
"organizations_url": "https://api.github.com/users/BobMerkus/orgs",
"repos_url": "https://api.github.com/users/BobMerkus/repos",
"events_url": "https://api.github.com/users/BobMerkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobMerkus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| true
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-13T17:11:19
| 2025-02-13T19:44:21
| 2025-02-13T19:44:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [ ] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
FROM langchain/langgraph-api:3.12
# Note that this example has been redacted to protect private information
...
# Set security scheme
ENV LANGGRAPH_AUTH='{"path": "/deps/<my_dep>/<some_folder>/security/langgraph_init.py:auth", "disable_studio_auth": true, "openapi": {"securitySchemes": {"bearerAuth": {"type": "http", "scheme": "bearer", "bearerFormat": "JWT"}}, "security": [{"bearerAuth": []}]}}'
# Graphs
ENV LANGSERVE_GRAPHS='{"agent": "/deps/<my_dep>/<some_folder>/graph.py:graph"}'
```
### Error Message and Stack Trace (if applicable)
```shell
2025-02-13T12:29:09.619757Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=cfcc057 api_variant=local
2025-02-13T12:29:09.913043Z [info ] HTTP Request: GET https://eu.api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=cfcc057 api_variant=local
2025-02-13T12:29:09.916649Z [error ] Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/api/langgraph_api/lifespan.py", line 18, in lifespan
File "/api/langgraph_license/validation.py", line 98, in get_license_status
ValueError: Custom authentication is currently available in the Managed Cloud version of LangGraph Platform or with an self-hosting enterprise license. Please visit https://langchain-ai.github.io/langgraph/concepts/deployment_options/ to learn more about deployment options, or contact sales@langchain.com for information on upgrading from the self-hosted Lite plan to an enterprise license.
[uvicorn.error] api_revision=cfcc057 api_variant=local
2025-02-13T12:29:09.917001Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=cfcc057 api_variant=local
```
### Description
Hi,
I am currently using a self-hosted deployment of the LangGraph API, but somehow our existing deployment (with custom auth) has been broken. I'm getting the following error:
```
ValueError: Custom authentication is currently available in the Managed Cloud version of LangGraph Platform or with an self-hosting enterprise license. Please visit https://langchain-ai.github.io/langgraph/concepts/deployment_options/ to learn more about deployment options, or contact sales@langchain.com for information on upgrading from the self-hosted Lite plan to an enterprise license.
```
How is this possible without any version updates? The docs note that this is perfectly fine for [Self-Hosted Auth](https://langchain-ai.github.io/langgraph/concepts/auth/#self-hosted):
- No default authentication
- Complete flexibility to implement your security model
- You control all aspects of authentication and authorization
Authentication limits are not discussed on the [self-hosted lite deployment](https://langchain-ai.github.io/langgraph/concepts/deployment_options/?h=self+hosted+lite#self-hosted-lite) page either; it only notes: '[Cron jobs](https://langchain-ai.github.io/langgraph/cloud/how-tos/cron_jobs/) are not available for Self-Hosted Lite deployments'. The [deployment options](https://langchain-ai.github.io/langgraph/concepts/deployment_options/?h=self+hosted+lite#overview) overview clearly notes **Self-Hosted Lite: Available for all plans.**
The troubleshooting page for [INVALID_LICENSE self-hosted lite](https://langchain-ai.github.io/langgraph/troubleshooting/errors/INVALID_LICENSE/?h=license#for-self-hosted-lite-limited-features) however, notes: 'The API key must be associated with an account on a Plus plan or greater'.
I'm assuming this is a bug; otherwise I would expect it to be described in the release notes and the public docs. Even then, it would not follow semantic versioning: a change like this should have come in a major release, but it happened WITHOUT any version update (not even a minor one). The only thing I did was rebuild the Docker image with identical dependencies inside our CI/CD pipeline, i.e. the same Python version (3.12), package dependencies and base image (langchain/langgraph-api:3.12).
Features can be moved to different licensing & terms over time, but this is exactly why semantic versioning and stable releases exist. The terms of service note: "Any Free Access Subscriptions are provided by LangChain “AS-IS” and without any representations, warranties, performance, or data security guarantees or support obligations.". I understand this clause, but I still expect released versions to keep working over time, and at the very least that this behaviour is accurately described in the public documentation (under production auth in this case).
Edit: Upon further digging we noticed that the underlying docker image is the issue. We created our implementation based on [langchain/langgraph-api:3.12-233a561](https://hub.docker.com/layers/langchain/langgraph-api/3.12-233a561/images/sha256-d542bce97b0b4c020c8eef0b0c8fc17c8a5bef70ff6504481c356e9b19b11725) and it still works fine, the breaking behaviour seems to be introduced in [langchain/langgraph-api:3.12-7e692e2](https://hub.docker.com/layers/langchain/langgraph-api/3.12-7e692e2/images/sha256-baead473981b283c008b4f9098a9d107c306fabdc77531448e1ee970cd402a6d) as [langchain/langgraph-api:3.12-cfb4da0](https://hub.docker.com/layers/langchain/langgraph-api/3.12-cfb4da0/images/sha256-b923e87a671277848ad17c3da98664ea20edd33fe0d85b1820f971c0450d17f0) works fine.
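One way to guard against exactly this kind of silent base-image drift is to pin the `FROM` line to an immutable digest and enforce that in CI. A minimal, stdlib-only sketch (the helper name is hypothetical; the digest in the usage note is a placeholder, not a real one):

```python
import re


def pinned_base_image(dockerfile_text: str) -> str:
    """Return the FROM image if it is pinned by digest; raise otherwise.

    Note: this simple regex does not handle FROM flags like --platform;
    it is a CI guard sketch, not a full Dockerfile parser.
    """
    match = re.search(r"^FROM\s+(\S+)", dockerfile_text, re.MULTILINE)
    if match is None:
        raise ValueError("no FROM line found")
    image = match.group(1)
    if "@sha256:" not in image:
        raise ValueError(
            f"base image {image!r} uses a mutable tag; "
            "pin an immutable digest so rebuilds are reproducible"
        )
    return image


# A mutable tag (the situation described above) is rejected:
# pinned_base_image("FROM langchain/langgraph-api:3.12")  # raises ValueError
# A digest-pinned image like
# "FROM langchain/langgraph-api:3.12@sha256:<digest>" passes.
```

With the digest pinned, a rebuild with "identical dependencies" really is identical; upgrading the base image then becomes an explicit, reviewable diff.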
### System Info
Not relevant, this issue was verified on multiple Linux/MacOS machines
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3421/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3418
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3418/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3418/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3418/events
|
https://github.com/langchain-ai/langgraph/issues/3418
| 2,851,034,820
|
I_kwDOKFU0lc6p71bE
| 3,418
|
Subgraph state update contains the entire CompiledStateGraph in it which is not serializable for the astream events
|
{
"login": "DebienRuben",
"id": 94834599,
"node_id": "U_kgDOBacPpw",
"avatar_url": "https://avatars.githubusercontent.com/u/94834599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DebienRuben",
"html_url": "https://github.com/DebienRuben",
"followers_url": "https://api.github.com/users/DebienRuben/followers",
"following_url": "https://api.github.com/users/DebienRuben/following{/other_user}",
"gists_url": "https://api.github.com/users/DebienRuben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DebienRuben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DebienRuben/subscriptions",
"organizations_url": "https://api.github.com/users/DebienRuben/orgs",
"repos_url": "https://api.github.com/users/DebienRuben/repos",
"events_url": "https://api.github.com/users/DebienRuben/events{/privacy}",
"received_events_url": "https://api.github.com/users/DebienRuben/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
null
| 2025-02-13T13:23:28
| 2025-02-18T21:31:27
| 2025-02-18T21:31:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
# Imports added for context; BaseGraphWrapper, LookupState, LookupInput,
# FilterInput and the node functions are user-defined and omitted here.
from langgraph.graph import StateGraph, START, END
from langgraph.graph.state import CompiledStateGraph

class LookupGraph(BaseGraphWrapper):
"""Graph wrapper for the lookup tool, used to also add a function to get the prompt description"""
name: str = "Lookup Tool"
description: str = "Tool used to retrieve information needed to answer any input question"
graph: StateGraph = None
input_schema = LookupInput
def get_graph(self):
"""Compile the graph and return it"""
graph = StateGraph(state_schema=LookupState)
graph.add_node("retriever", retriever_node)
graph.add_node("filter_documents", FilterGraph.graph)
graph.add_node("replace_urls", replace_urls_node)
graph.add_node("generate_answer", CollapseGraph.graph)
graph.add_node("process_ouput", process_ouput)
graph.add_edge(START, "retriever")
graph.add_edge("retriever", "filter_documents")
graph.add_edge("filter_documents", "replace_urls")
graph.add_edge("replace_urls", "generate_answer")
graph.add_edge("generate_answer", "process_ouput")
graph.add_edge("process_ouput", END)
graph = graph.compile()
self.graph = graph
return graph
class FilterGraph(BaseGraphWrapper):
"""Graph wrapper for the filter tool, used to also add a function to get the prompt description"""
name: str = "Filter Documents"
description: str = "Tool designed to filter documents based on a question."
graph: CompiledStateGraph = graph.compile()
input_schema = FilterInput
```
### Error Message and Stack Trace (if applicable)
```shell
File "/home/runner/work/IRIS-RAG-API/IRIS-RAG-API/app/app.py", line 550, in send_message
async for event, chunk in agent_graph.astream(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 2007, in astream
async for _ in runner.atick(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 527, in atick
_panic_or_proceed(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 619, in _panic_or_proceed
raise exc
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 128, in arun_with_retry
return await task.proc.ainvoke(task.input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 543, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 331, in ainvoke
ret = await asyncio.create_task(coro, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/IRIS-RAG-API/IRIS-RAG-API/app/rebatch_chain/graphs/simple_agent_graph.py", line 224, in custom_tools_node
subgraph_state = await t.ainvoke(tool_args, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/tools/structured.py", line 58, in ainvoke
return await super().ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/tools/base.py", line 493, in ainvoke
return await self.arun(tool_input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/tools/base.py", line 842, in arun
raise error_to_raise
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/tools/base.py", line 809, in arun
response = await asyncio.create_task(coro, context=context) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/tools/structured.py", line 97, in _arun
return await self.coroutine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/IRIS-RAG-API/IRIS-RAG-API/app/rebatch_chain/graphs/simple_agent_graph.py", line 187, in lookup
lookup_state = await lookup_graph.ainvoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 2122, in ainvoke
async for chunk in self.astream(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 2007, in astream
async for _ in runner.atick(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 444, in atick
await arun_with_retry(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 123, in arun_with_retry
async for _ in task.proc.astream(task.input, config):
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 666, in astream
async for chunk in aiterator:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1455, in atransform
async for ichunk in input:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1455, in atransform
async for ichunk in input:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 2007, in astream
async for _ in runner.atick(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 444, in atick
await arun_with_retry(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 123, in arun_with_retry
async for _ in task.proc.astream(task.input, config):
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 666, in astream
async for chunk in aiterator:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1455, in atransform
async for ichunk in input:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1455, in atransform
async for ichunk in input:
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1960, in astream
async with AsyncPregelLoop(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/loop.py", line 1103, in __aexit__
return await exit_task
^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/contextlib.py", line 745, in __aexit__
raise exc_details[1]
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/contextlib.py", line 728, in __aexit__
cb_suppress = await cb(*exc_details)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/executor.py", line 206, in __aexit__
raise exc
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/loop.py", line 1019, in _checkpointer_put_after_previous
await prev
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/pregel/loop.py", line 1021, in _checkpointer_put_after_previous
await cast(BaseCheckpointSaver, self.checkpointer).aput(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/checkpoint/memory/__init__.py", line 474, in aput
return self.put(config, checkpoint, metadata, new_versions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/checkpoint/memory/__init__.py", line 359, in put
self.serde.dumps_typed(
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 193, in dumps_typed
return "msgpack", _msgpack_enc(obj)
^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 514, in _msgpack_enc
return enc.pack(data)
^^^^^^^^^^^^^^
File "msgpack/_packer.pyx", line 279, in msgpack._cmsgpack.Packer.pack
File "msgpack/_packer.pyx", line 276, in msgpack._cmsgpack.Packer.pack
File "msgpack/_packer.pyx", line 265, in msgpack._cmsgpack.Packer._pack
File "msgpack/_packer.pyx", line 213, in msgpack._cmsgpack.Packer._pack_inner
File "msgpack/_packer.pyx", line 267, in msgpack._cmsgpack.Packer._pack
File "/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/langgraph/checkpoint/serde/jsonplus.py", line 435, in _msgpack_default
raise TypeError(f"Object of type {obj.__class__.__name__} is not serializable")
TypeError: Object of type CompiledStateGraph is not serializable
```
### Description
We have a graph with a subgraph. This subgraph has just one node that returns a list of langchain Documents and the question asked.
For some reason we get the error above: the entire CompiledStateGraph of the subgraph appears to be included in the state update.
The error is thrown during astream because the CompiledStateGraph cannot be serialized.
It seems to have started with the latest update of langgraph and langgraph-checkpoint.
It works with:
langgraph==0.2.71
langgraph-checkpoint==2.0.12
but it breaks with:
langgraph==0.2.72
langgraph-checkpoint==2.0.13
langgraph-checkpoint in particular seems to be the culprit: downgrading langgraph to 0.2.71 alone didn't help, but downgrading langgraph-checkpoint to 2.0.12 did.
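Until the regression is fixed (or while pinning `langgraph-checkpoint==2.0.12`), one defensive option is to strip non-serializable objects from a node's state update before returning it. This is a framework-agnostic diagnostic sketch, not LangGraph's actual serializer: it uses JSON round-tripping as a cheap proxy for the checkpointer's msgpack check, so it is stricter than msgpack (e.g. it also rejects message objects that msgpack handles):

```python
import json


def serializable_update(update: dict) -> dict:
    """Return only the entries of a state update that JSON can serialize.

    A leaked object such as a compiled graph raises TypeError inside
    json.dumps, so it is dropped (and could be logged here) instead of
    crashing the checkpointer later.
    """
    safe = {}
    for key, value in update.items():
        try:
            json.dumps(value)
        except TypeError:
            continue  # e.g. a CompiledStateGraph that leaked into the update
        safe[key] = value
    return safe
```

Wrapping a node's return value in this helper at least localizes the failure to the node that produced the unserializable object.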
### System Info
System Information
------------------
> OS: Linux
> OS Version: #53~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jan 15 19:18:46 UTC 2
> Python Version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.35
> langchain: 0.3.18
> langchain_community: 0.3.17
> langsmith: 0.3.8
> langchain_huggingface: 0.1.2
> langchain_milvus: 0.1.7
> langchain_openai: 0.2.13
> langchain_text_splitters: 0.3.6
> langchain_weaviate: 0.0.4
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> huggingface-hub: 0.28.1
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy: 1.26.4
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> openai: 1.55.3
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.8.2
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pymilvus: 2.4.14
> pytest: 8.3.4
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> rich: Installed. No version info available.
> sentence-transformers: 3.4.1
> simsimd: 6.2.1
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken: 0.8.0
> tokenizers: 0.19.1
> transformers: 4.42.4
> typing-extensions>=4.7: Installed. No version info available.
> weaviate-client: 4.10.4
> zstandard: 0.23.0
|
{
"login": "nfcampos",
"id": 56902,
"node_id": "MDQ6VXNlcjU2OTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/56902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nfcampos",
"html_url": "https://github.com/nfcampos",
"followers_url": "https://api.github.com/users/nfcampos/followers",
"following_url": "https://api.github.com/users/nfcampos/following{/other_user}",
"gists_url": "https://api.github.com/users/nfcampos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nfcampos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nfcampos/subscriptions",
"organizations_url": "https://api.github.com/users/nfcampos/orgs",
"repos_url": "https://api.github.com/users/nfcampos/repos",
"events_url": "https://api.github.com/users/nfcampos/events{/privacy}",
"received_events_url": "https://api.github.com/users/nfcampos/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3418/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3417
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3417/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3417/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3417/events
|
https://github.com/langchain-ai/langgraph/issues/3417
| 2,850,870,913
|
I_kwDOKFU0lc6p7NaB
| 3,417
|
Returning state in nodes won't update it
|
{
"login": "NathanAP",
"id": 12821171,
"node_id": "MDQ6VXNlcjEyODIxMTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/12821171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NathanAP",
"html_url": "https://github.com/NathanAP",
"followers_url": "https://api.github.com/users/NathanAP/followers",
"following_url": "https://api.github.com/users/NathanAP/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanAP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NathanAP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanAP/subscriptions",
"organizations_url": "https://api.github.com/users/NathanAP/orgs",
"repos_url": "https://api.github.com/users/NathanAP/repos",
"events_url": "https://api.github.com/users/NathanAP/events{/privacy}",
"received_events_url": "https://api.github.com/users/NathanAP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
null
| 2025-02-13T12:15:18
| 2025-02-13T19:12:21
| 2025-02-13T19:12:19
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
class MyNode:
def action(state: MyState):
# some cool stuff
state.my_value = 123
return state
def should_continue(state: MyState):
print(state.my_value) # this returns default value (in my case is None)
# some cool logic to continue or not
return "END"
```
### Description
Hi, I've been using the `latest` version forever, but since this morning I've been dealing with the fact that returning `state` in my nodes won't update it. When I roll back to `0.2.71` it works fine. Did something change since `0.2.71`?
Sorry if example code is not really helpful.
edit1: my versions were wrong;
edit2: looking closer, the problem seems related to lists, dicts and BaseModels. It won't update those for me; here is some `print` output I've made:
```
agent=None settings=None database=None user=User(id=UUID('89fe6ec9-a318-4c5c-8236-e870f7fa4e6f'), status=True, full_name='nathan', email="nathan@example.com"
```
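For what it's worth, the pattern LangGraph documents is for nodes to return a partial update containing only the changed keys, rather than mutating and returning the whole state object. A toy, stdlib-only illustration of that shape (this is not LangGraph's actual reducer machinery, just the return-value convention):

```python
def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the state (toy reducer)."""
    return {**state, **update}


def action(state: dict) -> dict:
    # Preferred node shape: return only the keys this node changed.
    return {"my_value": 123}


state = {"my_value": None, "user": {"full_name": "nathan"}}
state = apply_update(state, action(state))
```

Returning the full (mutated) state object instead relies on the framework detecting in-place changes, which is exactly the behaviour the version bump appears to have affected.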
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Debian 6.1.106-3 (2024-08-26)
> Python Version: 3.12.1 (main, Dec 12 2024, 22:30:56) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.3.4
> langchain_openai: 0.3.3
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.43
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.11
> packaging: 24.1
> pydantic: 2.10.6
> pydantic-settings: 2.6.1
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: Installed. No version info available.
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
|
{
"login": "NathanAP",
"id": 12821171,
"node_id": "MDQ6VXNlcjEyODIxMTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/12821171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NathanAP",
"html_url": "https://github.com/NathanAP",
"followers_url": "https://api.github.com/users/NathanAP/followers",
"following_url": "https://api.github.com/users/NathanAP/following{/other_user}",
"gists_url": "https://api.github.com/users/NathanAP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NathanAP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NathanAP/subscriptions",
"organizations_url": "https://api.github.com/users/NathanAP/orgs",
"repos_url": "https://api.github.com/users/NathanAP/repos",
"events_url": "https://api.github.com/users/NathanAP/events{/privacy}",
"received_events_url": "https://api.github.com/users/NathanAP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3417/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3417/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3416
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3416/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3416/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3416/events
|
https://github.com/langchain-ai/langgraph/issues/3416
| 2,850,584,550
|
I_kwDOKFU0lc6p6Hfm
| 3,416
|
LangGraph JS SDK : errors are parsed as JSON
|
{
"login": "arthberman",
"id": 106271022,
"node_id": "U_kgDOBlWRLg",
"avatar_url": "https://avatars.githubusercontent.com/u/106271022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthberman",
"html_url": "https://github.com/arthberman",
"followers_url": "https://api.github.com/users/arthberman/followers",
"following_url": "https://api.github.com/users/arthberman/following{/other_user}",
"gists_url": "https://api.github.com/users/arthberman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arthberman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthberman/subscriptions",
"organizations_url": "https://api.github.com/users/arthberman/orgs",
"repos_url": "https://api.github.com/users/arthberman/repos",
"events_url": "https://api.github.com/users/arthberman/events{/privacy}",
"received_events_url": "https://api.github.com/users/arthberman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
null
| 2025-02-13T10:13:44
| 2025-02-13T14:50:52
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```typescript
app.get("/test", async (c) => {
const client = new Client({
apiUrl: "http://localhost:2024",
});
const thread = await client.threads.get("non-existent-thread");
return c.json({
thread,
});
});
```
### Error Message and Stack Trace (if applicable)
```shell
[wrangler:inf] GET /test 500 Internal Server Error (39ms)
✘ [ERROR] SyntaxError: Unexpected token 'I', "Invalid th"... is not valid JSON
at async Array.<anonymous>
```
### Description
- when the API returns an error (forbidden, not found, ...), the SDK systematically parses it as JSON
- the error is not thrown as it should be
- we can't see the complete error
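A defensive client-side pattern while this is open is to attempt JSON first and fall back to the raw body, so plain-text error responses aren't swallowed by the parser. Sketched in Python for brevity (the JS SDK equivalent would be the same try/fallback shape around `response.json()`):

```python
import json


def parse_error_body(body: str) -> dict:
    """Parse an API error body as JSON, falling back to the raw text.

    Guarantees the caller always gets a dict with the full message,
    whether the server sent JSON or plain text.
    """
    try:
        parsed = json.loads(body)
    except json.JSONDecodeError:
        return {"message": body}
    if isinstance(parsed, dict):
        return parsed
    return {"message": body}
```

This preserves the complete server message in the thrown/returned error instead of a `SyntaxError` from the JSON parser.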
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:00:33 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T8122
> Python Version: 3.12.5 (main, Aug 14 2024, 04:32:18) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.34
> langchain: 0.3.18
> langchain_community: 0.3.17
> langsmith: 0.3.8
> langchain_anthropic: 0.3.7
> langchain_aws: 0.2.12
> langchain_openai: 0.3.5
> langchain_text_splitters: 0.3.6
> langgraph_api: 0.0.23
> langgraph_cli: 0.1.71
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> anthropic<1,>=0.45.0: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> boto3: 1.36.18
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> jsonschema-rs: 0.25.1
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langgraph: 0.2.71
> langgraph-checkpoint: 2.0.12
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy: 2.2.2
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> openai<2.0.0,>=1.58.1: Installed. No version info available.
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pyjwt: 2.10.1
> pytest: Installed. No version info available.
> python-dotenv: 1.0.1
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> rich: Installed. No version info available.
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 9.0.0
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken<1,>=0.7: Installed. No version info available.
> typing-extensions>=4.7: Installed. No version info available.
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3416/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3415
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3415/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3415/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3415/events
|
https://github.com/langchain-ai/langgraph/issues/3415
| 2,850,212,986
|
I_kwDOKFU0lc6p4sx6
| 3,415
|
Subgraph history
|
{
"login": "ErDaN213",
"id": 47355880,
"node_id": "MDQ6VXNlcjQ3MzU1ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/47355880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErDaN213",
"html_url": "https://github.com/ErDaN213",
"followers_url": "https://api.github.com/users/ErDaN213/followers",
"following_url": "https://api.github.com/users/ErDaN213/following{/other_user}",
"gists_url": "https://api.github.com/users/ErDaN213/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErDaN213/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErDaN213/subscriptions",
"organizations_url": "https://api.github.com/users/ErDaN213/orgs",
"repos_url": "https://api.github.com/users/ErDaN213/repos",
"events_url": "https://api.github.com/users/ErDaN213/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErDaN213/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-13T07:34:44
| 2025-02-21T11:40:49
| 2025-02-21T11:40:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3332
<div type='discussions-op-text'>
<sup>Originally posted by **ErDaN213** February 6, 2025</sup>
Hi everyone,
I am currently working with LangGraph and would like to ensure that all subgraphs and their respective nodes are being properly invoked during execution. Is there a method to track or verify that each node is being triggered as expected?
I have explored the get_state_history method; however, it does not seem to provide information regarding the invocation of subgraph nodes. I would prefer to avoid creating a custom State variable that would need to be overwritten at each node.
Could anyone suggest an appropriate way to retrieve this information, or is it possible to introduce a subgraphs=True parameter to the get_state_history method to capture such details?
I look forward to your insights and suggestions.
Thank you in advance for your assistance!
Best regards,</div>
|
{
"login": "ErDaN213",
"id": 47355880,
"node_id": "MDQ6VXNlcjQ3MzU1ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/47355880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErDaN213",
"html_url": "https://github.com/ErDaN213",
"followers_url": "https://api.github.com/users/ErDaN213/followers",
"following_url": "https://api.github.com/users/ErDaN213/following{/other_user}",
"gists_url": "https://api.github.com/users/ErDaN213/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErDaN213/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErDaN213/subscriptions",
"organizations_url": "https://api.github.com/users/ErDaN213/orgs",
"repos_url": "https://api.github.com/users/ErDaN213/repos",
"events_url": "https://api.github.com/users/ErDaN213/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErDaN213/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3415/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3415/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3393
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3393/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3393/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3393/events
|
https://github.com/langchain-ai/langgraph/issues/3393
| 2,846,817,288
|
I_kwDOKFU0lc6prvwI
| 3,393
|
LangGraph Platform Cloud SaaS timing out on deployments
|
{
"login": "austinberke",
"id": 13461291,
"node_id": "MDQ6VXNlcjEzNDYxMjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13461291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/austinberke",
"html_url": "https://github.com/austinberke",
"followers_url": "https://api.github.com/users/austinberke/followers",
"following_url": "https://api.github.com/users/austinberke/following{/other_user}",
"gists_url": "https://api.github.com/users/austinberke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/austinberke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/austinberke/subscriptions",
"organizations_url": "https://api.github.com/users/austinberke/orgs",
"repos_url": "https://api.github.com/users/austinberke/repos",
"events_url": "https://api.github.com/users/austinberke/events{/privacy}",
"received_events_url": "https://api.github.com/users/austinberke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-11T23:56:48
| 2025-02-12T03:23:03
| 2025-02-12T03:23:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
Steps to reproduce:
1. Push to branch in connected repo
2. Revision + build is automatically kicked off in LangGraph Platform
3. Wait for "Building", "Waiting for Deploy", "Deploying LangGraph Server" messages
4. The deploy will take a few minutes - if it goes over 5 minutes it will time out and fail
```
### Error Message and Stack Trace (if applicable)
```shell
Timeout: Deployment is not ready after 300 seconds.
```
### Description
I've been trying to set up LangGraph Platform Cloud SaaS to evaluate whether it's stable enough for us to host on. Although the first couple of deployments worked fine, the past several have failed with the following error message:
Timeout: Deployment is not ready after 300 seconds.
Some of these deployments show server logs indicating the server did eventually start up successfully after the failure message appeared, but many of them have empty server logs.
Is it normal for deployments to be taking this long? Was there an outage today? If it regularly goes over 300 sec should this timeout be increased?
### System Info
n/a
|
{
"login": "austinberke",
"id": 13461291,
"node_id": "MDQ6VXNlcjEzNDYxMjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13461291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/austinberke",
"html_url": "https://github.com/austinberke",
"followers_url": "https://api.github.com/users/austinberke/followers",
"following_url": "https://api.github.com/users/austinberke/following{/other_user}",
"gists_url": "https://api.github.com/users/austinberke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/austinberke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/austinberke/subscriptions",
"organizations_url": "https://api.github.com/users/austinberke/orgs",
"repos_url": "https://api.github.com/users/austinberke/repos",
"events_url": "https://api.github.com/users/austinberke/events{/privacy}",
"received_events_url": "https://api.github.com/users/austinberke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3393/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3392
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3392/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3392/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3392/events
|
https://github.com/langchain-ai/langgraph/issues/3392
| 2,846,662,511
|
I_kwDOKFU0lc6prJ9v
| 3,392
|
`TypeError` in PostgreSQL Store Setup Function (Re-execution)
|
{
"login": "and1412",
"id": 3029339,
"node_id": "MDQ6VXNlcjMwMjkzMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3029339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/and1412",
"html_url": "https://github.com/and1412",
"followers_url": "https://api.github.com/users/and1412/followers",
"following_url": "https://api.github.com/users/and1412/following{/other_user}",
"gists_url": "https://api.github.com/users/and1412/gists{/gist_id}",
"starred_url": "https://api.github.com/users/and1412/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/and1412/subscriptions",
"organizations_url": "https://api.github.com/users/and1412/orgs",
"repos_url": "https://api.github.com/users/and1412/repos",
"events_url": "https://api.github.com/users/and1412/events{/privacy}",
"received_events_url": "https://api.github.com/users/and1412/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-11T22:18:45
| 2025-02-15T02:12:22
| 2025-02-15T02:12:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from langgraph.store.postgres import AsyncPostgresStore
from psycopg_pool import AsyncConnectionPool
pool = AsyncConnectionPool(
conninfo = '...',
max_size = '...',
min_size = '...',
kwargs= {
"autocommit": True,
"prepare_threshold": 0,
}
)
await pool.open()
#...
store = AsyncPostgresStore(pool)
await store.setup()
```
### Error Message and Stack Trace (if applicable)
```shell
File "/Users/development/project/application/core/graph.py", line 410, in generate_graph
await store.setup()
File "/Users/development/project/.venv/lib/python3.11/site-packages/langgraph/store/postgres/aio.py", line 234, in setup
version = await _get_version(cur, table="store_migrations")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/development/project/.venv/lib/python3.11/site-packages/langgraph/store/postgres/aio.py", line 230, in _get_version
version = row["v"]
~~~^^^^^
TypeError: tuple indices must be integers or slices, not str
```
### Description
Hello guys,
### Bug Report: `TypeError` in PostgreSQL Store Setup Function (Re-execution)
#### Description
When executing the PostgreSQL store setup function for the **first time**, it runs without errors. However, from the **second execution onward**, the following error occurs:
```python
File "/Users/development/project/.venv/lib/python3.11/site-packages/langgraph/store/postgres/aio.py", line 230, in _get_version
version = row["v"]
~~~^^^^^
TypeError: tuple indices must be integers or slices, not str
```
#### Observations
After reviewing the code, I noticed that it expects the PostgreSQL cursor to return a **dictionary of key-value pairs**, but the cursor actually returns a plain tuple of values. This leads to the `TypeError` when attempting to access `"v"` as a dictionary key.
The issue occurs at **line 230** of `.venv/lib/python3.11/site-packages/langgraph/store/postgres/aio.py`:
```python
if row is None:
version = -1
else:
version = row["v"] # Error occurs here
return version
```
Additionally, in **lines 426-431**, the cursor is created as follows:
```python
else:
async with (
self.lock,
conn.cursor(binary=True) as cur, # Cursor creation
):
yield cur
```
It seems that `row_factory=dict_row` might need to be used to ensure the cursor returns a dictionary instead of a tuple.
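The version lookup can also be made tolerant of either row factory. A minimal sketch (using a hypothetical standalone helper, `get_version`, rather than the module's actual `_get_version`, so the example is self-contained):

```python
def get_version(row) -> int:
    """Return the stored migration version, accepting either a dict row
    (cursor created with psycopg's row_factory=dict_row) or the default
    tuple row, where the version is the only selected column."""
    if row is None:
        return -1
    if isinstance(row, dict):
        return row["v"]
    return row[0]

# Works regardless of which row factory produced the row.
print(get_version(None), get_version((3,)), get_version({"v": 3}))
```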
#### Environment
- `langgraph`: **0.2.69**
- `langgraph_checkpoint_postgres`: **2.0.13**
- Python: **3.11**
- OS: macOS
#### Suggested Fix
Would it be appropriate to use `row_factory=dict_row` when creating the cursor to ensure dictionary-based row access?
Let me know if more details are needed. Thanks!
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.4 (main, Jul 12 2023, 12:11:09) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.2.43
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.147
> langchain_aws: 0.1.6
> langchain_cli: 0.0.35
> langchain_experimental: 0.0.60
> langchain_openai: 0.1.7
> langchain_postgres: 0.0.9
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.21
> langgraph: 0.2.69
> langserve: 0.2.2
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: Installed. No version info available.
> beautifulsoup4: Installed. No version info available.
> boto3: 1.34.162
> dataclasses-json: 0.6.7
> faker: 19.13.0
> fastapi: 0.115.6
> gitpython: 3.1.43
> gritql: 0.1.5
> httpx: 0.27.0
> jinja2: 3.1.4
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.10
> langgraph-sdk: 0.1.51
> langserve[all]: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> lxml: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.57.0
> orjson: 3.10.5
> packaging: 23.2
> pandas: 2.2.3
> pgvector: 0.2.5
> presidio-analyzer: Installed. No version info available.
> presidio-anonymizer: Installed. No version info available.
> psycopg: 3.2.3
> psycopg-pool: 3.2.4
> pydantic: 2.10.3
> pyproject-toml: 0.0.10
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: Installed. No version info available.
> SQLAlchemy: 2.0.36
> sqlalchemy: 2.0.36
> sse-starlette: 1.8.2
> tabulate: Installed. No version info available.
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> types-requests: 2.32.0.20240622
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
|
{
"login": "and1412",
"id": 3029339,
"node_id": "MDQ6VXNlcjMwMjkzMzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3029339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/and1412",
"html_url": "https://github.com/and1412",
"followers_url": "https://api.github.com/users/and1412/followers",
"following_url": "https://api.github.com/users/and1412/following{/other_user}",
"gists_url": "https://api.github.com/users/and1412/gists{/gist_id}",
"starred_url": "https://api.github.com/users/and1412/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/and1412/subscriptions",
"organizations_url": "https://api.github.com/users/and1412/orgs",
"repos_url": "https://api.github.com/users/and1412/repos",
"events_url": "https://api.github.com/users/and1412/events{/privacy}",
"received_events_url": "https://api.github.com/users/and1412/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3392/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3383
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3383/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3383/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3383/events
|
https://github.com/langchain-ai/langgraph/issues/3383
| 2,846,023,141
|
I_kwDOKFU0lc6pot3l
| 3,383
|
Typo in the read_document tool/function in the Hierarchical Agent Teams notebook?
|
{
"login": "milenakapralova",
"id": 79589398,
"node_id": "MDQ6VXNlcjc5NTg5Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/79589398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/milenakapralova",
"html_url": "https://github.com/milenakapralova",
"followers_url": "https://api.github.com/users/milenakapralova/followers",
"following_url": "https://api.github.com/users/milenakapralova/following{/other_user}",
"gists_url": "https://api.github.com/users/milenakapralova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/milenakapralova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/milenakapralova/subscriptions",
"organizations_url": "https://api.github.com/users/milenakapralova/orgs",
"repos_url": "https://api.github.com/users/milenakapralova/repos",
"events_url": "https://api.github.com/users/milenakapralova/events{/privacy}",
"received_events_url": "https://api.github.com/users/milenakapralova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-11T17:21:09
| 2025-02-11T18:13:22
| 2025-02-11T18:08:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
def provide_a_start_line(
start: Annotated[Optional[int], "The start line. Default is 0"] = None
) -> str:
"""Return the start line."""
if start is not None:
start = 0
return start
print(provide_a_start_line(None), provide_a_start_line(1), provide_a_start_line(2))
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I am using the `read_document()` function in the [Hierarchical Agent Teams](https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/multi_agent/hierarchical_agent_teams.ipynb) notebook and the function:
* keeps the input None start line as None, and
* overwrites the non-None input start line with 0.
Therefore, lines[start:end] is lines[0:end] irrespective of the start provided as argument to the function.
The function in question:
```
def read_document(
file_name: Annotated[str, "File path to read the document from."],
start: Annotated[Optional[int], "The start line. Default is 0"] = None,
end: Annotated[Optional[int], "The end line. Default is None"] = None,
) -> str:
"""Read the specified document."""
with (WORKING_DIRECTORY / file_name).open("r") as file:
lines = file.readlines()
if start is not None:
start = 0
return "\n".join(lines[start:end])
```
So the function can either change the if-statement to the form `if start is None:` or remove the if-statement altogether, since `lines[None:end]` is equivalent to `lines[0:end]` in Python slicing.
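The corrected default handling can be sketched in isolation (using a hypothetical `slice_lines` helper over an in-memory list, so the example needs no file on disk):

```python
from typing import Optional

def slice_lines(lines: list[str], start: Optional[int] = None,
                end: Optional[int] = None) -> str:
    """Join lines[start:end], defaulting a missing start to 0 instead
    of overwriting a caller-provided start (the bug in the notebook)."""
    if start is None:
        start = 0
    return "\n".join(lines[start:end])

lines = ["line0", "line1", "line2", "line3"]
print(slice_lines(lines, start=2))  # start is now respected: "line2\nline3"
```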
### System Info
Replication requires only python's typing module: `from typing import Optional`
`python -m langchain_core.sys_info` gives:
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:06 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T8103
> Python Version: 3.11.4 (main, Jul 5 2023, 09:00:44) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.34
> langchain: 0.3.18
> langchain_community: 0.3.17
> langsmith: 0.1.147
> langchain_anthropic: 0.3.7
> langchain_experimental: 0.3.4
> langchain_openai: 0.2.11
> langchain_text_splitters: 0.3.6
> langgraph_cli: 0.1.61
> langgraph_sdk: 0.1.43
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> anthropic<1,>=0.45.0: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> click: 8.1.7
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> httpx: 0.28.0
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langgraph-api: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> openai: 1.56.2
> orjson: 3.10.12
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.3
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> python-dotenv: 1.0.1
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken: 0.8.0
> typing-extensions>=4.7: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3383/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3382
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3382/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3382/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3382/events
|
https://github.com/langchain-ai/langgraph/issues/3382
| 2,845,572,718
|
I_kwDOKFU0lc6pm_5u
| 3,382
|
[Urgent] Live deployment doesn't work on LangGraph Cloud
|
{
"login": "n-sviridenko",
"id": 9335422,
"node_id": "MDQ6VXNlcjkzMzU0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9335422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n-sviridenko",
"html_url": "https://github.com/n-sviridenko",
"followers_url": "https://api.github.com/users/n-sviridenko/followers",
"following_url": "https://api.github.com/users/n-sviridenko/following{/other_user}",
"gists_url": "https://api.github.com/users/n-sviridenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n-sviridenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-sviridenko/subscriptions",
"organizations_url": "https://api.github.com/users/n-sviridenko/orgs",
"repos_url": "https://api.github.com/users/n-sviridenko/repos",
"events_url": "https://api.github.com/users/n-sviridenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/n-sviridenko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-11T14:31:29
| 2025-02-12T14:12:58
| 2025-02-12T00:28:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
-
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
You seem to have a typo inside `/api/langgraph_api/queue.py` (you wrote `wait` instead of `await`), which has left all live deployments stuck for at least 6 hours.
```
Background worker scheduler failed
Traceback (most recent call last):
File "/api/langgraph_api/queue.py", line 126, in queue
File "/usr/local/lib/python3.11/contextlib.py", line 334, in helper
return _AsyncGeneratorContextManager(func, args, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/contextlib.py", line 105, in __init__
self.gen = func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
TypeError: Runs.next() got an unexpected keyword argument 'wait'
```
### System Info
-
|
{
"login": "jacoblee93",
"id": 6952323,
"node_id": "MDQ6VXNlcjY5NTIzMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6952323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacoblee93",
"html_url": "https://github.com/jacoblee93",
"followers_url": "https://api.github.com/users/jacoblee93/followers",
"following_url": "https://api.github.com/users/jacoblee93/following{/other_user}",
"gists_url": "https://api.github.com/users/jacoblee93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacoblee93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacoblee93/subscriptions",
"organizations_url": "https://api.github.com/users/jacoblee93/orgs",
"repos_url": "https://api.github.com/users/jacoblee93/repos",
"events_url": "https://api.github.com/users/jacoblee93/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacoblee93/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3382/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3380
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3380/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3380/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3380/events
|
https://github.com/langchain-ai/langgraph/issues/3380
| 2,845,287,339
|
I_kwDOKFU0lc6pl6Or
| 3,380
|
For LangGraph workflow with parallel node execution (Send/map-reduce), follow-up questions are not working; memory/checkpointer not working
|
{
"login": "akashAD98",
"id": 62583018,
"node_id": "MDQ6VXNlcjYyNTgzMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/62583018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashAD98",
"html_url": "https://github.com/akashAD98",
"followers_url": "https://api.github.com/users/akashAD98/followers",
"following_url": "https://api.github.com/users/akashAD98/following{/other_user}",
"gists_url": "https://api.github.com/users/akashAD98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashAD98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashAD98/subscriptions",
"organizations_url": "https://api.github.com/users/akashAD98/orgs",
"repos_url": "https://api.github.com/users/akashAD98/repos",
"events_url": "https://api.github.com/users/akashAD98/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashAD98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-11T12:42:20
| 2025-02-19T15:39:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# Assumed imports and helpers (not shown in the original report)
import logging
import operator
import time
from typing import Annotated, List, Optional

import redis
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.redis import RedisSaver  # requires langgraph-checkpoint-redis
from langgraph.graph import END, START, StateGraph
from langgraph.types import Send
from pydantic import BaseModel
from typing_extensions import TypedDict

logger = logging.getLogger(__name__)
REDIS_URL = "redis://localhost:6379"  # placeholder connection URL

class BestResponse(BaseModel):  # structured-output schema assumed by the reporter's code
    id: int

# Model and prompts
query_prompt = """Given the query, create a lexical keyword search query that would help find relevant information.
Original query: {query}
Return only the lexical keyword query without any explanation."""
response_prompt = """Write 2 lines of response for this query: {query}
Make the response informative and concise."""
best_response_prompt = """Below are responses to the query: {query}
Select the best response! Return the ID of the best one (0 or 1).
{responses}"""
class OverallState(TypedDict):
query: str
queries: List[str]
responses: Annotated[List[str], operator.add]
best_response: str
class QueryState(TypedDict):
query: str
class QueryWorkflow:
def __init__(self):
logger.info("Initializing QueryWorkflow")
try:
self.llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
self.workflow_app = None
self.checkpointer = None
self.setup_workflow()
except Exception as e:
logger.error(f"Error in initialization: {str(e)}")
raise
def setup_workflow(self):
"""Set up workflow with parallel processing and Redis checkpoint support."""
logger.info("Setting up workflow")
try:
workflow = StateGraph(OverallState)
def generate_queries(state: OverallState):
"""Generate list of queries including original and lexical keyword query."""
logger.info(f"Generating queries for: {state['query']}")
prompt = query_prompt.format(query=state['query'])
response = self.llm.invoke(prompt)
lexical_query = response.content.strip()
queries = [state['query'], lexical_query]
logger.info(f"Generated queries: {queries}")
return {"queries": queries}
def process_query(state: QueryState):
"""Process a single query and generate response."""
logger.info(f"Processing query: {state['query']}")
prompt = response_prompt.format(query=state['query'])
response = self.llm.invoke(prompt)
logger.info(f"Generated response for query: {state['query']}")
return {"responses": [response.content.strip()]}
def continue_to_processing(state: OverallState):
"""Map function to distribute queries to parallel processing."""
logger.info(f"Mapping {len(state['queries'])} queries to parallel processing")
return [Send("process_query", {"query": q}) for q in state['queries']]
def select_best_response(state: OverallState):
"""Select the best response from all generated responses."""
logger.info(f"Selecting best response from {len(state['responses'])} responses")
formatted_responses = "\n\n".join(f"Response {i}: {resp}" for i, resp in enumerate(state['responses']))
prompt = best_response_prompt.format(query=state['query'], responses=formatted_responses)
response = self.llm.with_structured_output(BestResponse).invoke(prompt)
best_response = state['responses'][response.id]
logger.info(f"Selected best response (index {response.id})")
return {"best_response": best_response}
# Add nodes
workflow.add_node("generate_queries", generate_queries)
workflow.add_node("process_query", process_query)
workflow.add_node("select_best_response", select_best_response)
# Define edges
workflow.add_edge(START, "generate_queries")
workflow.add_conditional_edges(
"generate_queries",
continue_to_processing,
["process_query"]
)
workflow.add_edge("process_query", "select_best_response")
workflow.add_edge("select_best_response", END)
# Initialize Redis checkpointer
logger.info("Initializing Redis checkpointer")
with RedisSaver.from_conn_url(REDIS_URL) as checkpointer:
self.checkpointer = checkpointer
self.workflow_app = workflow.compile(checkpointer=self.checkpointer)
logger.info("Workflow compilation completed with Redis checkpoint support")
except Exception as e:
logger.error(f"Error in setup_workflow: {str(e)}")
raise
def store_thread_id(self, session_id: str, thread_id: str):
"""Store the thread_id in Redis for a given session_id."""
redis_key = f"query_thread_id_{session_id}"
try:
logger.info(f"Storing thread ID {thread_id} for session {session_id}")
r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
r.set(redis_key, thread_id)
except Exception as e:
logger.error(f"Error saving thread ID to Redis: {str(e)}")
raise
def get_thread_id(self, session_id: str) -> Optional[str]:
"""Retrieve the thread_id from Redis for a given session_id."""
redis_key = f"query_thread_id_{session_id}"
try:
r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
thread_id = r.get(redis_key)
logger.info(f"Retrieved thread ID {thread_id} for session {session_id}")
return thread_id
except Exception as e:
logger.error(f"Error retrieving thread ID from Redis: {str(e)}")
return None
def process_query(self, query: str, session_id: str = None):
"""Process queries with Redis checkpoint and streaming support."""
try:
if not self.workflow_app:
raise ValueError("Workflow is not initialized.")
logger.info(f"Processing query: {query}")
# Use provided session_id or generate one
session_id = session_id or str(time.time())
logger.info(f"Using session ID: {session_id}")
# Get or create thread ID
thread_id = self.get_thread_id(session_id)
if not thread_id:
thread_id = f"query-thread-{session_id}"
self.store_thread_id(session_id, thread_id)
logger.info(f"Using thread ID: {thread_id}")
# Configure thread settings
config = {
"configurable": {
"thread_id": thread_id,
"checkpoint_ns": "query",
"recursion_limit": 2
}
}
# Initialize state
initial_state = {
"query": query,
"queries": [],
"responses": [],
"best_response": ""
}
logger.info("Starting workflow execution with streaming")
# Stream the workflow execution
for step in self.workflow_app.stream(initial_state, config):
logger.info(f"Step output: {step}")
# Print progress information
if "generate_queries" in step:
print("\nGenerated queries:", step["generate_queries"].get("queries", []))
elif "process_query" in step:
print("\nGenerated response:", step["process_query"].get("responses", [])[0] if step["process_query"].get("responses") else "")
elif "select_best_response" in step:
if "best_response" in step["select_best_response"]:
return step["select_best_response"]["best_response"]
return "No response generated."
except Exception as e:
logger.error(f"Error in process_query: {str(e)}")
return f"Error processing query: {str(e)}"
# Example usage
if __name__ == "__main__":
try:
logger.info("Starting query workflow example")
workflow = QueryWorkflow()
        query = "What was my last question?"
session_id = '123456789'
result = workflow.process_query(query, session_id)
print("\nBest Response:")
print(result)
except Exception as e:
logger.error(f"Main execution error: {str(e)}")
```
### Error Message and Stack Trace (if applicable)
```shell
I am able to get the answer to the first question,
but when I ask a follow-up question it is not able to recognise the history, even though I printed out all my state snapshots. Do we have any other way to make this work?
```
### Description
I'm working on building a custom workflow where a user asks a question.
1. We generate three different queries.
2. Each query runs in parallel.
3. The outputs are combined and passed to the final LLM to generate an answer.
However, how is memory handled in this process?
When a follow-up question is asked, the system generates three new queries instead of leveraging previous context from memory. Should we incorporate an additional agent, planner, or any other mechanism to handle this more effectively? Looking for suggestions.
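One possible reason (a stdlib-only sketch, not LangGraph internals): each parallel branch created with `Send` receives only the payload dict you construct for it, so prior conversation turns are invisible to a branch unless you thread them into every payload explicitly. The names `QueryPayload` and the standalone `continue_to_processing` below are illustrative assumptions, not LangGraph APIs:

```python
# Stdlib-only sketch: fan-out payloads must carry history explicitly;
# QueryPayload / continue_to_processing are illustrative, not LangGraph APIs.
from typing import List, TypedDict

class QueryPayload(TypedDict):
    query: str
    history: List[str]  # prior turns, added by hand

def continue_to_processing(queries: List[str], history: List[str]) -> List[QueryPayload]:
    # analogous to: [Send("process_query", {"query": q, "history": history}) for q in queries]
    return [{"query": q, "history": history} for q in queries]

payloads = continue_to_processing(
    ["What was my last question?", "last question keyword search"],
    ["user: first question", "assistant: first answer"],
)

assert len(payloads) == 2
assert all(p["history"][0].startswith("user:") for p in payloads)
```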
my working code with memory
```
# Assumed imports (not shown in the original report)
import time
from typing import Optional

import redis
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessageChunk, HumanMessage
from langgraph.checkpoint.redis import RedisSaver  # requires langgraph-checkpoint-redis
from langgraph.graph import END, START, MessagesState, StateGraph

REDIS_URL = "redis://localhost:6379"  # placeholder connection URL
logger = Logger.get_logger()  # Logger is the reporter's logging helper
# Simplified state type
class QueryState(TypedDict):
query: str
answer: str
class QueryWorkflow:
def __init__(self):
self.llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
self.workflow_app = None
self.checkpointer = None
self.setup_workflow()
def setup_workflow(self):
"""Set up workflow with Redis checkpoint support."""
logger.info("Setting up query workflow...")
workflow = StateGraph(MessagesState)
def get_answer(state: MessagesState):
"""Get answer from LLM."""
messages = state["messages"]
response = self.llm.invoke(messages)
return {"messages": [response]}
# Add nodes
workflow.add_node("get_answer", get_answer)
# workflow.add_node("translate_hindi", translate_to_hindi)
# Update edges
workflow.add_edge(START, "get_answer")
workflow.add_edge("get_answer", END)
# workflow.add_edge("translate_hindi", END)
# Initialize Redis checkpointer
with RedisSaver.from_conn_url(REDIS_URL) as checkpointer:
self.checkpointer = checkpointer
self.workflow_app = workflow.compile(checkpointer=self.checkpointer)
logger.info("Query workflow setup completed with Redis checkpoint support")
def store_thread_id(self, session_id: str, thread_id: str):
"""Store the thread_id in Redis for a given session_id."""
redis_key = f"query_thread_id_{session_id}"
try:
r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
r.set(redis_key, thread_id)
except Exception as e:
logger.error(f"Error saving thread ID to Redis: {e}")
def get_thread_id(self, session_id: str) -> Optional[str]:
"""Retrieve the thread_id from Redis for a given session_id."""
redis_key = f"query_thread_id_{session_id}"
try:
r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
return r.get(redis_key)
except Exception as e:
logger.error(f"Error retrieving thread ID from Redis: {e}")
return None
def process_query(self, query: str, session_id: str = None):
"""Process a query with Redis checkpoint and streaming support."""
try:
if not self.workflow_app:
raise ValueError("Workflow is not initialized.")
# Use provided session_id or generate one
session_id = session_id or str(time.time())
# Get or create thread ID
thread_id = self.get_thread_id(session_id)
if not thread_id:
thread_id = f"query-thread-{session_id}"
self.store_thread_id(session_id, thread_id)
# Configure thread settings
config = {"configurable": {"thread_id": thread_id, "recursion_limit": 2}}
# Create input message
input_messages = [HumanMessage(content=query)]
# Initialize streaming variables
first = True
gathered_response = None
# Stream the response
for msg, _ in self.workflow_app.stream(
{"messages": input_messages}, config, stream_mode="messages"
):
if msg.content and not isinstance(msg, HumanMessage):
if isinstance(msg, AIMessageChunk):
if first:
gathered_response = msg
first = False
else:
gathered_response = gathered_response + msg
# Get workflow state for debugging/monitoring
print('thread_id is >> ', thread_id)
state_graph = self.workflow_app.get_state(config)
print("state_graph is ", state_graph)
print("state history is ", list(self.workflow_app.get_state_history(config)))
if gathered_response:
return gathered_response.content
logger.warning("No AI response received from workflow")
return "No response received from workflow."
except Exception as e:
logger.error(f"Error in query workflow: {e}")
return f"Error processing query: {str(e)}"
# Example usage:
if __name__ == "__main__":
workflow = QueryWorkflow()
# First question
result1 = workflow.process_query(
query="What was my last question?",
session_id="02020"
)
print(f"First Result: {result1}")
```
The working code is simple, but why is it not working with parallel nodes?
### System Info
I'm using Ubuntu 22.04
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3380/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3372
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3372/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3372/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3372/events
|
https://github.com/langchain-ai/langgraph/issues/3372
| 2,843,259,162
|
I_kwDOKFU0lc6peLEa
| 3,372
|
In LangGraph Studio the subgraph is not shown when we invoke the subgraph inside a parent graph's node
|
{
"login": "arya18mak",
"id": 55435847,
"node_id": "MDQ6VXNlcjU1NDM1ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/55435847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arya18mak",
"html_url": "https://github.com/arya18mak",
"followers_url": "https://api.github.com/users/arya18mak/followers",
"following_url": "https://api.github.com/users/arya18mak/following{/other_user}",
"gists_url": "https://api.github.com/users/arya18mak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arya18mak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arya18mak/subscriptions",
"organizations_url": "https://api.github.com/users/arya18mak/orgs",
"repos_url": "https://api.github.com/users/arya18mak/repos",
"events_url": "https://api.github.com/users/arya18mak/events{/privacy}",
"received_events_url": "https://api.github.com/users/arya18mak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-10T18:17:12
| 2025-02-22T12:50:45
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# Assumed imports (not shown in the original report)
from typing import TypedDict
from langgraph.graph import START, StateGraph

# Define subgraph
class SubgraphState(TypedDict):
# note that none of these keys are shared with the parent graph state
bar: str
baz: str
def subgraph_node_1(state: SubgraphState):
return {"baz": "baz"}
def subgraph_node_2(state: SubgraphState):
return {"bar": state["bar"] + state["baz"]}
subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()
# Define parent graph
class ParentState(TypedDict):
foo: str
def node_1(state: ParentState):
return {"foo": "hi! " + state["foo"]}
def node_2(state: ParentState):
# transform the state to the subgraph state
response = subgraph.invoke({"bar": state["foo"]})
# transform response back to the parent state
return {"foo": response["bar"]}
builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
# note that instead of using the compiled subgraph we are using `node_2` function that is calling the subgraph
builder.add_node("node_2", node_2)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()
# In the above case the subgraph doesn't visualize, but here it does:
from langgraph.graph import START, StateGraph
from typing import TypedDict
# Define subgraph
class SubgraphState(TypedDict):
foo: str # note that this key is shared with the parent graph state
bar: str
def subgraph_node_1(state: SubgraphState):
return {"bar": "bar"}
def subgraph_node_2(state: SubgraphState):
# note that this node is using a state key ('bar') that is only available in the subgraph
# and is sending update on the shared state key ('foo')
return {"foo": state["foo"] + state["bar"]}
subgraph_builder = StateGraph(SubgraphState)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph = subgraph_builder.compile()
# Define parent graph
class ParentState(TypedDict):
foo: str
def node_1(state: ParentState):
return {"foo": "hi! " + state["foo"]}
builder = StateGraph(ParentState)
builder.add_node("node_1", node_1)
# note that we're adding the compiled subgraph as a node to the parent graph
builder.add_node("node_2", subgraph)
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
graph = builder.compile()
```
### Error Message and Stack Trace (if applicable)
```shell
No error
```
### Description
I am using the LangGraph CLI and LangSmith to visualize the graph in Studio, but when a subgraph is invoked from inside a parent node, the UI doesn't expand to the subgraph level.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.3.0: Thu Jan 2 20:22:58 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T8132
> Python Version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.34
> langchain: 0.3.18
> langchain_community: 0.3.17
> langsmith: 0.3.8
> langchain_ollama: 0.2.3
> langchain_openai: 0.3.4
> langchain_text_splitters: 0.3.6
> langgraph_api: 0.0.22
> langgraph_cli: 0.1.71
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp<4.0.0,>=3.8.3: Installed. No version info available.
> async-timeout<5.0.0,>=4.0.0;: Installed. No version info available.
> click: 8.1.8
> cryptography: 44.0.0
> dataclasses-json<0.7,>=0.5.7: Installed. No version info available.
> httpx: 0.28.1
> httpx-sse<1.0.0,>=0.4.0: Installed. No version info available.
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> jsonschema-rs: 0.29.1
> langchain-anthropic;: Installed. No version info available.
> langchain-aws;: Installed. No version info available.
> langchain-cohere;: Installed. No version info available.
> langchain-community;: Installed. No version info available.
> langchain-core<1.0.0,>=0.3.34: Installed. No version info available.
> langchain-deepseek;: Installed. No version info available.
> langchain-fireworks;: Installed. No version info available.
> langchain-google-genai;: Installed. No version info available.
> langchain-google-vertexai;: Installed. No version info available.
> langchain-groq;: Installed. No version info available.
> langchain-huggingface;: Installed. No version info available.
> langchain-mistralai;: Installed. No version info available.
> langchain-ollama;: Installed. No version info available.
> langchain-openai;: Installed. No version info available.
> langchain-text-splitters<1.0.0,>=0.3.6: Installed. No version info available.
> langchain-together;: Installed. No version info available.
> langchain<1.0.0,>=0.3.18: Installed. No version info available.
> langgraph: 0.2.70
> langgraph-checkpoint: 2.0.12
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> langsmith<0.4,>=0.1.17: Installed. No version info available.
> numpy<2,>=1.26.4;: Installed. No version info available.
> numpy<3,>=1.26.2;: Installed. No version info available.
> ollama: 0.4.7
> openai<2.0.0,>=1.58.1: Installed. No version info available.
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic-settings<3.0.0,>=2.4.0: Installed. No version info available.
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pyjwt: 2.10.1
> pytest: Installed. No version info available.
> python-dotenv: 1.0.1
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> requests<3,>=2: Installed. No version info available.
> rich: Installed. No version info available.
> SQLAlchemy<3,>=1.4: Installed. No version info available.
> sse-starlette: 2.2.1
> starlette: 0.45.3
> structlog: 25.1.0
> tenacity: 9.0.0
> tenacity!=8.4.0,<10,>=8.1.0: Installed. No version info available.
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> tiktoken<1,>=0.7: Installed. No version info available.
> typing-extensions>=4.7: Installed. No version info available.
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3372/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3372/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3369
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3369/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3369/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3369/events
|
https://github.com/langchain-ai/langgraph/issues/3369
| 2,842,759,259
|
I_kwDOKFU0lc6pcRBb
| 3,369
|
`custom` stream_mode does not update state
|
{
"login": "aguadoenzo",
"id": 11411723,
"node_id": "MDQ6VXNlcjExNDExNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11411723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aguadoenzo",
"html_url": "https://github.com/aguadoenzo",
"followers_url": "https://api.github.com/users/aguadoenzo/followers",
"following_url": "https://api.github.com/users/aguadoenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/aguadoenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aguadoenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aguadoenzo/subscriptions",
"organizations_url": "https://api.github.com/users/aguadoenzo/orgs",
"repos_url": "https://api.github.com/users/aguadoenzo/repos",
"events_url": "https://api.github.com/users/aguadoenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/aguadoenzo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-10T15:18:22
| 2025-02-10T17:51:40
| 2025-02-10T17:51:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import time
import uuid
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.types import Command, StreamWriter
from typing_extensions import TypedDict
class State(TypedDict):
foo: str
def node_a(state: State, writer: StreamWriter):
# Simulate non-langchain LLM call
response = ""
for chunk in ["hello", "world", "foo", "bar"]:
time.sleep(1)
response += chunk
writer({"foo": response})
return {"foo": response}
builder = StateGraph(State)
builder.add_edge(START, "node_a")
builder.add_node(node_a)
config = {"configurable": {"thread_id": uuid.uuid4()}}
memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
state = State(foo="")
for chunk in graph.stream(state, stream_mode="custom", config=config):
print("== graph state: ", graph.get_state(config).values)
print("== Returned chunk: ", chunk, "\n\n")
```
### Error Message and Stack Trace (if applicable)
```shell
== graph state: {'foo': ''}
== Returned chunk: {'foo': 'hello'}
== graph state: {'foo': ''}
== Returned chunk: {'foo': 'helloworld'}
== graph state: {'foo': ''}
== Returned chunk: {'foo': 'helloworldfoo'}
== graph state: {'foo': 'helloworldfoobar'}
== Returned chunk: {'foo': 'helloworldfoobar'}
```
### Description
I want to build a graph that has non-langchain LLM streaming, as well as interrupts.
For interrupts, as per the [documentation](https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/wait-user-input/#simple-usage) and [this GH discussion](https://github.com/langchain-ai/langgraph/discussions/3336), I need the `updates` streaming mode (although it never mentions that `updates` is required).
For streaming LLM tokens, as per the [documentation](https://langchain-ai.github.io/langgraph/how-tos/streaming-tokens/#example-without-langchain) I need to use the `custom` mode as well.
Since `updates` and `custom` don't return the full state, I'm using [`get_state`](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.graph.CompiledGraph.get_state) to avoid having to rebuild the whole state every time before sending it back to the caller.
**What I expect**:
`get_state()` returns the updated state for every `StreamWriter()` call.
**What happens**:
Calling `StreamWriter()` does not update the state, which makes `get_state` useless for streaming.
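This matches how side-channel streaming generally behaves: `writer(...)` emits a chunk without committing anything, and state is only written from the node's return value. A stdlib-only sketch of the pattern (not LangGraph internals; the exact ordering around the final chunk may differ from the real runtime):

```python
# Stdlib-only simulation: chunks are emitted mid-node, state commits on return.
from typing import Dict, Iterator

state: Dict[str, str] = {"foo": ""}

def node_a() -> Iterator[Dict[str, str]]:
    response = ""
    for chunk in ["hello", "world", "foo", "bar"]:
        response += chunk
        yield {"foo": response}      # streamed chunk, not persisted
    state["foo"] = response          # "persisted" only when the node finishes

observed = []
for payload in node_a():
    observed.append((state["foo"], payload["foo"]))  # state lags the stream

assert observed[0] == ("", "hello")
assert state["foo"] == "helloworldfoobar"
```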
### System Info
Not using langchain
|
{
"login": "aguadoenzo",
"id": 11411723,
"node_id": "MDQ6VXNlcjExNDExNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11411723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aguadoenzo",
"html_url": "https://github.com/aguadoenzo",
"followers_url": "https://api.github.com/users/aguadoenzo/followers",
"following_url": "https://api.github.com/users/aguadoenzo/following{/other_user}",
"gists_url": "https://api.github.com/users/aguadoenzo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aguadoenzo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aguadoenzo/subscriptions",
"organizations_url": "https://api.github.com/users/aguadoenzo/orgs",
"repos_url": "https://api.github.com/users/aguadoenzo/repos",
"events_url": "https://api.github.com/users/aguadoenzo/events{/privacy}",
"received_events_url": "https://api.github.com/users/aguadoenzo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3369/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3366
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3366/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3366/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3366/events
|
https://github.com/langchain-ai/langgraph/issues/3366
| 2,840,920,550
|
I_kwDOKFU0lc6pVQHm
| 3,366
|
LangGraph docker compose 100% CPU usage
|
{
"login": "TimurMunykin",
"id": 128926700,
"node_id": "U_kgDOB69D7A",
"avatar_url": "https://avatars.githubusercontent.com/u/128926700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimurMunykin",
"html_url": "https://github.com/TimurMunykin",
"followers_url": "https://api.github.com/users/TimurMunykin/followers",
"following_url": "https://api.github.com/users/TimurMunykin/following{/other_user}",
"gists_url": "https://api.github.com/users/TimurMunykin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimurMunykin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimurMunykin/subscriptions",
"organizations_url": "https://api.github.com/users/TimurMunykin/orgs",
"repos_url": "https://api.github.com/users/TimurMunykin/repos",
"events_url": "https://api.github.com/users/TimurMunykin/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimurMunykin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-02-09T19:57:52
| 2025-02-22T21:14:10
| 2025-02-20T21:42:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [ ] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
python3.11 -m venv .venv
source .venv/bin/activate
python3 --version
Python 3.11.11
pip install -U langgraph-cli
langgraph build -t my-image
docker compose up -d
```
### Error Message and Stack Trace (if applicable)
```shell
#log from container
Executing task: docker logs --tail 1000 -f 17d8796d5938042773496dd98afee1314da7a2a5e0b477f813b68f12bf46fb1d
2025-02-09T19:36:33.445133Z [info ] Using auth of type=noop [langgraph_api.auth.middleware] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.452650Z [info ] Started server process [1] [uvicorn.error] api_revision=7d5eecb api_variant=local color_message='Started server process [\x1b[36m%d\x1b[0m]'
2025-02-09T19:36:33.452835Z [info ] Waiting for application startup. [uvicorn.error] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.453088Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.873404Z [info ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.927013Z [info ] No LANGGRAPH_STORE configuration found, using default configuration [langgraph_storage.database] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.929783Z [info ] Postgres pool stats [langgraph_storage.database] api_revision=7d5eecb api_variant=local connections_ms=13 connections_num=1 pool_available=1 pool_max=150 pool_min=1 pool_size=1 requests_num=1 requests_waiting=0 usage_ms=1
2025-02-09T19:36:33.932606Z [info ] Redis pool stats [langgraph_storage.redis] api_revision=7d5eecb api_variant=local idle_connections=1 in_use_connections=0 max_connections=500
2025-02-09T19:36:33.942666Z [info ] Registering graph with id 'my_agent' [langgraph_api.graph] api_revision=7d5eecb api_variant=local graph_id=my_agent
2025-02-09T19:36:33.946778Z [info ] Starting metadata loop [langgraph_api.metadata] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.947942Z [info ] Application startup complete. [uvicorn.error] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.948665Z [info ] Starting 10 background workers [langgraph_api.queue] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:33.949463Z [info ] Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) [uvicorn.error] api_revision=7d5eecb api_variant=local color_message='Uvicorn running on \x1b[1m%s://%s:%d\x1b[0m (Press CTRL+C to quit)'
2025-02-09T19:36:33.950474Z [info ] Worker stats [langgraph_api.queue] active=0 api_revision=7d5eecb api_variant=local available=10 max=10
2025-02-09T19:36:33.955662Z [info ] Queue stats [langgraph_api.queue] api_revision=7d5eecb api_variant=local med_age_secs=None min_age_secs=None n_pending=0 n_running=0
2025-02-09T19:36:33.982250Z [info ] Sweeped runs [langgraph_api.queue] api_revision=7d5eecb api_variant=local run_ids=[]
2025-02-09T19:36:34.121100Z [info ] HTTP Request: POST https://api.smith.langchain.com/v1/metadata/submit "HTTP/1.1 204 No Content" [httpx] api_revision=7d5eecb api_variant=local
2025-02-09T19:36:36.935696Z [info ] GET /ok 200 0ms [langgraph_api.server] api_revision=7d5eecb api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.11', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-09T19:36:42.057932Z [info ] GET /ok 200 0ms [langgraph_api.server] api_revision=7d5eecb api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.11', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-09T19:36:47.202318Z [info ] GET /ok 200 0ms [langgraph_api.server] api_revision=7d5eecb api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.11', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-09T19:36:52.351129Z [info ] GET /ok 200 0ms [langgraph_api.server] api_revision=7d5eecb api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.11', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
2025-02-09T19:36:57.504541Z [info ] GET /ok 200 0ms [langgraph_api.server] api_revision=7d5eecb api_variant=local latency_ms=0 method=GET path=/ok path_params={} proto=1.1 query_string= req_header={'accept-encoding': 'identity', 'host': 'localhost:8000', 'user-agent': 'Python-urllib/3.11', 'connection': 'close'} res_header={'content-length': '11', 'content-type': 'application/json'} route=None status=200
```
### Description
I am seeing 100% CPU usage when running self-hosted LangGraph with docker compose.
Here is an example that you can easily run https://github.com/TimurMunykin/langgraph_issue
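To narrow down where the CPU is going before reading the server logs, it helps to snapshot per-process CPU both from the host and inside the container. A diagnostic sketch — the container name `langgraph-api` is a placeholder; `docker stats` and `docker top` are standard Docker CLI subcommands, and the `ps` line works from a shell inside the container:

```shell
# Host side: one-shot per-container CPU snapshot (placeholder container name).
# docker stats --no-stream langgraph-api
# docker top langgraph-api

# Inside the container: top CPU consumers, highest first.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5
```

If a single worker or the uvicorn process dominates, that points at the app; if it is spread across the Postgres/Redis containers, the compose setup is the place to look.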
### System Info
root@17d8796d5938:/deps/__outer_langgraph_issue/src# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon Dec 4 10:03:25 UTC 2023
> Python Version: 3.11.11 (main, Feb 4 2025, 04:55:09) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.34
> langsmith: 0.3.7
> langgraph_api: 0.0.22
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> cryptography: 44.0.0
> httpx: 0.28.1
> jsonpatch<2.0,>=1.33: Installed. No version info available.
> jsonschema-rs: 0.29.1
> langgraph: 0.2.70
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> langsmith<0.4,>=0.1.125: Installed. No version info available.
> orjson: 3.10.15
> packaging<25,>=23.2: Installed. No version info available.
> pydantic: 2.10.6
> pydantic<3.0.0,>=2.5.2;: Installed. No version info available.
> pydantic<3.0.0,>=2.7.4;: Installed. No version info available.
> pyjwt: 2.10.1
> pytest: Installed. No version info available.
> PyYAML>=5.3: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: Installed. No version info available.
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 25.1.0
> tenacity: 9.0.0
> tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
> typing-extensions>=4.7: Installed. No version info available.
> uvicorn: 0.34.0
> watchfiles: Installed. No version info available.
> zstandard: 0.23.0
|
{
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3366/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3366/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3365
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3365/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3365/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3365/events
|
https://github.com/langchain-ai/langgraph/issues/3365
| 2,840,722,513
|
I_kwDOKFU0lc6pUfxR
| 3,365
|
Segment the documentation into legacy content vs modern best practices
|
{
"login": "jtayl222",
"id": 6443533,
"node_id": "MDQ6VXNlcjY0NDM1MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6443533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jtayl222",
"html_url": "https://github.com/jtayl222",
"followers_url": "https://api.github.com/users/jtayl222/followers",
"following_url": "https://api.github.com/users/jtayl222/following{/other_user}",
"gists_url": "https://api.github.com/users/jtayl222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jtayl222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtayl222/subscriptions",
"organizations_url": "https://api.github.com/users/jtayl222/orgs",
"repos_url": "https://api.github.com/users/jtayl222/repos",
"events_url": "https://api.github.com/users/jtayl222/events{/privacy}",
"received_events_url": "https://api.github.com/users/jtayl222/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-02-09T14:38:50
| 2025-02-21T17:38:41
| 2025-02-19T21:44:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3363
<div type='discussions-op-text'>
<sup>Originally posted by **jtayl222** February 9, 2025</sup>
**References:**
- 📖 [LangGraph’s StateGraph and Conversation History](https://jeftaylo.medium.com/why-you-should-use-stategraph-for-structured-chatbot-workflows-27162f79faa5)
- 📖 [Why You Shouldn't Use @tool in StateGraph Workflows](https://medium.com/@jeftaylo/why-you-shouldnt-use-tool-in-langgraphs-stategraph-workflows-4efc38e4d203)
**Problem:**
Given the evolution of StateGraph and best practices, LangGraph has moved away from older, less structured methodologies. However, if the documentation and tutorials still mix legacy approaches with current best practices, it could create confusion for both new and existing users.
**What is a reasonable plan to fix this problem?**
A structured approach would involve segmenting the documentation into legacy content and modern best practices. Here’s how we could do it:
1. Categorizing Existing Content
Identify and label older documentation that references legacy approaches, such as:
Agent-based memory (e.g., ConversationBufferMemory)
Using @tool in StateGraph workflows
Non-deterministic execution patterns
Separate modern best practices, such as:
Explicit state transitions
StateGraph-first workflows
Avoiding @tool and implicit memory storage
State management via reducers and structured state dictionaries
2. Updating the Official Documentation
Introduce a clear distinction in the LangGraph documentation:
✅ Best Practices Section → Features modern, structured approaches using StateGraph.
❌ Legacy Section (with disclaimers) → Outlines older methods that are no longer recommended.
Add clear notes to older methods indicating they are legacy approaches with links to updated best practices.
3. Improving Community Awareness
Documentation summarizing:
The issues with mixed documentation.
Why StateGraph-first workflows are now the recommended approach.
A comparison table showing "Old vs. New" best practices.
**Conclusion**
By implementing a structured plan to distinguish legacy from best practices, we can improve clarity, reduce confusion, and help new developers adopt StateGraph workflows correctly. </div>
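The "explicit state transitions" style the plan advocates can be illustrated without LangGraph at all. A library-free sketch (node and key names are purely illustrative): each node is a pure function from state to a partial update, and routing is a deterministic function of the state — no hidden agent memory.

```python
from typing import Callable, Dict

State = Dict[str, object]

def generate(state: State) -> State:
    # Node: returns only the keys it updates.
    return {"draft": "hello"}

def review(state: State) -> State:
    return {"approved": state.get("draft") == "hello"}

def route(state: State) -> str:
    # Explicit, deterministic transition based only on the state.
    if "approved" in state:
        return "__end__"
    if "draft" in state:
        return "review"
    return "generate"

NODES: Dict[str, Callable[[State], State]] = {"generate": generate, "review": review}

def run(state: State) -> State:
    # Merge each node's partial update into the state until routing ends.
    while (node := route(state)) != "__end__":
        state = {**state, **NODES[node](state)}
    return state

final = run({})
print(final)  # {'draft': 'hello', 'approved': True}
```

Because every transition is a named, testable function, the "old vs. new" comparison table the plan proposes can point at exactly this kind of structure.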
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3365/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3362
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3362/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3362/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3362/events
|
https://github.com/langchain-ai/langgraph/issues/3362
| 2,840,588,294
|
I_kwDOKFU0lc6pT_AG
| 3,362
|
When using subgraphs and Command, the output of the final node in the subgraph is not displayed
|
{
"login": "ren8k",
"id": 49476274,
"node_id": "MDQ6VXNlcjQ5NDc2Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/49476274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ren8k",
"html_url": "https://github.com/ren8k",
"followers_url": "https://api.github.com/users/ren8k/followers",
"following_url": "https://api.github.com/users/ren8k/following{/other_user}",
"gists_url": "https://api.github.com/users/ren8k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ren8k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ren8k/subscriptions",
"organizations_url": "https://api.github.com/users/ren8k/orgs",
"repos_url": "https://api.github.com/users/ren8k/repos",
"events_url": "https://api.github.com/users/ren8k/events{/privacy}",
"received_events_url": "https://api.github.com/users/ren8k/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6658011276,
"node_id": "LA_kwDOKFU0lc8AAAABjNkwjA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/investigate",
"name": "investigate",
"color": "375C45",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
null
| 2025-02-09T11:12:58
| 2025-02-13T17:17:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from typing import Annotated, TypedDict
from typing_extensions import Literal
from langgraph.graph import END, StateGraph
from langgraph.types import Command
def my_reducer(a: str, b: str | None) -> str:
if b is not None:
return b
return a
class State(TypedDict):
# node_name: Annotated[list[str], add]
node_name: Annotated[str, my_reducer]
foo: str
def subgraph_node_1(state: State) -> Command[Literal["subgraph_node_2"]]:
return Command(
goto="subgraph_node_2",
update={
"node_name": "subgraph_node_1",
"foo": "Update at subgraph_node_1!",
},
)
def subgraph_node_2(state: State) -> Command:
return Command(
goto="node_3",
update={"node_name": "subgraph_node_2"},
graph=Command.PARENT,
)
subgraph_builder = StateGraph(State)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.set_entry_point("subgraph_node_1")
subgraph = subgraph_builder.compile()
# Define main graph
def node_1(state: State) -> Command[Literal["node_2"]]:
return Command(
goto="node_2",
update={"node_name": "node_1"},
)
def node_3(state: State) -> Command[Literal["__end__"]]:
return Command(
goto=END,
update={"node_name": "node_3"},
)
main_builder = StateGraph(State)
main_builder.add_node("node_1", node_1)
main_builder.add_node("node_2", subgraph)
main_builder.add_node("node_3", node_3)
main_builder.set_entry_point("node_1")
main_graph = main_builder.compile()
# Build subgraph
with open("graph_use_command.md", "w") as file:
file.write(f"\n{main_graph.get_graph(xray=1).draw_mermaid()}")
initial = {"node_name": "__start__"}
for chunk in main_graph.stream(initial, stream_mode="values", subgraphs=True):
print(chunk)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
This issue is related to [#3115](https://github.com/langchain-ai/langgraph/issues/3115).
#### Problem
When using Command in both the parent graph and the subgraph, running the stream method does not display the output of the final node in the subgraph.
Below is a diagram of the graph from the example code:
```mermaid
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
__start__([<p>__start__</p>]):::first
node_1(node_1)
node_2_subgraph_node_1(subgraph_node_1)
node_2_subgraph_node_2(subgraph_node_2)
node_3(node_3)
__end__([<p>__end__</p>]):::last
__start__ --> node_1;
node_1 --> node_2_subgraph_node_1;
node_2_subgraph_node_2 --> node_3;
node_3 --> __end__;
subgraph node_2
node_2_subgraph_node_1 --> node_2_subgraph_node_2;
end
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```
Below is the output from the example code. Notice that the result for the subgraph node `subgraph_node_2` is not displayed:
```
((), {'node_name': '__start__'})
((), {'node_name': 'node_1'})
(('node_2:5198e6fe-5d96-32e1-6f68-ce104abdea04',), {'node_name': 'node_1'})
(('node_2:5198e6fe-5d96-32e1-6f68-ce104abdea04',), {'node_name': 'subgraph_node_1', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'subgraph_node_2', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'node_3', 'foo': 'Update at subgraph_node_1!'})
```
#### Expected Behavior
It is expected that the output of the final node in the subgraph is displayed.
Note that when using add_edge instead of Command, the expected behavior is achieved. Below is the example code rewritten using add_edge:
````python
from typing import TypedDict
from langgraph.graph import END, START, StateGraph
class State(TypedDict):
node_name: str
foo: str
def subgraph_node_1(state: State):
return {
"node_name": "subgraph_node_1",
"foo": "Update at subgraph_node_1!",
}
def subgraph_node_2(state: State):
return {"node_name": "subgraph_node_2"}
subgraph_builder = StateGraph(State)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph_builder.add_edge("subgraph_node_2", END)
subgraph = subgraph_builder.compile()
# Define main graph
def node_1(state: State):
return {"node_name": "node_1"}
def node_3(state: State):
return {"node_name": "node_3"}
main_builder = StateGraph(State)
main_builder.add_node("node_1", node_1)
main_builder.add_node("node_2", subgraph)
main_builder.add_node("node_3", node_3)
main_builder.add_edge(START, "node_1")
main_builder.add_edge("node_1", "node_2")
main_builder.add_edge("node_2", "node_3")
main_builder.add_edge("node_3", END)
main_graph = main_builder.compile()
# Build subgraph
with open("graph_use_edge.md", "w") as file:
file.write(f"```mermaid\n{main_graph.get_graph(xray=1).draw_mermaid()}```")
initial = {"node_name": "__start__"}
for chunk in main_graph.stream(initial, stream_mode="values", subgraphs=True):
print(chunk)
````
Below is the output, where the result for the subgraph node `subgraph_node_2` is correctly displayed:
```
((), {'node_name': '__start__'})
((), {'node_name': 'node_1'})
(('node_2:0cf73922-e372-fc29-ef53-ba1c568bb221',), {'node_name': 'node_1'})
(('node_2:0cf73922-e372-fc29-ef53-ba1c568bb221',), {'node_name': 'subgraph_node_1', 'foo': 'Update at subgraph_node_1!'})
(('node_2:0cf73922-e372-fc29-ef53-ba1c568bb221',), {'node_name': 'subgraph_node_2', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'subgraph_node_2', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'node_3', 'foo': 'Update at subgraph_node_1!'})
```
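The difference between the two traces comes down to one missing namespaced chunk. Reducing each trace to `(namespace depth, node_name)` pairs — transcribed from the two outputs above, with depth taken as the length of the namespace tuple — makes the gap explicit:

```python
# Transcribed from the two stream outputs above.
# The add_edge run emits subgraph_node_2 at depth 1 (inside the subgraph
# namespace); the Command run only emits it at depth 0, after control has
# already jumped to the parent graph.
command_trace = [(0, "__start__"), (0, "node_1"), (1, "node_1"),
                 (1, "subgraph_node_1"), (0, "subgraph_node_2"), (0, "node_3")]
edge_trace = [(0, "__start__"), (0, "node_1"), (1, "node_1"),
              (1, "subgraph_node_1"), (1, "subgraph_node_2"),
              (0, "subgraph_node_2"), (0, "node_3")]

missing = set(edge_trace) - set(command_trace)
print(missing)  # {(1, 'subgraph_node_2')}
```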
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #44~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Jun 18 14:36:16 UTC 2
> Python Version: 3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:27:27) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.28
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.136
> langchain_aws: 0.2.10
> langchain_fireworks: 0.2.1
> langchain_openai: 0.2.11
> langchain_text_splitters: 0.3.0
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> boto3: 1.35.90
> dataclasses-json: 0.6.7
> fireworks-ai: 0.15.7
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.56.2
> orjson: 3.10.9
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.34
> tenacity: 8.2.3
> tiktoken: 0.8.0
> typing-extensions: 4.11.0
```
#### LangGraph Version
```
pip list | grep langgraph
> langgraph 0.2.70
> langgraph-checkpoint 2.0.12
> langgraph-sdk 0.1.48
```
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3362/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3362/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3344
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3344/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3344/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3344/events
|
https://github.com/langchain-ai/langgraph/issues/3344
| 2,837,107,055
|
I_kwDOKFU0lc6pGtFv
| 3,344
|
State Transitions Not working as Expected in Agent Setup
|
{
"login": "jeffersonRosman",
"id": 129314071,
"node_id": "U_kgDOB7UtFw",
"avatar_url": "https://avatars.githubusercontent.com/u/129314071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffersonRosman",
"html_url": "https://github.com/jeffersonRosman",
"followers_url": "https://api.github.com/users/jeffersonRosman/followers",
"following_url": "https://api.github.com/users/jeffersonRosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffersonRosman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffersonRosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffersonRosman/subscriptions",
"organizations_url": "https://api.github.com/users/jeffersonRosman/orgs",
"repos_url": "https://api.github.com/users/jeffersonRosman/repos",
"events_url": "https://api.github.com/users/jeffersonRosman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffersonRosman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-02-07T02:45:27
| 2025-02-07T15:56:48
| 2025-02-07T15:56:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import logging
from langgraph.graph import MessagesState, StateGraph
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage
# Define the AgentState class
class AgentState(MessagesState):
error: bool
error_msg: str
code: str
external_libs: bool
# Define the nodes
def code_generator(state: AgentState):
prompt = "Can you provide me with Python code for a function that takes a string as input and returns True if the string is a palindrome and False otherwise? The function should not use any libraries."
msg = [HumanMessage(content=prompt)]
function_str = "def is_palindrome(s): return s == s[::-1]"
return {"messages": [function_str], "code": function_str}
def validate_code(state: AgentState):
function = state["code"]
try:
exec(function)
return {"error": False}
except Exception as e:
return {"error": True, "error_msg": str(e)}
def validate_external_libs(state: AgentState):
return {"external_libs": False}
def install_external_libs(state: AgentState):
return {"external_libs": False}
def correct_python_erros(state: AgentState):
corrected_function_str = "def is_palindrome(s): return s == s[::-1]"
return {"messages": [corrected_function_str], "code": corrected_function_str, "error": False}
def execute_python_function(state: AgentState):
function = state["code"]
exec(function)
return "Done"
# Define the edges
def check_code(state: AgentState):
error = state.get("error", None)
if error:
return "correct_python_erros"
else:
return "validate_external_libs"
def check_external_libs(state: AgentState):
external_libs = state.get("external_libs", False)
if external_libs:
return "install_external_libs"
else:
return "execute_python_function"
# Initialize the agent and pass the schema
agent_init = StateGraph(AgentState)
agent_init.add_node("code_generator", code_generator)
agent_init.add_node("validate_code", validate_code)
agent_init.add_node("validate_external_libs", validate_external_libs)
agent_init.add_node("install_external_libs", install_external_libs)
agent_init.add_node("correct_python_erros", correct_python_erros)
agent_init.add_node("execute_python_function", execute_python_function)
# Add the edges
from langgraph.graph import START, END
agent_init.add_edge(START, "code_generator")
agent_init.add_edge("code_generator", "validate_code")
agent_init.add_conditional_edges("validate_code", check_code)
agent_init.add_edge("correct_python_erros", "validate_code")
agent_init.add_conditional_edges("validate_external_libs", check_external_libs)
agent_init.add_edge("install_external_libs", "execute_python_function")
agent_init.add_edge("execute_python_function", END)
# Create the agent
memory = MemorySaver()
agent = agent_init.compile(checkpointer=memory)
# Generate and display the diagram
from IPython.display import Image, display
display(Image(agent.get_graph().draw_mermaid_png()))
```
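The two conditional-edge functions can be exercised directly on sample states to confirm the transitions the diagram should show, independent of how Mermaid renders them. A standalone check (the functions are re-stated compactly here with the same behavior as above):

```python
# The conditional-edge functions from the example, exercised on sample states.
def check_code(state: dict) -> str:
    return "correct_python_erros" if state.get("error") else "validate_external_libs"

def check_external_libs(state: dict) -> str:
    return "install_external_libs" if state.get("external_libs", False) \
        else "execute_python_function"

print(check_code({"error": True}))    # correct_python_erros
print(check_code({"error": False}))   # validate_external_libs
print(check_external_libs({}))        # execute_python_function
```

Any mismatch between these return values and the rendered edges points at the drawing layer rather than the routing logic.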
### Error Message and Stack Trace (if applicable)
```shell
Incorrect diagram
Please check the diagram in the description
```
### Description
State Transitions Not Displayed Correctly in Mermaid Diagram (see the diagram at the end)
# Description:
When generating a mermaid diagram using agent.get_graph().draw_mermaid_png() in LangGraph, the state transitions are not displayed correctly. The transitions do not reflect the defined conditional logic between nodes.
# Steps to Reproduce:
Define an AgentState and nodes with transition logic as shown in the code snippet below.
Add nodes and conditional edges to the agent.
Generate the mermaid diagram with agent.get_graph().draw_mermaid_png().
Observe the incorrect relationships in the generated diagram.
# Diagram
<img width="849" alt="Image" src="https://github.com/user-attachments/assets/898d21c9-4ece-4eb5-9b62-570ff37c5d4b" />
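To inspect the declared transitions as plain text rather than a rendered PNG, the same edge list can be dumped as raw Mermaid source. A minimal, library-free sketch (the `edges_to_mermaid` helper is hypothetical; node names and edges are taken from the example code):

```python
# Minimal Mermaid flowchart text built from the example's edge list,
# so the declared transitions can be checked as plain text.
def edges_to_mermaid(edges: list[tuple[str, str]]) -> str:
    lines = ["graph TD;"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst};")
    return "\n".join(lines)

edges = [
    ("__start__", "code_generator"),
    ("code_generator", "validate_code"),
    ("validate_code", "correct_python_erros"),
    ("validate_code", "validate_external_libs"),
    ("correct_python_erros", "validate_code"),
    ("validate_external_libs", "install_external_libs"),
    ("validate_external_libs", "execute_python_function"),
    ("install_external_libs", "execute_python_function"),
    ("execute_python_function", "__end__"),
]
print(edges_to_mermaid(edges))
```

Comparing this text against the PNG makes it easy to see which edges the renderer drops or misplaces.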
### System Info
# Code
```python
import logging
from langgraph.graph import MessagesState, StateGraph
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage
# Define the AgentState class
class AgentState(MessagesState):
error: bool
error_msg: str
code: str
external_libs: bool
# Define the nodes
def code_generator(state: AgentState):
prompt = "Can you provide me with Python code for a function that takes a string as input and returns True if the string is a palindrome and False otherwise? The function should not use any libraries."
msg = [HumanMessage(content=prompt)]
function_str = "def is_palindrome(s): return s == s[::-1]"
return {"messages": [function_str], "code": function_str}
def validate_code(state: AgentState):
function = state["code"]
try:
exec(function)
return {"error": False}
except Exception as e:
return {"error": True, "error_msg": str(e)}
def validate_external_libs(state: AgentState):
return {"external_libs": False}
def install_external_libs(state: AgentState):
return {"external_libs": False}
def correct_python_erros(state: AgentState):
corrected_function_str = "def is_palindrome(s): return s == s[::-1]"
return {"messages": [corrected_function_str], "code": corrected_function_str, "error": False}
def execute_python_function(state: AgentState):
function = state["code"]
exec(function)
return "Done"
# Define the edges
def check_code(state: AgentState):
error = state.get("error", None)
if error:
return "correct_python_erros"
else:
return "validate_external_libs"
def check_external_libs(state: AgentState):
external_libs = state.get("external_libs", False)
if external_libs:
return "install_external_libs"
else:
return "execute_python_function"
# Initialize the agent and pass the schema
agent_init = StateGraph(AgentState)
agent_init.add_node("code_generator", code_generator)
agent_init.add_node("validate_code", validate_code)
agent_init.add_node("validate_external_libs", validate_external_libs)
agent_init.add_node("install_external_libs", install_external_libs)
agent_init.add_node("correct_python_erros", correct_python_erros)
agent_init.add_node("execute_python_function", execute_python_function)
# Add the edges
from langgraph.graph import START, END
agent_init.add_edge(START, "code_generator")
agent_init.add_edge("code_generator", "validate_code")
agent_init.add_conditional_edges("validate_code", check_code)
agent_init.add_edge("correct_python_erros", "validate_code")
agent_init.add_conditional_edges("validate_external_libs", check_external_libs)
agent_init.add_edge("install_external_libs", "execute_python_function")
agent_init.add_edge("execute_python_function", END)
# Create the agent
memory = MemorySaver()
agent = agent_init.compile(checkpointer=memory)
# Generate and display the diagram
from IPython.display import Image, display
display(Image(agent.get_graph().draw_mermaid_png()))
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3344/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3343
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3343/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3343/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3343/events
|
https://github.com/langchain-ai/langgraph/issues/3343
| 2,837,107,040
|
I_kwDOKFU0lc6pGtFg
| 3,343
|
State Transitions Not working as Expected in Agent Setup
|
{
"login": "jeffersonRosman",
"id": 129314071,
"node_id": "U_kgDOB7UtFw",
"avatar_url": "https://avatars.githubusercontent.com/u/129314071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffersonRosman",
"html_url": "https://github.com/jeffersonRosman",
"followers_url": "https://api.github.com/users/jeffersonRosman/followers",
"following_url": "https://api.github.com/users/jeffersonRosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffersonRosman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffersonRosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffersonRosman/subscriptions",
"organizations_url": "https://api.github.com/users/jeffersonRosman/orgs",
"repos_url": "https://api.github.com/users/jeffersonRosman/repos",
"events_url": "https://api.github.com/users/jeffersonRosman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffersonRosman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
| 2025-02-07T02:45:26
| 2025-02-07T20:04:28
| 2025-02-07T19:15:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import logging
from langgraph.graph import MessagesState, StateGraph
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.messages import HumanMessage

# Define the AgentState class
class AgentState(MessagesState):
    error: bool
    error_msg: str
    code: str
    external_libs: bool

# Define the nodes
def code_generator(state: AgentState):
    prompt = "Can you provide me with Python code for a function that takes a string as input and returns True if the string is a palindrome and False otherwise? The function should not use any libraries."
    msg = [HumanMessage(content=prompt)]
    function_str = "def is_palindrome(s): return s == s[::-1]"
    return {"messages": [function_str], "code": function_str}

def validate_code(state: AgentState):
    function = state["code"]
    try:
        exec(function)
        return {"error": False}
    except Exception as e:
        return {"error": True, "error_msg": str(e)}

def validate_external_libs(state: AgentState):
    return {"external_libs": False}

def install_external_libs(state: AgentState):
    return {"external_libs": False}

def correct_python_erros(state: AgentState):
    corrected_function_str = "def is_palindrome(s): return s == s[::-1]"
    return {"messages": [corrected_function_str], "code": corrected_function_str, "error": False}

def execute_python_function(state: AgentState):
    function = state["code"]
    exec(function)
    return {}  # return a (possibly empty) state update rather than a bare string

# Define the edges
def check_code(state: AgentState):
    error = state.get("error", None)
    if error:
        return "correct_python_erros"
    else:
        return "validate_external_libs"

def check_external_libs(state: AgentState):
    external_libs = state.get("external_libs", False)
    if external_libs:
        return "install_external_libs"
    else:
        return "execute_python_function"

# Initialize the agent and pass the schema
agent_init = StateGraph(AgentState)
agent_init.add_node("code_generator", code_generator)
agent_init.add_node("validate_code", validate_code)
agent_init.add_node("validate_external_libs", validate_external_libs)
agent_init.add_node("install_external_libs", install_external_libs)
agent_init.add_node("correct_python_erros", correct_python_erros)
agent_init.add_node("execute_python_function", execute_python_function)

# Add the edges
from langgraph.graph import START, END
agent_init.add_edge(START, "code_generator")
agent_init.add_edge("code_generator", "validate_code")
agent_init.add_conditional_edges("validate_code", check_code)
agent_init.add_edge("correct_python_erros", "validate_code")
agent_init.add_conditional_edges("validate_external_libs", check_external_libs)
agent_init.add_edge("install_external_libs", "execute_python_function")
agent_init.add_edge("execute_python_function", END)

# Create the agent
memory = MemorySaver()
agent = agent_init.compile(checkpointer=memory)

# Generate and display the diagram
from IPython.display import Image, display
display(Image(agent.get_graph().draw_mermaid_png()))
```
### Error Message and Stack Trace (if applicable)
```shell
Incorrect diagram
Please check the diagram in the description
```
### Description
State Transitions Not Displayed Correctly in Mermaid Diagram (see the diagram at the end)
# Description:
When generating a mermaid diagram using agent.get_graph().draw_mermaid_png() in LangGraph, the state transitions are not displayed correctly. The transitions do not reflect the defined conditional logic between nodes.
# Steps to Reproduce:
Define an AgentState and nodes with transition logic as shown in the code snippet below.
Add nodes and conditional edges to the agent.
Generate the mermaid diagram with agent.get_graph().draw_mermaid_png().
Observe the incorrect relationships in the generated diagram.
# Diagram
<img width="849" alt="Image" src="https://github.com/user-attachments/assets/898d21c9-4ece-4eb5-9b62-570ff37c5d4b" />
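A hedged sketch of one possible cause (an assumption based on how conditional edges are typically rendered, not a confirmed diagnosis): when `add_conditional_edges` receives only a callable, the renderer cannot infer the possible destinations and may draw dashed edges from the conditional node to every node. Declaring the possible targets explicitly usually restores the expected layout:

```python
# Sketch: list the possible destinations so only the real transitions are drawn.
# Node names match the graph defined above.
agent_init.add_conditional_edges(
    "validate_code",
    check_code,
    ["correct_python_erros", "validate_external_libs"],
)
agent_init.add_conditional_edges(
    "validate_external_libs",
    check_external_libs,
    ["install_external_libs", "execute_python_function"],
)
```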
### System Info
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3343/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3329
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3329/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3329/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3329/events
|
https://github.com/langchain-ai/langgraph/issues/3329
| 2,834,591,593
|
I_kwDOKFU0lc6o9G9p
| 3,329
|
Parallel edges not working properly while using map reduce with Send class
|
{
"login": "sriakhil25",
"id": 165472988,
"node_id": "U_kgDOCdzq3A",
"avatar_url": "https://avatars.githubusercontent.com/u/165472988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sriakhil25",
"html_url": "https://github.com/sriakhil25",
"followers_url": "https://api.github.com/users/sriakhil25/followers",
"following_url": "https://api.github.com/users/sriakhil25/following{/other_user}",
"gists_url": "https://api.github.com/users/sriakhil25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sriakhil25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sriakhil25/subscriptions",
"organizations_url": "https://api.github.com/users/sriakhil25/orgs",
"repos_url": "https://api.github.com/users/sriakhil25/repos",
"events_url": "https://api.github.com/users/sriakhil25/events{/privacy}",
"received_events_url": "https://api.github.com/users/sriakhil25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
| 2025-02-06T05:35:26
| 2025-02-12T08:31:05
| 2025-02-12T08:31:05
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import operator
import time
from typing import Annotated, List

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.constants import Send
from langchain_core.prompts.prompt import PromptTemplate
from langchain_ollama.llms import OllamaLLM

class AgentState(TypedDict):
    documents: List[str]
    identified_alert_message: Annotated[List[str], operator.add]
    extracted_timestamp: Annotated[List[str], operator.add]
    document: str

llm_mistral = OllamaLLM(model="mistral", temperature=0)

alert_identification_prompt = """You are a helpful assistant, say hi to user: {text}"""

def alert_identification(state: AgentState):
    doc = state["document"]
    print("processing document number in alerts ======", doc)
    prompt = PromptTemplate.from_template(alert_identification_prompt)
    chain = prompt | llm_mistral
    response = ''
    try:
        response = chain.invoke({'text': doc})
        time.sleep(15)
        print("Identified alert is as follows ===", response)
    except Exception as e:  # Catch all exceptions and print details
        print("Error observed:")
        print(f"Type: {type(e).__name__}")
        print(f"Message: {str(e)}")
    final_response = response
    return {'identified_alert_message': [final_response]}

timestamp_prompt = """You are a helpful assistant, say "hello there" to user: {text}"""

def timestamp_extraction(state: AgentState):
    doc = state["document"]
    print("processing document number in root_cause ======", doc)
    prompt = PromptTemplate.from_template(timestamp_prompt)
    chain = prompt | llm_mistral
    response = ''
    try:
        response = chain.invoke({'text': doc})
        time.sleep(15)
        print("Root Cause is as follows ===", response)
    except Exception as e:  # Catch all exceptions and print details
        print("Error observed:")
        print(f"Type: {type(e).__name__}")
        print(f"Message: {str(e)}")
    final_response = response
    return {'extracted_timestamp': [final_response]}

def continue_to_documents_alert(state: AgentState):
    return [Send("alert_identification", {"document": d}) for d in state["documents"]]

def continue_to_documents_root_cause(state: AgentState):
    return [Send("timestamp_extraction", {"document": d}) for d in state["documents"]]

workflow = StateGraph(AgentState)
workflow.add_node("alert_identification", alert_identification)
workflow.add_node("timestamp_extraction", timestamp_extraction)
workflow.add_conditional_edges(START, continue_to_documents_alert, ["alert_identification"])
workflow.add_conditional_edges(START, continue_to_documents_root_cause, ["timestamp_extraction"])
workflow.add_edge("alert_identification", END)
workflow.add_edge("timestamp_extraction", END)
app = workflow.compile()

from IPython.display import display, Image
display(Image(app.get_graph().draw_mermaid_png()))

documents = ['Doc 1', 'Doc 2', 'Doc 3', 'Doc 4', 'Doc 5', 'Doc 6', 'Doc 7', 'Doc 8', 'Doc 9', 'Doc 10', 'Doc 11',
             'Doc 12', 'Doc 13', 'Doc 14', 'Doc 15', 'Doc 16', 'Doc 17', 'Doc 18', 'Doc 19', 'Doc 20', 'Doc 21']
inputs = {'documents': documents[0:]}
result = app.invoke(inputs)
```
### Error Message and Stack Trace (if applicable)
```shell
Output is as follows -
processing document number in alerts ====== Doc 1
processing document number in alerts ====== Doc 2
processing document number in alerts ====== Doc 3
processing document number in alerts ====== Doc 4
processing document number in alerts ====== Doc 5
processing document number in alerts ====== Doc 6
processing document number in alerts ====== Doc 8
processing document number in alerts ====== Doc 7
processing document number in alerts ====== Doc 9
processing document number in alerts ====== Doc 10
processing document number in alerts ====== Doc 11
processing document number in alerts ====== Doc 12
processing document number in alerts ====== Doc 13
processing document number in alerts ====== Doc 14
processing document number in alerts ====== Doc 15
processing document number in alerts ====== Doc 16
Identified alert is as follows === Hello, Doc 12! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in alerts ====== Doc 17
Identified alert is as follows === Hello, Doc 4! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in alerts ====== Doc 18
Identified alert is as follows === Hello, Doc 14! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in alerts ====== Doc 19
Identified alert is as follows === Hello, Doc 16! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in alerts ====== Doc 20
Identified alert is as follows ===Identified alert is as follows === Hello, Doc 1! How can I assist you today? I'm here to help with any questions or tasks you might have. Let's get started!
Hello, Doc 9! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 1
processing document number in alerts ====== Doc 21
Identified alert is as follows === Hello, Doc 2! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 2
Identified alert is as follows === Hello, Doc 7! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 3
Identified alert is as follows ===Identified alert is as follows === Hello, Doc 5! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 4
Hello Doc 10! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 5
Identified alert is as follows === Hello, Doc 3! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 6
Identified alert is as follows === Hello, Doc 8! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 7
Identified alert is as follows ===Identified alert is as follows === Hello, Doc 11! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
Hello, Doc 13! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 9
processing document number in root_cause ====== Doc 8
Identified alert is as follows === Hello, Doc 6! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 10
Identified alert is as follows === Hello, Doc 15! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 11
Identified alert is as follows === Hello, Doc 17! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 12
Identified alert is as follows === Hello Doc 20! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 13
Identified alert is as follows === Hello, Doc 19! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
Identified alert is as follows === Hello, Doc 18! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 14
processing document number in root_cause ====== Doc 15
Root Cause is as follows ===Root Cause is as follows === Hello there, Doc 3! How can I assist you today?
processing document number in root_cause ====== Doc 16
Hello there, Doc 2! How can I assist you today?
processing document number in root_cause ====== Doc 17
Identified alert is as follows === Hello, Doc 21! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 18
Root Cause is as follows === Hello there, Doc 1! How can I assist you today? Let's make this a productive and enjoyable interaction. What would you like to discuss or learn about?
processing document number in root_cause ====== Doc 19
Root Cause is as follows === Hello there, Doc 4! How can I assist you today?
processing document number in root_cause ====== Doc 20
Root Cause is as follows === Hello there, Doc 6! How can I assist you today?
processing document number in root_cause ====== Doc 21
Root Cause is as follows === Hello there, Doc 5! How can I assist you today?
Root Cause is as follows === Hello there, Doc 7! How can I assist you today?
Root Cause is as follows ===Root Cause is as follows === Hello there, Doc 8! How can I assist you today?
Hello there, Doc 9! How can I assist you today?
Root Cause is as follows === Hello there, Doc 10! How can I assist you today?
Root Cause is as follows === Hello there, Doc 11! How can I assist you today?
Root Cause is as follows === Hello there, Doc 12! How can I assist you today?
Root Cause is as follows === Hello there, Doc 15! How can I assist you today?
Root Cause is as follows === Hello there, Doc 13! How can I assist you today?
Root Cause is as follows === Hello there, Doc 14! How can I assist you today?
Root Cause is as follows === Hello there, Doc 17! How can I assist you today?
Root Cause is as follows === Hello there, Doc 16! How can I assist you today?
Root Cause is as follows === Hello there, Doc 18! How can I assist you today?
Root Cause is as follows === Hello there, Doc 19! How can I assist you today?
Root Cause is as follows === Hello there, Doc 20! How can I assist you today?
Root Cause is as follows === Hello there, Doc 21! How can I assist you today?
```
### Description
I created two parallel nodes: 1) alert_identification and 2) timestamp_extraction. When using the Send API these nodes should run in parallel, but I noticed that alert_identification runs and finishes first, and only then does the program run timestamp_extraction.
Also, alert_identification receives 21 values, so I expected all 21 map-reduce branches to execute in parallel, but only 16 run concurrently.
There is no runtime error, but this is an opportunity to improve performance.
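For what it's worth, here is a minimal stdlib sketch of one plausible explanation for the cap of 16 (an assumption, not a confirmed diagnosis of LangGraph internals): Python's default `ThreadPoolExecutor` limits workers to `min(32, os.cpu_count() + 4)`, which equals 16 on a 12-core machine. If your LangGraph version honors the LangChain `RunnableConfig` setting, passing `config={"max_concurrency": 21}` to `app.invoke` may lift the limit.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Default worker cap used by ThreadPoolExecutor when max_workers is omitted.
default_cap = min(32, (os.cpu_count() or 1) + 4)

# Raising max_workers explicitly lets all 21 tasks start together.
docs = [f"Doc {i}" for i in range(1, 22)]
with ThreadPoolExecutor(max_workers=len(docs)) as pool:
    results = list(pool.map(lambda d: f"processed {d}", docs))

print(len(results))  # 21
```

This only demonstrates the executor's default cap; whether it is the actual bottleneck in your run depends on the LangGraph version and the Ollama server's own request queue.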
### System Info
Issue in latest version of Langgraph
|
{
"login": "sriakhil25",
"id": 165472988,
"node_id": "U_kgDOCdzq3A",
"avatar_url": "https://avatars.githubusercontent.com/u/165472988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sriakhil25",
"html_url": "https://github.com/sriakhil25",
"followers_url": "https://api.github.com/users/sriakhil25/followers",
"following_url": "https://api.github.com/users/sriakhil25/following{/other_user}",
"gists_url": "https://api.github.com/users/sriakhil25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sriakhil25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sriakhil25/subscriptions",
"organizations_url": "https://api.github.com/users/sriakhil25/orgs",
"repos_url": "https://api.github.com/users/sriakhil25/repos",
"events_url": "https://api.github.com/users/sriakhil25/events{/privacy}",
"received_events_url": "https://api.github.com/users/sriakhil25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3329/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3325
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3325/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3325/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3325/events
|
https://github.com/langchain-ai/langgraph/issues/3325
| 2,833,532,507
|
I_kwDOKFU0lc6o5EZb
| 3,325
|
MalformedError('No key could be detected.') When Using BigQuery Tool in LangGraph Cloud Deployment
|
{
"login": "johannescastner",
"id": 138595172,
"node_id": "U_kgDOCELLZA",
"avatar_url": "https://avatars.githubusercontent.com/u/138595172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johannescastner",
"html_url": "https://github.com/johannescastner",
"followers_url": "https://api.github.com/users/johannescastner/followers",
"following_url": "https://api.github.com/users/johannescastner/following{/other_user}",
"gists_url": "https://api.github.com/users/johannescastner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johannescastner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johannescastner/subscriptions",
"organizations_url": "https://api.github.com/users/johannescastner/orgs",
"repos_url": "https://api.github.com/users/johannescastner/repos",
"events_url": "https://api.github.com/users/johannescastner/events{/privacy}",
"received_events_url": "https://api.github.com/users/johannescastner/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
| 2025-02-05T17:20:51
| 2025-02-06T19:10:30
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import os
import json
import asyncio
from typing import Type
import logging
# Core dependencies
from pydantic import BaseModel, Field
# Google Cloud
from google.cloud import bigquery
from google.oauth2 import service_account
# LangChain & LangGraph
from langchain_openai import ChatOpenAI
from langchain_core.tools import StructuredTool
from langgraph.prebuilt import create_react_agent
# Configure logging
logging.basicConfig(level=logging.INFO)
# CONFIGURATION
PROJECT_ID = os.getenv("PROJECT_ID", "datawarehouse-447422")
RAW_DATASET_ID = os.getenv("RAW_DATASET_ID", "linkedin_raw")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# INPUT SCHEMA
class BigQueryListTablesInput(BaseModel):
dataset_name: str = Field(..., description="Name of the BigQuery dataset to list tables from")
# BIGQUERY CLIENT INITIALIZATION
def get_bigquery_client() -> bigquery.Client:
"""Initialize BigQuery client with proper credentials"""
if creds_json := os.getenv("GOOGLE_CLOUD_CREDENTIALS_JSON"):
logging.info("Using service account credentials from environment variable.")
credentials = service_account.Credentials.from_service_account_info(json.loads(creds_json))
return bigquery.Client(credentials=credentials, project=credentials.project_id)
logging.info("Using default project ID for BigQuery client.")
return bigquery.Client(project=PROJECT_ID)
# TOOL IMPLEMENTATION
async def list_bigquery_tables(dataset_name: str) -> str:
"""List tables in a BigQuery dataset"""
logging.info(f"Received dataset_name: {dataset_name}")
if not dataset_name:
raise ValueError("Missing required input: dataset_name")
try:
logging.info("Starting BigQuery client initialization...")
client = get_bigquery_client()
logging.info(f"BigQuery client initialized successfully with project ID: {client.project}")
logging.info(f"Creating dataset reference for dataset: {dataset_name}")
dataset_ref = client.dataset(dataset_name)
logging.info(f"Dataset reference created: {dataset_ref.path}")
logging.info("Listing tables in the dataset...")
tables = client.list_tables(dataset_ref)
table_ids = ", ".join(table.table_id for table in tables)
logging.info(f"Table IDs: {table_ids}")
return table_ids or "No tables found"
except Exception as e:
logging.error(f"Error listing tables: {e}")
raise
# TOOL REGISTRATION
tools = [
StructuredTool.from_function(
func=list_bigquery_tables,
name="list_bigquery_tables",
description="Lists tables in a BigQuery dataset. Input: JSON object with 'dataset_name'",
args_schema=BigQueryListTablesInput,
coroutine=list_bigquery_tables,
),
]
# AGENT CREATION
def create_agent():
llm = ChatOpenAI(
model_name="gpt-3.5-turbo",
temperature=0,
max_tokens=1200,
openai_api_key=OPENAI_API_KEY
)
return create_react_agent(llm, tools)
# Initialize the agent graph
graph = create_agent()
# Simulate agent flow locally (for testing purposes)
async def simulate_agent_flow():
# Simulate input generation
dataset_name = "linkedin_raw"
# Test the tool
print(await list_bigquery_tables(dataset_name))
# Run simulation locally
if __name__ == "__main__":
asyncio.run(simulate_agent_flow())
```
### Error Message and Stack Trace (if applicable)
```shell
Error listing tables: No key could be detected.
```
### Description
I am encountering a persistent `MalformedError('No key could be detected.')` error when deploying an agent with a BigQuery tool (`list_bigquery_tables`) to LangGraph Cloud. The same code works flawlessly in a local environment, which suggests the issue lies within the LangGraph Cloud deployment or its interaction with external APIs like BigQuery.
**Steps to Reproduce**
Deploy the following minimal code to LangGraph Cloud.
Set the required secrets (GOOGLE_CLOUD_CREDENTIALS_JSON and OPENAI_API_KEY) in the LangGraph Cloud environment.
Trigger the list_bigquery_tables tool by asking the agent to list tables in the linkedin_raw dataset.
**Expected Behavior**
The tool should successfully list all tables in the specified dataset and return their names.
**Actual Behavior**
The tool fails with the following error:
`Error listing tables: No key could be detected.`
**Troubleshooting Steps Taken**
**Validated Secrets:** Confirmed that GOOGLE_CLOUD_CREDENTIALS_JSON and OPENAI_API_KEY are correctly set in LangGraph Cloud.
**Tested Locally:** Verified that the code works locally with the same service account credentials.
**Added Logging:** Enhanced logging to capture the entire execution flow, including project ID, dataset reference, and table listing.
**Checked Permissions:** Ensured the service account has the BigQuery Admin role.
**Hypotheses**
**LangGraph Cloud Restrictions:**
LangGraph Cloud might block or limit outbound HTTP requests to external APIs like BigQuery.
There could be constraints on the size or format of responses returned by tools.
**Agent-Tool Integration:**
The agent might mishandle the tool's response, leading to a malformed output.
There could be a mismatch between the expected and actual response formats.
**Request for Assistance**
Could the maintainers of LangGraph Cloud provide clarification on the following points?
Are there any restrictions on outbound HTTP requests to external APIs like BigQuery?
Are there specific requirements for tool response formats or schemas?
Could this issue be related to the runtime environment or permissions in LangGraph Cloud?
This example is fully self-contained, minimal, and reproducible. It includes all relevant imports, configurations, and logging to help diagnose the issue. Please let me know if further clarification is needed!
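The "No key could be detected." message typically comes from Google's crypto layer when the `private_key` PEM inside the service-account JSON is malformed — a common failure mode when the JSON is pasted into an environment variable and the real newlines become literal `\n` characters. This is an assumption about the root cause, not something confirmed by the logs above; the hypothetical helper below sketches a quick structural check of the JSON before handing it to `service_account.Credentials.from_service_account_info`:

```python
import json


def check_service_account_json(creds_json: str) -> list[str]:
    """Return a list of problems found in a service-account JSON string.

    A malformed `private_key` (e.g. literal backslash-n instead of real
    newlines) is a common cause of "No key could be detected".
    """
    problems = []
    try:
        info = json.loads(creds_json)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    key = info.get("private_key", "")
    if not key.startswith("-----BEGIN PRIVATE KEY-----"):
        problems.append("private_key does not start with a PEM header")
    if "\n" not in key:
        problems.append("private_key has no real newlines (escaped \\n?)")
    for field in ("client_email", "project_id", "token_uri"):
        if field not in info:
            problems.append(f"missing field: {field}")
    return problems


# Deliberately broken example: the key survives json.loads with literal
# backslash-n characters instead of real newlines.
broken = json.dumps({
    "private_key": "-----BEGIN PRIVATE KEY-----\\nabc\\n-----END PRIVATE KEY-----\\n",
    "client_email": "sa@example.iam.gserviceaccount.com",
    "project_id": "demo",
    "token_uri": "https://oauth2.googleapis.com/token",
})
print(check_service_account_json(broken))
```

Running a check like this inside the deployed container (logging only the problem list, never the key itself) would distinguish a mangled secret from a genuine LangGraph Cloud networking restriction.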
### System Info
Python Version: 3.9+
Required Libraries: google-cloud-bigquery, langchain-openai, langchain-core, langgraph
Deployment Environment: LangGraph Cloud
Google Service Account Role: BigQuery Admin
OpenAI Model: gpt-3.5-turbo
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3325/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3324
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3324/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3324/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3324/events
|
https://github.com/langchain-ai/langgraph/issues/3324
| 2,833,302,099
|
I_kwDOKFU0lc6o4MJT
| 3,324
|
Stream Disabling is not working
|
{
"login": "soufiene-slimi",
"id": 10948245,
"node_id": "MDQ6VXNlcjEwOTQ4MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/10948245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soufiene-slimi",
"html_url": "https://github.com/soufiene-slimi",
"followers_url": "https://api.github.com/users/soufiene-slimi/followers",
"following_url": "https://api.github.com/users/soufiene-slimi/following{/other_user}",
"gists_url": "https://api.github.com/users/soufiene-slimi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soufiene-slimi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soufiene-slimi/subscriptions",
"organizations_url": "https://api.github.com/users/soufiene-slimi/orgs",
"repos_url": "https://api.github.com/users/soufiene-slimi/repos",
"events_url": "https://api.github.com/users/soufiene-slimi/events{/privacy}",
"received_events_url": "https://api.github.com/users/soufiene-slimi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-05T15:41:10
| 2025-02-05T21:38:59
| 2025-02-05T20:31:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from langgraph.constants import TAG_NOSTREAM
# Not working
llm_json_mode = ChatOllama(
model="deepseek-r1",
temperature=0,
format="json",
disable_streaming=True
# tags=[TAG_NOSTREAM]
).with_structured_output(
schema=ClassificationOutputStructure,
method="json_schema"
)
# Working
llm_json_mode = ChatOllama(
model="deepseek-r1",
temperature=0,
format="json",
# disable_streaming=True
tags=[TAG_NOSTREAM]
).with_structured_output(
schema=ClassificationOutputStructure,
method="json_schema"
)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
# Using the `disable_streaming=True` (Stream is not disabled)
<img width="723" alt="Image" src="https://github.com/user-attachments/assets/faa63bba-151e-42e4-836d-2cea26f3fc78" />
# Using the `tags=[TAG_NOSTREAM]` (Stream disabled successfully)
> PS: This is not mentioned in the docs
<img width="565" alt="Image" src="https://github.com/user-attachments/assets/9e1b949a-396c-4288-a34e-419693ec1a72" />
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
> Python Version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 14:46:42) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_google_genai: 2.0.9
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.1
> langchain_qdrant: 0.2.0
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> fastembed: Installed. No version info available.
> filetype: 1.2.0
> google-generativeai: 0.8.4
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.61
> langgraph-checkpoint: 2.0.9
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> ollama: 0.4.5
> openai: 1.60.0
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> qdrant-client: 1.13.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sse-starlette: 2.1.3
> starlette: 0.45.2
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.3
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3324/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3310
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3310/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3310/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3310/events
|
https://github.com/langchain-ai/langgraph/issues/3310
| 2,831,142,083
|
I_kwDOKFU0lc6ov8zD
| 3,310
|
Typings for coroutine returned instead of value returned when awaiting async task (functional api)
|
{
"login": "lightsofapollo",
"id": 322957,
"node_id": "MDQ6VXNlcjMyMjk1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/322957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lightsofapollo",
"html_url": "https://github.com/lightsofapollo",
"followers_url": "https://api.github.com/users/lightsofapollo/followers",
"following_url": "https://api.github.com/users/lightsofapollo/following{/other_user}",
"gists_url": "https://api.github.com/users/lightsofapollo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lightsofapollo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lightsofapollo/subscriptions",
"organizations_url": "https://api.github.com/users/lightsofapollo/orgs",
"repos_url": "https://api.github.com/users/lightsofapollo/repos",
"events_url": "https://api.github.com/users/lightsofapollo/events{/privacy}",
"received_events_url": "https://api.github.com/users/lightsofapollo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-04T19:42:57
| 2025-02-05T03:43:27
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
@task
async def retrieve():
return "foo"
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
Using the langgraph functional API with async functions, I am getting typing errors: the type checker reports a coroutine instead of the final value when I await a task.
<img width="666" alt="Image" src="https://github.com/user-attachments/assets/7b3503bc-e6d6-4458-b3ce-75cdd77a7d6c" />
<img width="440" alt="Image" src="https://github.com/user-attachments/assets/ab661bd6-2e6c-4368-857d-cc50cf3dc4f0" />
<img width="692" alt="Image" src="https://github.com/user-attachments/assets/2f7af96b-a8aa-46aa-a6bc-8110ac57724b" />
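For context, a toy sketch of the annotation pattern the issue expects — this is NOT LangGraph's actual `@task` implementation, just a minimal stand-in showing how a decorator over async functions can be typed so that `await` yields the value type rather than a coroutine:

```python
import asyncio
from typing import Any, Awaitable, Callable, TypeVar

R = TypeVar("R")


def task(fn: Callable[..., Awaitable[R]]) -> Callable[..., Awaitable[R]]:
    """Toy stand-in for langgraph's @task (not the real implementation).

    The point is the return annotation: the decorated callable still
    returns Awaitable[R], so `await retrieve()` should type-check as R.
    """
    def wrapper(*args: Any, **kwargs: Any) -> Awaitable[R]:
        return fn(*args, **kwargs)
    return wrapper


@task
async def retrieve() -> str:
    return "foo"


async def main() -> str:
    value = await retrieve()  # a checker should infer `value: str` here
    return value


print(asyncio.run(main()))  # foo
```

If the real decorator's return annotation instead erases to `Callable[..., Coroutine[Any, Any, Coroutine[...]]]` (or similar), checkers will report exactly the double-coroutine typing shown in the screenshots.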
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
> Python Version: 3.12.8 (main, Jan 5 2025, 06:55:30) [Clang 19.1.6 ]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.15
> langsmith: 0.3.1
> langchain_anthropic: 0.3.4
> langchain_experimental: 0.3.4
> langchain_google_genai: 2.0.9
> langchain_openai: 0.3.2
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.45.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> google-generativeai: 0.8.4
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.61.0
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3310/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3299
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3299/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3299/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3299/events
|
https://github.com/langchain-ai/langgraph/issues/3299
| 2,829,019,579
|
I_kwDOKFU0lc6on2m7
| 3,299
|
DOC: Cannot connect LangGraph server to deployed Postgres instance
|
{
"login": "aeronesto",
"id": 13804518,
"node_id": "MDQ6VXNlcjEzODA0NTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/13804518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aeronesto",
"html_url": "https://github.com/aeronesto",
"followers_url": "https://api.github.com/users/aeronesto/followers",
"following_url": "https://api.github.com/users/aeronesto/following{/other_user}",
"gists_url": "https://api.github.com/users/aeronesto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aeronesto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aeronesto/subscriptions",
"organizations_url": "https://api.github.com/users/aeronesto/orgs",
"repos_url": "https://api.github.com/users/aeronesto/repos",
"events_url": "https://api.github.com/users/aeronesto/events{/privacy}",
"received_events_url": "https://api.github.com/users/aeronesto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-04T03:12:40
| 2025-02-18T02:30:27
| 2025-02-18T02:30:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
I have followed the **Introduction to LangGraph** course and I am trying to deploy a local instance of the LangGraph API server using the files in module-6/deployment.
I have set up Redis and Postgres instances on GCP, and confirmed that they are accepting connections. According to the course documentation, all I have to do is provide these instance URIs in the following command after I build my image with `langgraph build -t my-image`:
COMMAND:
```sh
docker run \
--env-file .env \
-p 8123:8000 \
-e POSTGRES_URI=<POSTGRES_URI> \
-e REDIS_URI=<REDIS_URI> \
my-image
```
I am getting the following error, which shows that my application is incorrectly attempting to connect to a local Postgres instance. How can I fix this?
Could it be that one of the LangGraph libraries is defaulting to a local Postgres instance? I have modified my `task_maistro.py` code (see below) to try to resolve this issue, but had no success.
ERROR:
```
2025-02-04T02:20:16.591708Z [info ] Using auth of type=noop [langgraph_api.auth.middleware] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:16.593067Z [info ] Started server process [1] [uvicorn.error] api_revision=d02afa8 api_variant=local color_message='Started server process [\x1b[36m%d\x1b[0m]'
2025-02-04T02:20:16.593171Z [info ] Waiting for application startup. [uvicorn.error] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:16.593387Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:16.900546Z [info ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:16.944656Z [warning ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:17.944788Z [warning ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:19.929930Z [warning ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:23.881929Z [warning ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:31.775841Z [warning ] error connecting in 'pool-1': [Errno -2] Name or service not known [psycopg.pool] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:46.960836Z [error ] Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 693, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/usr/local/lib/python3.11/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/api/langgraph_api/lifespan.py", line 29, in lifespan
File "/usr/local/lib/python3.11/site-packages/langgraph_storage/database.py", line 149, in start_pool
await _pg_pool.open(wait=True)
File "/usr/local/lib/python3.11/site-packages/psycopg_pool/pool_async.py", line 387, in open
await self.wait(timeout=timeout)
File "/usr/local/lib/python3.11/site-packages/psycopg_pool/pool_async.py", line 174, in wait
raise PoolTimeout(f"pool initialization incomplete after {timeout} sec")
psycopg_pool.PoolTimeout: pool initialization incomplete after 30.0 sec
[uvicorn.error] api_revision=d02afa8 api_variant=local
2025-02-04T02:20:46.961102Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=d02afa8 api_variant=local
```
Note:
I am able to run `docker compose up` (using the provided docker-compose.yml) and `langgraph dev` successfully. The issue arises when trying to use remote/deployed instances of Postgres, and possibly Redis (I haven't seen any errors about Redis yet).
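For what it's worth, the repeated `[Errno -2] Name or service not known` from psycopg is a DNS failure: the hostname inside `POSTGRES_URI` is not resolvable from inside the container. A quick, dependency-free way to sanity-check the URI that actually reaches the process (assuming a standard `postgresql://` URI; the host, credentials, and database below are hypothetical) might look like:

```python
from urllib.parse import urlsplit


def describe_postgres_uri(uri: str) -> dict:
    """Parse a postgresql:// URI and surface the host the driver will use.

    "Name or service not known" from psycopg usually means the hostname
    in the URI is wrong or not resolvable from inside the container.
    """
    parts = urlsplit(uri)
    return {
        "scheme": parts.scheme,
        "host": parts.hostname,
        "port": parts.port or 5432,
        "database": parts.path.lstrip("/"),
    }


# Hypothetical URI for illustration only:
info = describe_postgres_uri("postgresql://user:pw@10.0.0.5:5432/langgraph")
print(info)
```

Printing this at container startup (without the password) would quickly reveal whether the `-e POSTGRES_URI=...` value is reaching the process intact, or whether the server is falling back to a default `localhost`-style host.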
`task_maistro.py`:
```python
import uuid
import os
from datetime import datetime
from pydantic import BaseModel, Field
from trustcall import create_extractor
from typing import Literal, Optional, TypedDict
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import merge_message_runs
from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.store.postgres import PostgresStore
conn_string = os.getenv("POSTGRES_URI")
print('MY POSTGRES CONNECTION IS: ', conn_string)
# from my_agent import configuration
import configuration
## Utilities
# Inspect the tool calls for Trustcall
class Spy:
def __init__(self):
self.called_tools = []
def __call__(self, run):
q = [run]
while q:
r = q.pop()
if r.child_runs:
q.extend(r.child_runs)
if r.run_type == "chat_model":
self.called_tools.append(
r.outputs["generations"][0][0]["message"]["kwargs"]["tool_calls"]
)
# Extract information from tool calls for both patches and new memories in Trustcall
def extract_tool_info(tool_calls, schema_name="Memory"):
"""Extract information from tool calls for both patches and new memories.
Args:
tool_calls: List of tool calls from the model
schema_name: Name of the schema tool (e.g., "Memory", "ToDo", "Profile")
"""
# Initialize list of changes
changes = []
for call_group in tool_calls:
for call in call_group:
if call['name'] == 'PatchDoc':
# Check if there are any patches
if call['args']['patches']:
changes.append({
'type': 'update',
'doc_id': call['args']['json_doc_id'],
'planned_edits': call['args']['planned_edits'],
'value': call['args']['patches'][0]['value']
})
else:
# Handle case where no changes were needed
changes.append({
'type': 'no_update',
'doc_id': call['args']['json_doc_id'],
'planned_edits': call['args']['planned_edits']
})
elif call['name'] == schema_name:
changes.append({
'type': 'new',
'value': call['args']
})
# Format results as a single string
result_parts = []
for change in changes:
if change['type'] == 'update':
result_parts.append(
f"Document {change['doc_id']} updated:\n"
f"Plan: {change['planned_edits']}\n"
f"Added content: {change['value']}"
)
elif change['type'] == 'no_update':
result_parts.append(
f"Document {change['doc_id']} unchanged:\n"
f"{change['planned_edits']}"
)
else:
result_parts.append(
f"New {schema_name} created:\n"
f"Content: {change['value']}"
)
return "\n\n".join(result_parts)
## Schema definitions
# User profile schema
class Profile(BaseModel):
"""This is the profile of the user you are chatting with"""
name: Optional[str] = Field(description="The user's name", default=None)
location: Optional[str] = Field(description="The user's location", default=None)
job: Optional[str] = Field(description="The user's job", default=None)
connections: list[str] = Field(
description="Personal connection of the user, such as family members, friends, or coworkers",
default_factory=list
)
interests: list[str] = Field(
description="Interests that the user has",
default_factory=list
)
# ToDo schema
class ToDo(BaseModel):
task: str = Field(description="The task to be completed.")
time_to_complete: Optional[int] = Field(description="Estimated time to complete the task (minutes).")
deadline: Optional[datetime] = Field(
description="When the task needs to be completed by (if applicable)",
default=None
)
solutions: list[str] = Field(
description="List of specific, actionable solutions (e.g., specific ideas, service providers, or concrete options relevant to completing the task)",
min_items=1,
default_factory=list
)
status: Literal["not started", "in progress", "done", "archived"] = Field(
description="Current status of the task",
default="not started"
)
## Initialize the model and tools
# Update memory tool
class UpdateMemory(TypedDict):
""" Decision on what memory type to update """
update_type: Literal['user', 'todo', 'instructions']
# Initialize the model
model = ChatOpenAI(model="gpt-4o", temperature=0)
## Create the Trustcall extractors for updating the user profile and ToDo list
profile_extractor = create_extractor(
model,
tools=[Profile],
tool_choice="Profile",
)
## Prompts
# Chatbot instruction for choosing what to update and what tools to call
MODEL_SYSTEM_MESSAGE = """{task_maistro_role}
You have a long term memory which keeps track of three things:
1. The user's profile (general information about them)
2. The user's ToDo list
3. General instructions for updating the ToDo list
Here is the current User Profile (may be empty if no information has been collected yet):
<user_profile>
{user_profile}
</user_profile>
Here is the current ToDo List (may be empty if no tasks have been added yet):
<todo>
{todo}
</todo>
Here are the current user-specified preferences for updating the ToDo list (may be empty if no preferences have been specified yet):
<instructions>
{instructions}
</instructions>
Here are your instructions for reasoning about the user's messages:
1. Reason carefully about the user's messages as presented below.
2. Decide whether any of your long-term memory should be updated:
- If personal information was provided about the user, update the user's profile by calling UpdateMemory tool with type `user`
- If tasks are mentioned, update the ToDo list by calling UpdateMemory tool with type `todo`
- If the user has specified preferences for how to update the ToDo list, update the instructions by calling UpdateMemory tool with type `instructions`
3. Tell the user that you have updated your memory, if appropriate:
- Do not tell the user you have updated the user's profile
- Tell the user when you update the todo list
- Do not tell the user that you have updated instructions
4. Err on the side of updating the todo list. No need to ask for explicit permission.
5. Respond naturally to the user after a tool call was made to save memories, or if no tool call was made."""
# Trustcall instruction
TRUSTCALL_INSTRUCTION = """Reflect on following interaction.
Use the provided tools to retain any necessary memories about the user.
Use parallel tool calling to handle updates and insertions simultaneously.
System Time: {time}"""
# Instructions for updating the ToDo list
CREATE_INSTRUCTIONS = """Reflect on the following interaction.
Based on this interaction, update your instructions for how to update ToDo list items. Use any feedback from the user to update how they like to have items added, etc.
Your current instructions are:
<current_instructions>
{current_instructions}
</current_instructions>"""
## Node definitions
def task_mAIstro(state: MessagesState, config: RunnableConfig, store: PostgresStore):
"""Load memories from the store and use them to personalize the chatbot's response."""
with store.from_conn_string(conn_string) as postgres_store:
# Get the user ID from the config
configurable = configuration.Configuration.from_runnable_config(config)
user_id = configurable.user_id
todo_category = configurable.todo_category
task_maistro_role = configurable.task_maistro_role
# Retrieve profile memory from the store
namespace = ("profile", todo_category, user_id)
memories = postgres_store.search(namespace)
if memories:
user_profile = memories[0].value
else:
user_profile = None
# Retrieve people memory from the store
namespace = ("todo", todo_category, user_id)
memories = postgres_store.search(namespace)
todo = "\n".join(f"{mem.value}" for mem in memories)
# Retrieve custom instructions
namespace = ("instructions", todo_category, user_id)
memories = postgres_store.search(namespace)
if memories:
instructions = memories[0].value
else:
instructions = ""
system_msg = MODEL_SYSTEM_MESSAGE.format(task_maistro_role=task_maistro_role, user_profile=user_profile, todo=todo, instructions=instructions)
# Respond using memory as well as the chat history
response = model.bind_tools([UpdateMemory], parallel_tool_calls=False).invoke([SystemMessage(content=system_msg)]+state["messages"])
return {"messages": [response]}
def update_profile(state: MessagesState, config: RunnableConfig, store: PostgresStore):
"""Reflect on the chat history and update the memory collection."""
with store.from_conn_string(conn_string) as postgres_store:
# Get the user ID from the config
configurable = configuration.Configuration.from_runnable_config(config)
user_id = configurable.user_id
todo_category = configurable.todo_category
# Define the namespace for the memories
namespace = ("profile", todo_category, user_id)
# Retrieve the most recent memories for context
existing_items = postgres_store.search(namespace)
# Format the existing memories for the Trustcall extractor
tool_name = "Profile"
existing_memories = ([(existing_item.key, tool_name, existing_item.value)
for existing_item in existing_items]
if existing_items
else None
)
# Merge the chat history and the instruction
TRUSTCALL_INSTRUCTION_FORMATTED = TRUSTCALL_INSTRUCTION.format(time=datetime.now().isoformat())
updated_messages = list(merge_message_runs(messages=[SystemMessage(content=TRUSTCALL_INSTRUCTION_FORMATTED)] + state["messages"][:-1]))
# Invoke the extractor
result = profile_extractor.invoke({"messages": updated_messages,
"existing": existing_memories})
# Save the memories from Trustcall to the store
for r, rmeta in zip(result["responses"], result["response_metadata"]):
postgres_store.put(namespace,
rmeta.get("json_doc_id", str(uuid.uuid4())),
r.model_dump(mode="json"),
)
tool_calls = state['messages'][-1].tool_calls
# Return tool message with update verification
return {"messages": [{"role": "tool", "content": "updated profile", "tool_call_id":tool_calls[0]['id']}]}
def update_todos(state: MessagesState, config: RunnableConfig, store: PostgresStore):
"""Reflect on the chat history and update the memory collection."""
with store.from_conn_string(conn_string) as postgres_store:
# Get the user ID from the config
configurable = configuration.Configuration.from_runnable_config(config)
user_id = configurable.user_id
todo_category = configurable.todo_category
# Define the namespace for the memories
namespace = ("todo", todo_category, user_id)
postgres_store.setup()
# Retrieve the most recent memories for context
existing_items = postgres_store.search(namespace)
# Format the existing memories for the Trustcall extractor
tool_name = "ToDo"
existing_memories = ([(existing_item.key, tool_name, existing_item.value)
for existing_item in existing_items]
if existing_items
else None
)
# Merge the chat history and the instruction
TRUSTCALL_INSTRUCTION_FORMATTED = TRUSTCALL_INSTRUCTION.format(time=datetime.now().isoformat())
updated_messages = list(merge_message_runs(messages=[SystemMessage(content=TRUSTCALL_INSTRUCTION_FORMATTED)] + state["messages"][:-1]))
# Initialize the spy for visibility into the tool calls made by Trustcall
spy = Spy()
# Create the Trustcall extractor for updating the ToDo list
todo_extractor = create_extractor(
model,
tools=[ToDo],
tool_choice=tool_name,
enable_inserts=True
).with_listeners(on_end=spy)
# Invoke the extractor
result = todo_extractor.invoke({"messages": updated_messages,
"existing": existing_memories})
# Save the memories from Trustcall to the store
for r, rmeta in zip(result["responses"], result["response_metadata"]):
postgres_store.put(namespace,
rmeta.get("json_doc_id", str(uuid.uuid4())),
r.model_dump(mode="json"),
)
# Respond to the tool call made in task_mAIstro, confirming the update
tool_calls = state['messages'][-1].tool_calls
# Extract the changes made by Trustcall and add them to the ToolMessage returned to task_mAIstro
todo_update_msg = extract_tool_info(spy.called_tools, tool_name)
return {"messages": [{"role": "tool", "content": todo_update_msg, "tool_call_id":tool_calls[0]['id']}]}
def update_instructions(state: MessagesState, config: RunnableConfig, store: PostgresStore):
"""Reflect on the chat history and update the memory collection."""
with store.from_conn_string(conn_string) as postgres_store:
# Get the user ID from the config
configurable = configuration.Configuration.from_runnable_config(config)
user_id = configurable.user_id
todo_category = configurable.todo_category
namespace = ("instructions", todo_category, user_id)
existing_memory = postgres_store.get(namespace, "user_instructions")
# Format the memory in the system prompt
system_msg = CREATE_INSTRUCTIONS.format(current_instructions=existing_memory.value if existing_memory else None)
new_memory = model.invoke([SystemMessage(content=system_msg)]+state['messages'][:-1] + [HumanMessage(content="Please update the instructions based on the conversation")])
# Overwrite the existing memory in the store
key = "user_instructions"
postgres_store.put(namespace, key, {"memory": new_memory.content})
tool_calls = state['messages'][-1].tool_calls
# Return tool message with update verification
return {"messages": [{"role": "tool", "content": "updated instructions", "tool_call_id":tool_calls[0]['id']}]}
# Conditional edge
def route_message(state: MessagesState, config: RunnableConfig, store: PostgresStore) -> Literal[END, "update_todos", "update_instructions", "update_profile"]:
"""Reflect on the memories and chat history to decide whether to update the memory collection."""
message = state['messages'][-1]
if len(message.tool_calls) == 0:
return END
else:
tool_call = message.tool_calls[0]
if tool_call['args']['update_type'] == "user":
return "update_profile"
elif tool_call['args']['update_type'] == "todo":
return "update_todos"
elif tool_call['args']['update_type'] == "instructions":
return "update_instructions"
else:
raise ValueError
# Create the graph + all nodes
builder = StateGraph(MessagesState, config_schema=configuration.Configuration)
# Define the flow of the memory extraction process
builder.add_node(task_mAIstro)
builder.add_node(update_todos)
builder.add_node(update_profile)
builder.add_node(update_instructions)
# Define the flow
builder.add_edge(START, "task_mAIstro")
builder.add_conditional_edges("task_mAIstro", route_message)
builder.add_edge("update_todos", "task_mAIstro")
builder.add_edge("update_profile", "task_mAIstro")
builder.add_edge("update_instructions", "task_mAIstro")
# Compile the graph
graph = builder.compile()
```
### Idea or request for content:
_No response_
|
{
"login": "aeronesto",
"id": 13804518,
"node_id": "MDQ6VXNlcjEzODA0NTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/13804518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aeronesto",
"html_url": "https://github.com/aeronesto",
"followers_url": "https://api.github.com/users/aeronesto/followers",
"following_url": "https://api.github.com/users/aeronesto/following{/other_user}",
"gists_url": "https://api.github.com/users/aeronesto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aeronesto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aeronesto/subscriptions",
"organizations_url": "https://api.github.com/users/aeronesto/orgs",
"repos_url": "https://api.github.com/users/aeronesto/repos",
"events_url": "https://api.github.com/users/aeronesto/events{/privacy}",
"received_events_url": "https://api.github.com/users/aeronesto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3299/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3291
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3291/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3291/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3291/events
|
https://github.com/langchain-ai/langgraph/issues/3291
| 2,828,223,706
|
I_kwDOKFU0lc6ok0Ta
| 3,291
|
"Failed to fetch" in external browser to LangSmith Studio
|
{
"login": "kbfifi",
"id": 35601089,
"node_id": "MDQ6VXNlcjM1NjAxMDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/35601089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbfifi",
"html_url": "https://github.com/kbfifi",
"followers_url": "https://api.github.com/users/kbfifi/followers",
"following_url": "https://api.github.com/users/kbfifi/following{/other_user}",
"gists_url": "https://api.github.com/users/kbfifi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbfifi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbfifi/subscriptions",
"organizations_url": "https://api.github.com/users/kbfifi/orgs",
"repos_url": "https://api.github.com/users/kbfifi/repos",
"events_url": "https://api.github.com/users/kbfifi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbfifi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-03T18:26:16
| 2025-02-06T21:48:41
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# Does not apply as I'm still setting up
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
After installation I'm trying to see the Studio environment. So far I:
```
python3 -m venv langgraphstudiotest
source langgraphstudiotest/bin/activate
pip install --upgrade "langgraph-cli[inmem]"
langgraph new langgraphstudiotest --template react-agent-python
cd langgraphstudiotest
mkdir app
langgraph new app --template react-agent-python
cd app/
pip install -e .
nano .env
langgraph dev --host 0.0.0.0
```
All well up till here.
What works from a **remote** web browser:
http://192.168.178.31:2024/ok => {"ok":true}
http://192.168.178.31:2024/docs => works
This doesn't work:
https://smith.langchain.com/studio/thread?baseUrl=http://192.168.178.31:2024 => Page shows: "Failed to fetch"
Also tried with
langgraph dev --host 192.168.178.31
Both give same results
See also #3261
Installed fix:
pip install --upgrade "langgraph-cli[inmem]" "langgraph-api==0.0.21"
Unfortunately, this did not fix my issue.
### System Info
python3 -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Nov 12 08:48:32 UTC 2024
> Python Version: 3.11.2 (main, Nov 30 2024, 21:22:50) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.16
> langsmith: 0.2.11
> langchain_anthropic: 0.3.5
> langchain_fireworks: 0.2.7
> langchain_openai: 0.3.3
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.21
> langgraph_cli: 0.1.70
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.45.2
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fireworks-ai: 0.15.12
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.60.2
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3291/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3289
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3289/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3289/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3289/events
|
https://github.com/langchain-ai/langgraph/issues/3289
| 2,828,190,960
|
I_kwDOKFU0lc6oksTw
| 3,289
|
Serialization Error
|
{
"login": "BharahthyKannan",
"id": 17752392,
"node_id": "MDQ6VXNlcjE3NzUyMzky",
"avatar_url": "https://avatars.githubusercontent.com/u/17752392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BharahthyKannan",
"html_url": "https://github.com/BharahthyKannan",
"followers_url": "https://api.github.com/users/BharahthyKannan/followers",
"following_url": "https://api.github.com/users/BharahthyKannan/following{/other_user}",
"gists_url": "https://api.github.com/users/BharahthyKannan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BharahthyKannan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BharahthyKannan/subscriptions",
"organizations_url": "https://api.github.com/users/BharahthyKannan/orgs",
"repos_url": "https://api.github.com/users/BharahthyKannan/repos",
"events_url": "https://api.github.com/users/BharahthyKannan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BharahthyKannan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-03T18:09:10
| 2025-02-05T21:34:35
| 2025-02-05T21:34:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
builder = StateGraph(State)
builder.add_node(call_llm)
builder.add_node(run_tool)
builder.add_node(human_review_node)
builder.add_edge(START, "call_llm")
builder.add_conditional_edges("call_llm", route_after_llm)
builder.add_edge("run_tool", "call_llm")
# This is the simple graph
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I am trying to serialize a CompiledStateGraph object to reuse across API calls. When I try to pickle it, I get the error below:
`"Can't pickle local object 'CompiledStateGraph.attach_node.<locals>._get_updates'"`
How can I serialize the CompiledStateGraph?
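A compiled graph holds local closures (such as the `_get_updates` function named in the error), which the stdlib pickler cannot serialize by design. The limitation can be demonstrated without langgraph at all; the sketch below uses illustrative names, not langgraph internals:

```python
import pickle

def attach_node():
    # A local closure, analogous to what CompiledStateGraph creates internally
    def _get_updates(state):
        return state
    return _get_updates

try:
    pickle.dumps(attach_node())
    picklable = True
except (AttributeError, pickle.PicklingError):
    picklable = False

print(picklable)
```

The usual workaround is to rebuild the graph once per process (compilation is cheap) and persist only the conversation state across API calls via a checkpointer, rather than serializing the graph object itself.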
### System Info
Python
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3289/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3286
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3286/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3286/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3286/events
|
https://github.com/langchain-ai/langgraph/issues/3286
| 2,827,773,331
|
I_kwDOKFU0lc6ojGWT
| 3,286
|
problem with Langfuse not retrieving model call costs when using invoke with create_react_agent
|
{
"login": "Guemri-Jawher",
"id": 84639609,
"node_id": "MDQ6VXNlcjg0NjM5NjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/84639609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guemri-Jawher",
"html_url": "https://github.com/Guemri-Jawher",
"followers_url": "https://api.github.com/users/Guemri-Jawher/followers",
"following_url": "https://api.github.com/users/Guemri-Jawher/following{/other_user}",
"gists_url": "https://api.github.com/users/Guemri-Jawher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guemri-Jawher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guemri-Jawher/subscriptions",
"organizations_url": "https://api.github.com/users/Guemri-Jawher/orgs",
"repos_url": "https://api.github.com/users/Guemri-Jawher/repos",
"events_url": "https://api.github.com/users/Guemri-Jawher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guemri-Jawher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-03T15:03:56
| 2025-02-03T15:47:39
| 2025-02-03T15:47:38
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
---------------------------------------------------------
from langfuse.callback import CallbackHandler
langfuse_handler = CallbackHandler(
httpx_client=client,
public_key=langfuse_config.LANGFUSE_PUBLIC_KEY,
secret_key=langfuse_config.LANGFUSE_SECRET_KEY,
host=langfuse_config.LANGFUSE_HOST_URL,
tags=[tag],
session_id=conversation_id,
)
--------------------------------------------------------------
from langchain.schema.runnable.config import RunnableConfig
llm = ChatOpenAI()
system_message = SystemMessage(content=SYSTEM_PROMPT)
memory = MemorySaver()
agent= create_react_agent(
model=llm,
tools=[retriever_tool],
state_modifier=system_message,
checkpointer=memory
)
config = RunnableConfig(callbacks=[langfuse_handler], configurable={
"thread_id": "thread_id"})
await agent.ainvoke(inputs, config)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I use LangGraph and Langfuse to track and log model interactions.
I expect to see both the trace and the cost of model calls in the Langfuse dashboard.
Instead, it only logs the request input and output without calculating the total token usage and cost of the call.

### System Info
I use:
langchain_core: 0.3.32
langchain: 0.3.16
langchain_community: 0.3.16
langGraph: 0.2.59
langfuse: 2.36.2
python: 3.12.8
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3286/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3281
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3281/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3281/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3281/events
|
https://github.com/langchain-ai/langgraph/issues/3281
| 2,826,016,272
|
I_kwDOKFU0lc6ocZYQ
| 3,281
|
AttributeError: 'Command' object has no attribute 'content'
|
{
"login": "nick-youngblut",
"id": 2468572,
"node_id": "MDQ6VXNlcjI0Njg1NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2468572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nick-youngblut",
"html_url": "https://github.com/nick-youngblut",
"followers_url": "https://api.github.com/users/nick-youngblut/followers",
"following_url": "https://api.github.com/users/nick-youngblut/following{/other_user}",
"gists_url": "https://api.github.com/users/nick-youngblut/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nick-youngblut/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nick-youngblut/subscriptions",
"organizations_url": "https://api.github.com/users/nick-youngblut/orgs",
"repos_url": "https://api.github.com/users/nick-youngblut/repos",
"events_url": "https://api.github.com/users/nick-youngblut/events{/privacy}",
"received_events_url": "https://api.github.com/users/nick-youngblut/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-02T21:14:38
| 2025-02-14T17:20:23
| 2025-02-14T17:20:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from typing import Annotated, List
from langgraph.types import Command
from langchain_core.tools import tool
from langchain_core.messages import AIMessage
def create_handoff_tool(agent_list: List[str]):
"""Create a tool that can return handoff via a Command"""
available_agents = agent_list + ["__end__"]
@tool
def handoff_to_agent(
agent_name: Annotated[str, "The name of the agent to handoff to"]
):
"""Handoff to another agent or end the task."""
# check if agent name in the list
if agent_name not in available_agents:
error_message = f"Agent {agent_name} is not available. Choose one of: {', '.join(available_agents)}",
return Command(
graph=Command.PARENT,
update={"messages": [AIMessage(content=error_message)]},
)
# return the routing command
return Command(
goto=agent_name,
graph=Command.PARENT,
update={"messages": [AIMessage(content=f"Successfully transferred to {agent_name}")]},
)
# dynamically modify doc string
handoff_to_agent.__doc__ = "\n".join([
"Transfer to another agent or end the task.",
f"Available agents: {', '.join(available_agents)}.",
"If you want to end the task, use '__end__'."
])
return handoff_to_agent
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/Users/nickyoungblut/mambaforge/envs/genomics-guide2/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File "/Users/nickyoungblut/mambaforge/envs/genomics-guide2/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File "/Users/nickyoungblut/dev/python/streamlit/genomics_guide2/app.py", line 133, in <module>
response = asyncio.run(
^^^^^^^^^^^^
File "/Users/nickyoungblut/mambaforge/envs/genomics-guide2/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/nickyoungblut/mambaforge/envs/genomics-guide2/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nickyoungblut/mambaforge/envs/genomics-guide2/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/nickyoungblut/dev/python/streamlit/genomics_guide2/genomics_guide2/astream_event_handler.py", line 49, in astream_graph
output_placeholder.code(event['data'].get('output').content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Command' object has no attribute 'content'
```
### Description
`Command` is not working with this simple example, and `output_placeholder.code(event['data'].get('output').content)` is not very helpful for determining the cause of the error.
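Since tools can return `Command` objects (routing directives) rather than messages, a streaming handler that reads `.content` on every tool output needs a guard. A minimal defensive sketch, with illustrative names (this is not langgraph API, just the shape of the check the handler in the traceback is missing):

```python
def render_tool_output(output):
    """Return display text for a streamed tool output, which may be a
    message-like object (has .content) or a Command routing object."""
    content = getattr(output, "content", None)
    if content is not None:
        return content
    # Command (or any other non-message payload): fall back to its repr
    return repr(output)
```

With this guard, `output_placeholder.code(render_tool_output(event['data'].get('output')))` would show the routing object instead of raising `AttributeError`.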
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:56:34 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6020
> Python Version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:21:42) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.33
> langchain: 0.3.17
> langchain_community: 0.3.13
> langsmith: 0.1.147
> langchain_groq: 0.2.2
> langchain_openai: 0.3.3
> langchain_text_splitters: 0.3.4
> langchain_weaviate: 0.0.3
> langchainhub: 0.1.21
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> groq: 0.13.1
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.61.0
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> simsimd: 4.4.0
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
> weaviate-client: 4.8.1
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3281/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3275
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3275/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3275/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3275/events
|
https://github.com/langchain-ai/langgraph/issues/3275
| 2,824,981,837
|
I_kwDOKFU0lc6oYc1N
| 3,275
|
Interrupt using the same old question, when invoked second time.
|
{
"login": "Saisiva123",
"id": 36258631,
"node_id": "MDQ6VXNlcjM2MjU4NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/36258631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saisiva123",
"html_url": "https://github.com/Saisiva123",
"followers_url": "https://api.github.com/users/Saisiva123/followers",
"following_url": "https://api.github.com/users/Saisiva123/following{/other_user}",
"gists_url": "https://api.github.com/users/Saisiva123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saisiva123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saisiva123/subscriptions",
"organizations_url": "https://api.github.com/users/Saisiva123/orgs",
"repos_url": "https://api.github.com/users/Saisiva123/repos",
"events_url": "https://api.github.com/users/Saisiva123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saisiva123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292607,
"node_id": "LA_kwDOKFU0lc8AAAABW2Sv_w",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-02-01T05:54:53
| 2025-02-05T02:43:48
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
def ask_user_node(state: LookupState) -> Command[Literal['lookup_node']]:
    user_response = interrupt(state['messages'][-1].content) # instead of taking the new message content, it's taking the old value it already showed to the user.
if user_response:
return Command(goto='lookup_node',
update={'messages': [HumanMessage(content=user_response, name="User_Response")]})
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I have a node that calls `interrupt`, and this node may be executed multiple times to collect information from the user. When `interrupt` is invoked the second time, it shows the previous question to the user instead of the new question.
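For context, LangGraph's human-in-the-loop docs describe resuming as replaying the interrupted node from the top, with saved resume values matched to `interrupt()` calls by call order — so a stale payload usually means the payload expression is being evaluated against state that has not changed between replays. A toy model of that order-based matching (hypothetical names, not LangGraph internals):

```python
# Toy replay model: resume values are consumed strictly in the order
# interrupt() is called during a replay; only an unmatched call pauses
# and surfaces its payload to the user.
class FakeReplay:
    def __init__(self, resume_values):
        self.resume_values = list(resume_values)
        self.shown = []  # payloads that would be surfaced to the user

    def interrupt(self, payload):
        if self.resume_values:
            return self.resume_values.pop(0)
        self.shown.append(payload)
        raise RuntimeError("paused")

def node(replay, question):
    return replay.interrupt(question)

replay = FakeReplay([])
try:
    node(replay, "first question")
except RuntimeError:
    pass  # paused; "first question" is what the user sees
```

Under this model, the question the user sees on the second pause is whatever the node computes as its payload on that replay — which is why the payload must be derived from current state each time.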
### System Info
python -m langchain_core.sys_info
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3275/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3267
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3267/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3267/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3267/events
|
https://github.com/langchain-ai/langgraph/issues/3267
| 2,824,143,905
|
I_kwDOKFU0lc6oVQQh
| 3,267
|
Agents failing to load Environment Variables
|
{
"login": "magallardo",
"id": 4683095,
"node_id": "MDQ6VXNlcjQ2ODMwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4683095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/magallardo",
"html_url": "https://github.com/magallardo",
"followers_url": "https://api.github.com/users/magallardo/followers",
"following_url": "https://api.github.com/users/magallardo/following{/other_user}",
"gists_url": "https://api.github.com/users/magallardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/magallardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/magallardo/subscriptions",
"organizations_url": "https://api.github.com/users/magallardo/orgs",
"repos_url": "https://api.github.com/users/magallardo/repos",
"events_url": "https://api.github.com/users/magallardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/magallardo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-31T18:28:14
| 2025-02-06T15:45:20
| 2025-02-06T15:45:20
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import os
from dotenv import load_dotenv
# Load environment variables from a .env file
load_dotenv()
var1 = os.environ.get("VAR1")
var2 = os.environ.get("VAR2")
print("=====================================")
print(f"var1: {var1}")
print(f"var2: {var2}")
print("=====================================")
```
### Error Message and Stack Trace (if applicable)
```shell
2025-01-31 13:05:22 =====================================
2025-01-31 13:05:22 var1: None
2025-01-31 13:05:22 var2: None
2025-01-31 13:05:22 =====================================
```
### Description
I have an application deployed to a local LangGraph server. The application contains several agents; I configured the recommended file structure, provided a `.env` file with the variables, and added an `env` entry to `langgraph.json`.
When I run and call one of the agents with the code above added, the variables are not loaded.
The structure of my project is as follows:
my-app/
|── my_agent1
│ |── utils
│ │ |── __init__.py
│ │ |── tools.py
│ │ |── nodes.py
│ │ |── state.py
│ |── requirements.txt
│ |── __init__.py
│ |── agent.py
|── my_agent2
│ |── utils
│ │ |── __init__.py
│ │ |── tools.py
│ │ |── nodes.py
│ │ |── state.py
│ |── requirements.txt
│ |── __init__.py
│ |── agent.py
|── my_agent3
│ |── utils
│ │ |── __init__.py
│ │ |── tools.py
│ │ |── nodes.py
│ │ |── state.py
│ |── requirements.txt
│ |── __init__.py
│ |── agent.py
|── .env # environment variables
|── langgraph.json # configuration file for LangGraph
The langgraph.json file is:
{
"dockerfile_lines": [],
"dependencies": [
"./my_agent1",
"./my_agent2",
"./my_agent3"
],
"graphs": {
"agent1": "./my_agent1/agent.py:graph",
"agent2": "./my_agent2/agent.py:graph",
"agent3": "./my_agent3/agent.py:graph"
},
"env": "./.env",
"python_version": "3.11"
}
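One way to rule out a working-directory problem (the server process may not start in the project root, so a bare `load_dotenv()` finds no file) is to resolve the `.env` path explicitly. A minimal stdlib sketch, assuming a plain `KEY=value` file; the function name is hypothetical:

```python
import os
from pathlib import Path

def load_env_file(path):
    """Parse a simple KEY=value .env file and export any unset keys."""
    loaded = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        os.environ.setdefault(key.strip(), value.strip())
    return loaded

# e.g. load_env_file(Path(__file__).resolve().parent / ".env")
```

If the variables appear with an explicit path but not with `load_dotenv()`, the server's working directory is the culprit rather than the `langgraph.json` `env` entry.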
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.10
> langchain_openai: 0.3.2
> langchain_text_splitters: 0.3.5
> langgraph_cli: 0.1.67
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> click: 8.1.8
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langgraph-api: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
{
"login": "magallardo",
"id": 4683095,
"node_id": "MDQ6VXNlcjQ2ODMwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4683095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/magallardo",
"html_url": "https://github.com/magallardo",
"followers_url": "https://api.github.com/users/magallardo/followers",
"following_url": "https://api.github.com/users/magallardo/following{/other_user}",
"gists_url": "https://api.github.com/users/magallardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/magallardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/magallardo/subscriptions",
"organizations_url": "https://api.github.com/users/magallardo/orgs",
"repos_url": "https://api.github.com/users/magallardo/repos",
"events_url": "https://api.github.com/users/magallardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/magallardo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3267/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3266
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3266/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3266/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3266/events
|
https://github.com/langchain-ai/langgraph/issues/3266
| 2,824,124,996
|
I_kwDOKFU0lc6oVLpE
| 3,266
|
Unexpected State Update
|
{
"login": "Sarthak-ONS",
"id": 57105611,
"node_id": "MDQ6VXNlcjU3MTA1NjEx",
"avatar_url": "https://avatars.githubusercontent.com/u/57105611?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sarthak-ONS",
"html_url": "https://github.com/Sarthak-ONS",
"followers_url": "https://api.github.com/users/Sarthak-ONS/followers",
"following_url": "https://api.github.com/users/Sarthak-ONS/following{/other_user}",
"gists_url": "https://api.github.com/users/Sarthak-ONS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sarthak-ONS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sarthak-ONS/subscriptions",
"organizations_url": "https://api.github.com/users/Sarthak-ONS/orgs",
"repos_url": "https://api.github.com/users/Sarthak-ONS/repos",
"events_url": "https://api.github.com/users/Sarthak-ONS/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sarthak-ONS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-31T18:00:50
| 2025-02-03T14:58:55
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage , AIMessage
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from typing import Annotated, List, TypedDict
from rich.console import Console
from rich.markdown import Markdown
from rich import print as rprint
from environs import Env
from prompts import system_prompt
from moviepy import *
from moviepy.video.io import ffmpeg_tools  # needed for ffmpeg_extract_subclip below
import uuid
env = Env()
env.read_env()
console = Console()
# Memory configuration
memory = MemorySaver()
video_path = "video.mp4"
class VideoAppStateGraph(TypedDict):
messages : Annotated[list , add_messages]
video_path : str
@tool
def trim_video(start_time: str, end_time: str, state : VideoAppStateGraph ,config: RunnableConfig):
"""
Trim a video between specified start and end times.
Args:
start_time (str): Start time in format HH:MM:SS
end_time (str): End time in format HH:MM:SS
Returns:
str: Confirmation message
"""
print("-"*10)
print("trim_video")
# print(state)
# print("-"*10)
print(config)
print("-"*10)
print(start_time)
print(end_time)
print("-"*10)
video_path_trimmed = f"{uuid.uuid4().hex}.mp4"
ffmpeg_tools.ffmpeg_extract_subclip(state["video_path"] , start_time , end_time ,video_path_trimmed)
return {
"video_path" : video_path_trimmed,
"messages" : [AIMessage(content=f"Video has been trimmed from {start_time} to {end_time}")]
}
@tool
def get_video_duration(state : VideoAppStateGraph):
"""
Get Duration of the Video
Returns:
str: Duration of the video
"""
print("-"*10)
print("get_video_duration")
print(state["video_path"])
print("-"*10)
video = VideoFileClip(state["video_path"])
duration = video.duration
return {
"video_path" : state["video_path"],
"messages" : [AIMessage(content=f"Duration is {int(duration)} seconds")]
}
@tool
def get_video_url(state : VideoAppStateGraph):
"""
Get the video url of updated actions of video
return str: video_url
"""
base = env.str("HOSTED_BACKEND_URL")
return {
"video_path" : state["video_path"],
"messages" : [AIMessage(content=f"{base}/{state["video_path"]}")]
}
# Define tools list
tools = [trim_video, get_video_duration , get_video_url]
# Create tool node
tool_node = ToolNode(tools=tools)
prompt = ChatPromptTemplate.from_messages([
("system", system_prompt),
MessagesPlaceholder(variable_name="messages"),
])
# Initialize the model with tools
model = ChatOpenAI(
temperature=0.5,
api_key=env.str("OPENAI_API_KEY"),
).bind_tools(tools)
def call_model(state):
"""
Process the current state and generate a response.
"""
print("*"*10)
print("call_model")
print(state["video_path"])
print("*"*10)
response = model.invoke(prompt.invoke({"messages": state["messages"]}))
return {"messages": [response]}
# Create the workflow graph
workflow = StateGraph(VideoAppStateGraph)
# Add nodes
workflow.add_node("assistant", call_model)
workflow.add_node("tools", tool_node)
# Add Edges
workflow.add_edge(START, "assistant")
workflow.add_conditional_edges("assistant", tools_condition)
workflow.add_edge("tools", "assistant")
# Compile the app
app = workflow.compile(checkpointer=memory)
class VideoEditor:
def __init__(self):
self.messages: List[HumanMessage] = []
self.video_path : str = video_path
def process_message(self, user_input: str) -> str:
"""Process a single message and return the response"""
self.messages.append(HumanMessage(content=user_input))
response = app.invoke({
"messages": self.messages,
"video_path" : self.video_path
} , config={ "configurable" : {"thread_id": "1312312312"}})
# Extract the last assistant message
last_message = response["messages"][-1]
self.messages.extend(response["messages"])
return last_message.content
def main():
"""Main CLI interface for the video editing agent"""
editor = VideoEditor()
# Print welcome message
console.print("[bold blue]Welcome to the Video Editing Assistant![/bold blue]")
console.print("Type 'quit' or 'exit' to end the conversation.\n")
while True:
try:
# Get user input
user_input = console.input("[bold green]You:[/bold green] ")
# Check for exit command
if user_input.lower() in ['quit', 'exit', 'bye', 'goodbye']:
console.print("\n[bold blue]Goodbye! Thank you for using the Video Editing Assistant.[/bold blue]")
break
# Process the message
response = editor.process_message(user_input)
# Print the response with markdown formatting
console.print("\n[bold purple]Assistant:[/bold purple]")
console.print(Markdown(response))
console.print() # Empty line for better readability
except KeyboardInterrupt:
console.print("\n[bold red]Session terminated by user.[/bold red]")
break
except Exception as e:
console.print(f"\n[bold red]An error occurred: {str(e)}[/bold red]")
console.print("Please try again or type 'exit' to quit.")
if __name__ == "__main__":
main()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Unexpected state change: initially, when `call_model` runs, `state["video_path"]` is correct (`"video.mp4"`), but when a tool is called the state is unexpectedly different — the print inside the `trim_video` tool shows `state["video_path"]` as `"sample_video.mp4"`.
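A likely cause: `state` is declared as an ordinary tool argument, so it appears in the tool schema shown to the model, and the model itself fabricates a value for it (here `"sample_video.mp4"`) instead of LangGraph injecting the real graph state. In current LangGraph the documented mechanism is to annotate such parameters with `langgraph.prebuilt.InjectedState`, which hides them from the model. A pure-Python sketch of the idea (names hypothetical, not LangGraph internals):

```python
import inspect

# Parameters the framework, not the model, is supposed to fill in.
INJECTED = {"state", "config"}

def model_visible_args(fn):
    """Arguments that would appear in the tool schema shown to the LLM."""
    return [p for p in inspect.signature(fn).parameters if p not in INJECTED]

def trim_video(start_time: str, end_time: str, state=None, config=None):
    ...

# Only start_time/end_time should be model-visible; `state` must be injected,
# never guessed by the model.
```

If `state` is model-visible, any value the model hallucinates for it silently replaces the real one inside the tool call.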
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.33
> langsmith: 0.3.3
> langchain_openai: 0.3.3
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> openai: 1.60.2
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3266/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3266/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3261
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3261/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3261/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3261/events
|
https://github.com/langchain-ai/langgraph/issues/3261
| 2,822,950,746
|
I_kwDOKFU0lc6oQs9a
| 3,261
|
LangSmith Studio does not work in Chrome due to CORS issue
|
{
"login": "adamduren",
"id": 581097,
"node_id": "MDQ6VXNlcjU4MTA5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/581097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamduren",
"html_url": "https://github.com/adamduren",
"followers_url": "https://api.github.com/users/adamduren/followers",
"following_url": "https://api.github.com/users/adamduren/following{/other_user}",
"gists_url": "https://api.github.com/users/adamduren/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamduren/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamduren/subscriptions",
"organizations_url": "https://api.github.com/users/adamduren/orgs",
"repos_url": "https://api.github.com/users/adamduren/repos",
"events_url": "https://api.github.com/users/adamduren/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamduren/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-31T10:04:27
| 2025-02-01T10:39:48
| 2025-01-31T20:51:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```shell
langgraph new path/to/your/app --template react-agent-python
langgraph dev
```
### Error Message and Stack Trace (if applicable)
<img width="1643" alt="Image" src="https://github.com/user-attachments/assets/9786d134-4a75-4ece-b94c-48b934d2347f" />
<img width="911" alt="Image" src="https://github.com/user-attachments/assets/9b313210-46bb-4396-b611-a63ea451caf5" />
### Description
The LangGraph dev server does not send the CORS response header required by Chrome's Private Network Access checks, so it cannot be reached from Chrome 132 (possibly 130+, see references).
### Workarounds
- Use ngrok so the server is reached through a public address instead of a private network address
- Visit `chrome://flags` and disable the security for this. (**not recommended!**)
<img width="770" alt="Image" src="https://github.com/user-attachments/assets/61827b94-ff10-4835-a52b-647f829d2228" />
### References
- [Chrome Initial Announcement (2022)](https://developer.chrome.com/blog/private-network-access-preflight/)
- [Chrome Enforcement announcements tentatively for Chrome 130+ (Mar 2024)](https://developer.chrome.com/blog/private-network-access-update-2024-03)
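Per the first reference above, Chrome's Private Network Access preflight sends `Access-Control-Request-Private-Network: true`, and the server must answer with `Access-Control-Allow-Private-Network: true` for the request to proceed. A framework-agnostic sketch of the server-side logic (function name hypothetical):

```python
def with_private_network_header(response_headers, request_headers):
    """Echo the PNA allow header when Chrome's preflight asks for it."""
    if request_headers.get("access-control-request-private-network") == "true":
        response_headers["Access-Control-Allow-Private-Network"] = "true"
    return response_headers
```

In practice this would live in the dev server's CORS middleware, applied to OPTIONS preflight responses.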
### System Info
`python -m langchain_core.sys_info`
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Nov 29 17:22:03 UTC 2024
> Python Version: 3.12.7 (main, Oct 19 2024, 03:00:35) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.11
> langchain_openai: 0.3.1
> langchain_postgres: 0.0.12
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.20
> langgraph_cli: 0.1.70
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.69
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.9
> orjson: 3.10.15
> packaging: 24.2
> pgvector: 0.2.5
> psycopg: 3.2.3
> psycopg-pool: 3.2.4
> pydantic: 2.9.2
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sqlalchemy: 2.0.37
> sse-starlette: 2.1.3
> starlette: 0.45.3
> structlog: 24.4.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3261/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3259
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3259/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3259/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3259/events
|
https://github.com/langchain-ai/langgraph/issues/3259
| 2,822,762,917
|
I_kwDOKFU0lc6oP_Gl
| 3,259
|
Streaming with ReAct Agent + Ollama does not work?
|
{
"login": "ahenkes1",
"id": 62153181,
"node_id": "MDQ6VXNlcjYyMTUzMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/62153181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahenkes1",
"html_url": "https://github.com/ahenkes1",
"followers_url": "https://api.github.com/users/ahenkes1/followers",
"following_url": "https://api.github.com/users/ahenkes1/following{/other_user}",
"gists_url": "https://api.github.com/users/ahenkes1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahenkes1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahenkes1/subscriptions",
"organizations_url": "https://api.github.com/users/ahenkes1/orgs",
"repos_url": "https://api.github.com/users/ahenkes1/repos",
"events_url": "https://api.github.com/users/ahenkes1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahenkes1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-31T08:26:05
| 2025-01-31T17:18:57
| 2025-01-31T17:18:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/3215
<div type='discussions-op-text'>
<sup>Originally posted by **ahenkes1** January 27, 2025</sup>
Hello everyone. I have built a ReAct agent using the Llama3.2 model from Ollama. I used
`langgraph.prebuilt.create_react_agent` and `langchain_ollama.ChatOllama`. I created some tools and added them to the model.
For pretty printing, I am using the following code to astream over the agent:
```python
async for chk in agent.astream(prompt, config=config): # type: ignore
result = list(chk.values())[0]
try:
message = result["messages"][-1]
role = message.response_metadata["message"].role
tools = message.tool_calls
if role == "assistant" and tools == []:
result = "AI: " + message.content
formatted_string = textwrap.fill(result, width=79)
print(formatted_string)
print(f"{79 * '-'}\n")
except Exception:
pass
```
This works, but the behavior is odd: there is no per-token streaming; instead the whole answer is printed at once. I recall reading somewhere that tool calling and streaming are incompatible in LangGraph. Can someone make sense of this?</div>
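For context on the per-token question above: by default `astream` yields one update per graph node, so the whole model answer arrives in a single chunk; per-token output generally requires a token-level stream mode (e.g. `stream_mode="messages"` in recent langgraph versions) and a model configured for streaming. A toy, langgraph-free sketch of the difference (all names here are illustrative):

```python
import asyncio

# Toy illustration (not langgraph itself): a node-level stream emits one
# payload per node, so the full answer arrives in a single chunk, while a
# token-level stream yields each token as it is produced.
TOKENS = ["AI:", " The", " weather", " is", " sunny."]

async def node_level_stream():
    # Mimics node-level streaming: the node runs to completion and its
    # whole message is emitted at once.
    yield {"agent": {"messages": ["".join(TOKENS)]}}

async def token_level_stream():
    # Mimics token-level streaming: each model token is surfaced as soon
    # as it is generated.
    for tok in TOKENS:
        await asyncio.sleep(0)  # yield control, as a real model call would
        yield tok

async def main():
    node_chunks = [chunk async for chunk in node_level_stream()]
    token_chunks = [tok async for tok in token_level_stream()]
    return node_chunks, token_chunks

node_chunks, token_chunks = asyncio.run(main())
print(len(node_chunks), len(token_chunks))
```

The first stream delivers one chunk containing the finished answer; the second delivers five, one per token, which is the behavior the per-token printing loop above would need.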
|
{
"login": "ahenkes1",
"id": 62153181,
"node_id": "MDQ6VXNlcjYyMTUzMTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/62153181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahenkes1",
"html_url": "https://github.com/ahenkes1",
"followers_url": "https://api.github.com/users/ahenkes1/followers",
"following_url": "https://api.github.com/users/ahenkes1/following{/other_user}",
"gists_url": "https://api.github.com/users/ahenkes1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahenkes1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahenkes1/subscriptions",
"organizations_url": "https://api.github.com/users/ahenkes1/orgs",
"repos_url": "https://api.github.com/users/ahenkes1/repos",
"events_url": "https://api.github.com/users/ahenkes1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahenkes1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3259/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3257
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3257/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3257/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3257/events
|
https://github.com/langchain-ai/langgraph/issues/3257
| 2,822,413,196
|
I_kwDOKFU0lc6oOpuM
| 3,257
|
When StreamWriter is used as an argument in a node function in StateGraph.add_node, mypy raises a type error
|
{
"login": "ryowk",
"id": 22524302,
"node_id": "MDQ6VXNlcjIyNTI0MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/22524302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryowk",
"html_url": "https://github.com/ryowk",
"followers_url": "https://api.github.com/users/ryowk/followers",
"following_url": "https://api.github.com/users/ryowk/following{/other_user}",
"gists_url": "https://api.github.com/users/ryowk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryowk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryowk/subscriptions",
"organizations_url": "https://api.github.com/users/ryowk/orgs",
"repos_url": "https://api.github.com/users/ryowk/repos",
"events_url": "https://api.github.com/users/ryowk/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryowk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null |
null
| 2025-01-31T04:26:23
| 2025-02-03T16:25:22
| 2025-02-03T16:25:22
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from langchain_core.messages import AIMessage
from langgraph.graph import START, StateGraph, MessagesState, END
from langgraph.types import StreamWriter
async def my_node(
state: MessagesState,
writer: StreamWriter, # <-- provide StreamWriter to write chunks to be streamed
):
chunks = [
"Four",
"score",
"and",
"seven",
"years",
"ago",
"our",
"fathers",
"...",
]
for chunk in chunks:
# write the chunk to be streamed using stream_mode=custom
writer(chunk)
return {"messages": [AIMessage(content=" ".join(chunks))]}
# Define a new graph
workflow = StateGraph(MessagesState)
workflow.add_node("model", my_node)
workflow.add_edge(START, "model")
workflow.add_edge("model", END)
app = workflow.compile()
```
### Error Message and Stack Trace (if applicable)
```shell
a.py:31: error: Argument 2 to "add_node" of "StateGraph" has incompatible type "Callable[[MessagesState, Callable[[Any], None]], Coroutine[Any, Any, Any]]"; expected "Runnable[Any, Any] | Callable[[Any], Any] | Callable[[Any], Awaitable[Any]] | Callable[[Iterator[Any]], Iterator[Any]] | Callable[[AsyncIterator[Any]], AsyncIterator[Any]] | _RunnableCallableSync[Any, Any] | _RunnableCallableAsync[Any, Any] | _RunnableCallableIterator[Any, Any] | _RunnableCallableAsyncIterator[Any, Any] | Mapping[str, Any]" [arg-type]
Found 1 error in 1 file (checked 1 source file)
```
### Description
* I’m trying to use `StreamWriter` as an argument in a node function
* The example code is from https://langchain-ai.github.io/langgraph/how-tos/streaming-content/#define-the-graph

Expected Behavior
* No type error should be raised by mypy, since `StreamWriter` is a valid node argument

Actual Behavior
* mypy raises a type error about `add_node`
* Without the `StreamWriter` argument, the error does not occur
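As an aside on why the extra argument works at runtime at all: frameworks typically inspect the node's signature and only inject `writer` when the function declares it, which a narrow `Callable[[Any], Any]` type cannot express. A minimal sketch of that injection pattern (the `call_node` helper is hypothetical, not langgraph's actual code):

```python
import inspect
from typing import Any, Callable

# Hypothetical sketch: a framework can inspect a node's signature and
# inject a `writer` argument only when the function declares one. This is
# why the extra parameter works at runtime even if a narrow
# Callable[[Any], Any] annotation makes mypy reject it.
def call_node(func: Callable[..., Any], state: dict, writer: Callable[[Any], None]) -> Any:
    params = inspect.signature(func).parameters
    if "writer" in params:
        return func(state, writer=writer)
    return func(state)

chunks: list[str] = []

def plain_node(state):
    # Declares no writer, so none is injected.
    return {"messages": ["no streaming"]}

def streaming_node(state, writer):
    # Declares a writer, so the framework passes one in.
    writer("hello")
    writer("world")
    return {"messages": ["hello world"]}

call_node(plain_node, {}, chunks.append)
call_node(streaming_node, {}, chunks.append)
print(chunks)  # ['hello', 'world']
```

Signature-based injection like this is invisible to a plain `Callable` type, which is roughly why mypy flags the two-argument node until the type stubs account for it.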
### System Info
langgraph: 0.2.69
mypy: 1.14.1
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
> Python Version: 3.12.8 (main, Jan 31 2025, 13:06:09) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.33
> langsmith: 0.3.3
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: Installed. No version info available.
> tenacity: 9.0.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3257/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3257/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3249
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3249/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3249/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3249/events
|
https://github.com/langchain-ai/langgraph/issues/3249
| 2,820,699,608
|
I_kwDOKFU0lc6oIHXY
| 3,249
|
Node with multiple incoming edges not executed correctly when combined with conditional edges
|
{
"login": "snopoke",
"id": 249606,
"node_id": "MDQ6VXNlcjI0OTYwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/249606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snopoke",
"html_url": "https://github.com/snopoke",
"followers_url": "https://api.github.com/users/snopoke/followers",
"following_url": "https://api.github.com/users/snopoke/following{/other_user}",
"gists_url": "https://api.github.com/users/snopoke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snopoke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snopoke/subscriptions",
"organizations_url": "https://api.github.com/users/snopoke/orgs",
"repos_url": "https://api.github.com/users/snopoke/repos",
"events_url": "https://api.github.com/users/snopoke/events{/privacy}",
"received_events_url": "https://api.github.com/users/snopoke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-01-30T12:26:07
| 2025-02-06T20:37:26
| 2025-02-06T20:37:24
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import operator
from typing import Annotated, Any, Sequence, TypedDict

from langgraph.graph import StateGraph, START, END


# State and ReturnNodeValue reconstructed from the langgraph branching
# how-to that this snippet is based on, so the example runs as-is.
class State(TypedDict):
    aggregate: Annotated[list, operator.add]
    which: str


class ReturnNodeValue:
    def __init__(self, node_secret: str):
        self._value = node_secret

    def __call__(self, state: State) -> Any:
        print(f"Adding {self._value} to {state['aggregate']}")
        return {"aggregate": [self._value]}


builder = StateGraph(State)
builder.add_node("a", ReturnNodeValue("I'm A"))
builder.add_edge(START, "a")
builder.add_node("b", ReturnNodeValue("I'm B"))
builder.add_node("b2a", ReturnNodeValue("I'm B2a"))
builder.add_node("b2b", ReturnNodeValue("I'm B2b"))
builder.add_node("c", ReturnNodeValue("I'm C"))
builder.add_edge("a", "b")
builder.add_edge("a", "c")
builder.add_edge(["b2a", "b2b"], "c")
builder.add_edge("c", END)
def router(state: State) -> Sequence[str]:
return state["which"]
builder.add_conditional_edges(
"b",
router,
["b2a", "b2b"],
)
builder.set_entry_point("a")
builder.set_finish_point("c")
graph = builder.compile()
print(graph.invoke({"aggregate": [], "which": "b2a"}))
>> Adding I'm A to []
>> Adding I'm B to ["I'm A"]
>> Adding I'm C to ["I'm A"]
>> Adding I'm B2a to ["I'm A", "I'm B", "I'm C"]
>> {'aggregate': ["I'm A", "I'm B", "I'm C", "I'm B2a"], 'which': 'b2a'}
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
In the example code above (diagram below), I would expect node `c` to execute with inputs from `a`, `b2a` and `b2b`, but instead it executes only with the input from `a` (note that `b2a` and `b2b` are reached via conditional edges from `b` and feed into `c` through a join edge).

Furthermore, making node `C` dependent on all three incoming nodes results in `C` not being executed at all:
```python
builder.add_edge(["a", "b2a", "b2b"], "c")
```
New output:
```
Adding I'm A to []
Adding I'm B to ["I'm A"]
Adding I'm B2a to ["I'm A", "I'm B"]
{'aggregate': ["I'm A", "I'm B", "I'm B2a"], 'which': 'b2a'}
```
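A toy model of the first reported behavior (not langgraph's actual engine; node names follow the example): each plain edge is an independent trigger, so `add_edge("a", "c")` fires `c` as soon as `a` finishes, while the `["b2a", "b2b"] -> "c"` join edge never fires because the router only sends the run down `b2a`:

```python
# Toy superstep scheduler illustrating independent edge triggers vs. a
# join edge that waits for ALL of its sources.
edges = {"a": ["b", "c"], "b": ["b2a"], "b2a": []}  # router chose "b2a"
join = {"c": {"b2a", "b2b"}}  # the ["b2a", "b2b"] -> "c" join edge

done, order, frontier = set(), [], ["a"]
while frontier:
    nxt = []
    for node in frontier:
        order.append(node)
        done.add(node)
        # Every plain edge triggers its target in the next step.
        for target in edges.get(node, []):
            nxt.append(target)
    # A join edge only fires once ALL of its sources have completed;
    # "b2b" never runs here, so the join never fires.
    for target, sources in join.items():
        if sources <= done and target not in order and target not in nxt:
            nxt.append(target)
    frontier = nxt

print(order)  # ['a', 'b', 'c', 'b2a'] -- "c" runs before the branch finishes
```

Under this model, `c` is scheduled in the step right after `a` via the plain `a -> c` edge, matching the output in the report where `I'm C` is added before `I'm B2a`.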
### System Info
System Information
------------------
> OS: Linux
> OS Version: #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 5 13:09:44 UTC 2024
> Python Version: 3.11.10 (main, Oct 8 2024, 00:19:50) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.25
> langchain: 0.3.9
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_anthropic: 0.2.3
> langchain_openai: 0.2.3
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.11
> anthropic: 0.37.1
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.24.1
> jsonpatch: 1.33
> numpy: 1.26.2
> openai: 1.52.1
> orjson: 3.10.0
> packaging: 23.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.23
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3249/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3246
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3246/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3246/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3246/events
|
https://github.com/langchain-ai/langgraph/issues/3246
| 2,819,649,878
|
I_kwDOKFU0lc6oEHFW
| 3,246
|
Error Triggered When Resuming Graph with Interrupt via `astream` Method Using `Command` Object and Config Parameters
|
{
"login": "lanhui100",
"id": 150805673,
"node_id": "U_kgDOCP0cqQ",
"avatar_url": "https://avatars.githubusercontent.com/u/150805673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lanhui100",
"html_url": "https://github.com/lanhui100",
"followers_url": "https://api.github.com/users/lanhui100/followers",
"following_url": "https://api.github.com/users/lanhui100/following{/other_user}",
"gists_url": "https://api.github.com/users/lanhui100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lanhui100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lanhui100/subscriptions",
"organizations_url": "https://api.github.com/users/lanhui100/orgs",
"repos_url": "https://api.github.com/users/lanhui100/repos",
"events_url": "https://api.github.com/users/lanhui100/events{/privacy}",
"received_events_url": "https://api.github.com/users/lanhui100/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-01-30T00:45:12
| 2025-01-30T09:19:26
| 2025-01-30T00:52:35
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
# Set up the state
from langgraph.graph import MessagesState, START
# Set up the tool
# We will have one real tool - a search tool
# We'll also have one "fake" tool - a "ask_human" tool
# Here we define any ACTUAL tools
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode
from langgraph.types import interrupt, Command
@tool
def search_tool(query: str):
"""Call to surf the web."""
# This is a placeholder for the actual implementation
# Don't let the LLM know this though 😊
return f"I looked up: {query}. Result: It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
tools = [search_tool]
tool_node = ToolNode(tools)
# Set up the model
from langchain_openai import ChatOpenAI
import os
from dotenv import load_dotenv
load_dotenv()
model="deepseek-v3"
model="deepseek-r1"
model="qwen-turbo"
model="qwen-max-0125"
api_key = os.getenv("QWEN_API_KEY")
base_url = os.getenv("QWEN_BASE_URL")
model = ChatOpenAI(model=model, api_key=api_key, base_url=base_url)
from pydantic import BaseModel
# We are going "bind" all tools to the model
# We have the ACTUAL tools from above, but we also need a mock tool to ask a human
# Since `bind_tools` takes in tools but also just tool definitions,
# We can define a tool definition for `ask_human`
class AskHuman(BaseModel):
"""Ask the human a question"""
question: str
model = model.bind_tools(tools + [AskHuman])
# Define nodes and conditional edges
# Define the function that determines whether to continue or not
def should_continue(state):
messages = state["messages"]
last_message = messages[-1]
# If there is no function call, then we finish
if not last_message.tool_calls:
return END
# If tool call is asking Human, we return that node
# You could also add logic here to let some system know that there's something that requires Human input
# For example, send a slack message, etc
elif last_message.tool_calls[0]["name"] == "AskHuman":
return "ask_human"
# Otherwise if there is, we continue
else:
return "action"
# Define the function that calls the model
async def call_model(state):
messages = state["messages"]
response = await model.ainvoke(messages)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
# We define a fake node to ask the human
def ask_human(state):
tool_call_id = state["messages"][-1].tool_calls[0]["id"]
location = interrupt("Please provide your location:")
tool_message = [{"tool_call_id": tool_call_id, "type": "tool", "content": location}]
return {"messages": tool_message}
# Build the graph
from langgraph.graph import END, StateGraph
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the three nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
workflow.add_node("ask_human", ask_human)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
# First, we define the start node. We use `agent`.
# This means these are the edges taken after the `agent` node is called.
"agent",
# Next, we pass in the function that will determine which node is called next.
should_continue,
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")
# After we get back the human response, we go back to the agent
workflow.add_edge("ask_human", "agent")
# Set up memory
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
memory = MemorySaver()
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
# We add a breakpoint BEFORE the `ask_human` node so it never executes
app = workflow.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "2"}}
async for event in app.astream(
{
"messages": [
(
"user",
"Use the search tool to ask the user where they are, then look up the weather there",
)
]
},
config,
stream_mode="values",
):
event["messages"][-1].pretty_print()
async for event in app.astream_log(Command(resume="san francisco"), config, stream_mode="values"):
print(event.model_dump_json(indent=2))
```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 2
1 config = {"configurable": {"thread_id": "2"}}
----> 2 async for event in app.astream(
3 {
4 "messages": [
5 (
6 "user",
7 "Use the search tool to ask the user where they are, then look up the weather there",
8 )
9 ]
10 },
11 config,
12 stream_mode="values",
13 ):
14 event["messages"][-1].pretty_print()
File ~/miniforge3/lib/python3.10/site-packages/langgraph/pregel/__init__.py:1899, in Pregel.astream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1893 # Similarly to Bulk Synchronous Parallel / Pregel model
1894 # computation proceeds in steps, while there are channel updates
1895 # channel updates from step N are only visible in step N+1
1896 # channels are guaranteed to be immutable for the duration of the step,
1897 # with channel updates applied only at the transition between steps
1898 while loop.tick(input_keys=self.input_channels):
-> 1899 async for _ in runner.atick(
1900 loop.tasks.values(),
1901 timeout=self.step_timeout,
1902 retry_policy=self.retry_policy,
1903 get_waiter=get_waiter,
1904 ):
1905 # emit output
1906 for o in output():
1907 yield o
File ~/miniforge3/lib/python3.10/site-packages/langgraph/pregel/runner.py:444, in PregelRunner.atick(self, tasks, reraise, timeout, retry_policy, get_waiter)
442 t = tasks[0]
443 try:
--> 444 await arun_with_retry(
445 t,
446 retry_policy,
447 stream=self.use_astream,
448 configurable={
449 CONFIG_KEY_SEND: partial(writer, t),
450 CONFIG_KEY_CALL: partial(call, t),
451 },
452 )
453 self.commit(t, None)
454 except Exception as exc:
File ~/miniforge3/lib/python3.10/site-packages/langgraph/pregel/retry.py:128, in arun_with_retry(task, retry_policy, stream, configurable)
126 break
127 else:
--> 128 return await task.proc.ainvoke(task.input, config)
129 except ParentCommand as exc:
130 ns: str = config[CONF][CONFIG_KEY_CHECKPOINT_NS]
File ~/miniforge3/lib/python3.10/site-packages/langgraph/utils/runnable.py:499, in RunnableSeq.ainvoke(self, input, config, **kwargs)
495 config = patch_config(
496 config, callbacks=run_manager.get_child(f"seq:step:{i + 1}")
497 )
498 if i == 0:
--> 499 input = await step.ainvoke(input, config, **kwargs)
500 else:
501 input = await step.ainvoke(input, config)
File ~/miniforge3/lib/python3.10/site-packages/langgraph/utils/runnable.py:289, in RunnableCallable.ainvoke(self, input, config, **kwargs)
287 ret = await asyncio.create_task(coro, context=context)
288 else:
--> 289 ret = await self.afunc(*args, **kwargs)
290 if isinstance(ret, Runnable) and self.recurse:
291 return await ret.ainvoke(input, config)
File ~/miniforge3/lib/python3.10/site-packages/langchain_core/runnables/config.py:588, in run_in_executor(executor_or_config, func, *args, **kwargs)
584 raise RuntimeError from exc
586 if executor_or_config is None or isinstance(executor_or_config, dict):
587 # Use default executor with context copied from current context
--> 588 return await asyncio.get_running_loop().run_in_executor(
589 None,
590 cast(Callable[..., T], partial(copy_context().run, wrapper)),
591 )
593 return await asyncio.get_running_loop().run_in_executor(executor_or_config, wrapper)
File ~/miniforge3/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File ~/miniforge3/lib/python3.10/site-packages/langchain_core/runnables/config.py:579, in run_in_executor..wrapper()
577 def wrapper() -> T:
578 try:
--> 579 return func(*args, **kwargs)
580 except StopIteration as exc:
581 # StopIteration can't be set on an asyncio.Future
582 # it raises a TypeError and leaves the Future pending forever
583 # so we need to convert it to a RuntimeError
584 raise RuntimeError from exc
Cell In[12], line 87, in ask_human(state)
85 def ask_human(state):
86 tool_call_id = state["messages"][-1].tool_calls[0]["id"]
---> 87 location = interrupt("Please provide your location:")
88 tool_message = [{"tool_call_id": tool_call_id, "type": "tool", "content": location}]
89 return {"messages": tool_message}
File ~/miniforge3/lib/python3.10/site-packages/langgraph/types.py:471, in interrupt(value)
468 from langgraph.errors import GraphInterrupt
469 from langgraph.utils.config import get_config
--> 471 conf = get_config()["configurable"]
472 # track interrupt index
473 scratchpad: PregelScratchpad = conf[CONFIG_KEY_SCRATCHPAD]
File ~/miniforge3/lib/python3.10/site-packages/langgraph/utils/config.py:337, in get_config()
335 return var_config
336 else:
--> 337 raise RuntimeError("Called get_config outside of a runnable context")
RuntimeError: Called get_config outside of a runnable context
```
### Description
Resuming the interrupted graph via `astream` raises the `RuntimeError` shown in the traceback above.
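For what it's worth, the shape of this error can be reproduced with plain `contextvars`: the runtime config is presumably stored in a `ContextVar` while a node runs, and reading it outside that context raises. A toy sketch of that assumed mechanism (not langgraph's actual implementation):

```python
from contextvars import ContextVar, copy_context
from typing import Optional

# Toy illustration of config stored in a ContextVar during node execution.
var_config: ContextVar[Optional[dict]] = ContextVar("config", default=None)

def get_config() -> dict:
    cfg = var_config.get()
    if cfg is None:
        # Same shape as the error in the traceback above.
        raise RuntimeError("Called get_config outside of a runnable context")
    return cfg

def run_node(node, config):
    def wrapper():
        var_config.set(config)
        return node()
    # copy_context().run keeps the ContextVar change scoped to this call.
    return copy_context().run(wrapper)

def node():
    return get_config()["configurable"]["thread_id"]

inside = run_node(node, {"configurable": {"thread_id": "2"}})
try:
    node()  # outside the context: the ContextVar is unset
    outside_error = None
except RuntimeError as e:
    outside_error = str(e)

print(inside, outside_error)
```

If the executor that runs a sync node does not carry this context over (e.g. across threads), the reader sees an unset variable and fails exactly like this, which is consistent with the `run_in_executor` frames in the traceback.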
### System Info
System Information
------------------
> OS: Linux
> OS Version: #183-Ubuntu SMP Mon Oct 2 11:28:33 UTC 2023
> Python Version: 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.3.32
> langchain: 0.3.16
> langsmith: 0.3.2
> langchain_ollama: 0.2.2
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> ollama: 0.4.7
> openai: 1.60.2
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3246/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3206
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3206/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3206/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3206/events
|
https://github.com/langchain-ai/langgraph/issues/3206
| 2,811,294,878
|
I_kwDOKFU0lc6nkPSe
| 3,206
|
Subgraph checkpointer=True causes subgraph to be skipped
|
{
"login": "shengbo-ma",
"id": 44483080,
"node_id": "MDQ6VXNlcjQ0NDgzMDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/44483080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shengbo-ma",
"html_url": "https://github.com/shengbo-ma",
"followers_url": "https://api.github.com/users/shengbo-ma/followers",
"following_url": "https://api.github.com/users/shengbo-ma/following{/other_user}",
"gists_url": "https://api.github.com/users/shengbo-ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shengbo-ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shengbo-ma/subscriptions",
"organizations_url": "https://api.github.com/users/shengbo-ma/orgs",
"repos_url": "https://api.github.com/users/shengbo-ma/repos",
"events_url": "https://api.github.com/users/shengbo-ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/shengbo-ma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-26T03:21:43
| 2025-02-12T10:39:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from typing import Literal
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.graph.state import Command
from langgraph.types import interrupt
from rich import get_console
from typing_extensions import TypedDict
###############
# Subgraph
###############
class SubGraphState(TypedDict, total=False):
parent_counter: int
sub_counter: int
def subgraph_accumulator(state: SubGraphState) -> SubGraphState:
get_console().print("---subgraph counter node---")
get_console().print(f"{state = }")
# ask for human approval
human_feedback = interrupt("get human feedback")
print(f"{human_feedback = }")
# continue counting
sub_counter = state["sub_counter"] + 1 if "sub_counter" in state else 1
return {"sub_counter": sub_counter}
sub_graph = (
StateGraph(SubGraphState)
.add_node(subgraph_accumulator)
.add_edge(START, subgraph_accumulator.__name__)
.add_edge(subgraph_accumulator.__name__, END)
.compile(
checkpointer=True, # BUG: This causes an issue that subgraph nodes are not executed at all after first interruption
)
)
sub_graph.name = "sub"
###############
# Parent Graph
###############
MAX_ITERATION = 3
class ParentGraphState(TypedDict):
parent_counter: int
def parent_graph_accumulator(
state: ParentGraphState,
) -> Command[Literal["sub", "__end__"]]:
print("---parent counter node---")
get_console().print(f"{state = }")
parent_counter = state["parent_counter"] + 1 if "parent_counter" in state else 0
# goto end when max iteration reaches
goto = sub_graph.get_name() if parent_counter < MAX_ITERATION else END
get_console().print(f"going to node {goto}")
return Command(
update={
"parent_counter": parent_counter,
},
goto=goto,
)
parent_agent = (
StateGraph(ParentGraphState)
.add_node(parent_graph_accumulator)
.add_node(sub_graph)
.add_edge(START, parent_graph_accumulator.__name__)
.add_edge(sub_graph.get_name(), parent_graph_accumulator.__name__)
.compile(checkpointer=MemorySaver())
)
# visualize graph
mermaid_graph = parent_agent.get_graph(xray=True).draw_mermaid()
print(mermaid_graph)
###############
# Conversation
###############
config: RunnableConfig = {"configurable": {"thread_id": "42"}, "recursion_limit": MAX_ITERATION+1}
inputs = [
ParentGraphState(parent_counter=0),
Command(resume="human feedback 1"),
Command(resume="human feedback 2"),
]
for input_ in inputs:
print(f"{input_ = }")
for event in parent_agent.stream(
# resume the conversation
input_,
config,
stream_mode="updates",
subgraphs=True,
):
print("Streaming event ...")
print(event)
```
### Error Message and Stack Trace (if applicable)
```shell
input_ = {'parent_counter': 0}
---parent counter node---
state = {'parent_counter': 0}
going to node sub
Streaming event ...
((), {'parent_graph_accumulator': {'parent_counter': 1}})
---subgraph counter node---
state = {'parent_counter': 1}
Streaming event ...
((), {'__interrupt__': (Interrupt(value='get human feedback', resumable=True, ns=['sub', 'subgraph_accumulator:f187d019-da4b-d432-bcd2-cea142aa7e35'], when='during'),)})
input_ = Command(resume='human feedback 1')
---subgraph counter node---
state = {'parent_counter': 1}
human_feedback = 'human feedback 1'
Streaming event ...
(('sub',), {'subgraph_accumulator': {'sub_counter': 1}})
Streaming event ...
((), {'sub': {'parent_counter': 1}})
---parent counter node---
state = {'parent_counter': 1}
going to node sub
Streaming event ...
((), {'parent_graph_accumulator': {'parent_counter': 2}})
Streaming event ...
((), {'sub': {'parent_counter': 1}}) <------- BUG: should be subgraph execution like (('sub',), {'subgraph_accumulator': {...}})
---parent counter node---
state = {'parent_counter': 1}
going to node sub
Streaming event ...
((), {'parent_graph_accumulator': {'parent_counter': 2}})
Streaming event ...
((), {'sub': {'parent_counter': 1}})
---parent counter node---
state = {'parent_counter': 1}
going to node sub
Streaming event ...
((), {'parent_graph_accumulator': {'parent_counter': 2}})
Traceback (most recent call last):
File "/home/linux/arcgis-ai-assistants/python/arcgis-assistant/.tmp/subgraph_state_lose/loop_subgraph_with_interrupt.py", line 99, in <module>
for event in parent_agent.stream(
File "/home/linux/miniconda3/envs/test/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1690, in stream
raise GraphRecursionError(msg)
langgraph.errors.GraphRecursionError: Recursion limit of 4 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT
```
### Description
I encountered an issue when building a multi-agent graph for [multi-turn conversations](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/#multi-turn-conversation), where a subgraph has a human feedback node. It interrupts and takes human feedback.
Here is an example graph to reproduce the issue
- A parent graph has a loop, calling a sub graph until the parent counter reaches a pre-defined limit.
- The subgraph node interrupts and takes human feedback.
- The subgraph should remember its state from the previous run (`checkpointer=True`).
Expected Behavior
- the graph should interrupt twice, and resume with human inputs
- the subgraph should persist its state on each run (since `checkpointer=True`)
Actual Behavior
- The first interrupt and resume work as expected
- The second interrupt never happens. The parent graph never executes the subgraph counter node after resuming the first interrupt. The subgraph node outputs the same parent counter (=1) repeatedly, leading to a recursion limit error since the parent counter does not increase.
Observation
If `checkpointer=True` is removed, the graph executes as expected, i.e. the parent counter increases correctly. No bug. (In this case, of course, the subgraph state from the previous run is not persisted.)
It seems that assigning `checkpointer=True` to a subgraph and calling `interrupt` conflict in some way.
```mermaid
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
__start__([<p>__start__</p>]):::first
parent_graph_accumulator(parent_graph_accumulator)
sub(sub)
__end__([<p>__end__</p>]):::last
__start__ --> parent_graph_accumulator;
sub --> parent_graph_accumulator;
parent_graph_accumulator -.-> sub;
parent_graph_accumulator -.-> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```
LangGraph Version
`0.2.67`
### System Info
System Information
------------------
> OS: Linux
> OS Version: #135~20.04.1-Ubuntu SMP Mon Oct 7 13:56:22 UTC 2024
> Python Version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.31
> langsmith: 0.3.1
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.6
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> tenacity: 9.0.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3206/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3206/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3205
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3205/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3205/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3205/events
|
https://github.com/langchain-ai/langgraph/issues/3205
| 2,811,272,690
|
I_kwDOKFU0lc6nkJ3y
| 3,205
|
How to use langgraph in production with different storage state
|
{
"login": "surendransuri",
"id": 94795305,
"node_id": "U_kgDOBaZ2KQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94795305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surendransuri",
"html_url": "https://github.com/surendransuri",
"followers_url": "https://api.github.com/users/surendransuri/followers",
"following_url": "https://api.github.com/users/surendransuri/following{/other_user}",
"gists_url": "https://api.github.com/users/surendransuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surendransuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surendransuri/subscriptions",
"organizations_url": "https://api.github.com/users/surendransuri/orgs",
"repos_url": "https://api.github.com/users/surendransuri/repos",
"events_url": "https://api.github.com/users/surendransuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/surendransuri/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| true
| null |
[] | null |
| 2025-01-26T02:07:55
| 2025-01-27T14:23:26
| 2025-01-27T14:23:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from langgraph.state import StateGraph
Chain=StateGraph(list)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
Hello, I am trying to use LangGraph in a production application with my Azure resources. How do I change the message memory state to Cosmos DB? I already have functions written to insert, fetch, and update items in Cosmos DB. How do I implement this with LangGraph? Please guide me on this.
### System Info
Python langgraph
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3205/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3199
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3199/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3199/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3199/events
|
https://github.com/langchain-ai/langgraph/issues/3199
| 2,810,169,663
|
I_kwDOKFU0lc6nf8k_
| 3,199
|
run does not change config based on assistant_id
|
{
"login": "weinberg",
"id": 1189473,
"node_id": "MDQ6VXNlcjExODk0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1189473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weinberg",
"html_url": "https://github.com/weinberg",
"followers_url": "https://api.github.com/users/weinberg/followers",
"following_url": "https://api.github.com/users/weinberg/following{/other_user}",
"gists_url": "https://api.github.com/users/weinberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weinberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weinberg/subscriptions",
"organizations_url": "https://api.github.com/users/weinberg/orgs",
"repos_url": "https://api.github.com/users/weinberg/repos",
"events_url": "https://api.github.com/users/weinberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/weinberg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
| 2025-01-24T18:48:00
| 2025-01-24T18:48:58
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
"""
Langgraph Server code
"""
from dataclasses import dataclass, field, fields
from typing import Annotated, Dict, List, Optional, Sequence, cast
from langchain_core.messages import AIMessage
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph
from typing import Sequence
from react_agent.state import InputState, State
from react_agent.tools import TOOLS
from react_agent.utils import load_chat_model
from langchain_core.runnables import RunnableConfig, ensure_config
from langchain_core.messages import AnyMessage
from langgraph.graph import add_messages
@dataclass
class State(InputState):
messages: Annotated[Sequence[AnyMessage], add_messages] = field(
default_factory=list
)
@dataclass(kw_only=True)
class Configuration:
prompt: str = field(default="Just reply with MISSING_PROMPT")
@classmethod
def from_runnable_config(
cls, config: Optional[RunnableConfig] = None
) -> "Configuration":
"""Create a Configuration instance from a RunnableConfig object."""
config = ensure_config(config)
configurable = config.get("configurable") or {}
_fields = {f.name for f in fields(cls) if f.init}
return cls(**{k: v for k, v in configurable.items() if k in _fields})
async def call_model(
state: State, config: RunnableConfig
) -> Dict[str, List[AIMessage]]:
configuration = Configuration.from_runnable_config(config)
model = load_chat_model("anthropic/claude-3-5-sonnet-20240620")
system_message = configuration.prompt
response = cast(
AIMessage,
await model.ainvoke(
[{"role": "system", "content": system_message}, *state.messages], config
),
)
return {"messages": [response]}
builder = StateGraph(State, input=InputState, config_schema=Configuration)
builder.add_node(call_model)
builder.add_edge("__start__", "call_model")
builder.add_edge("call_model", "__end__")
graph = builder.compile()
```
### Error Message and Stack Trace (if applicable)
From a jupyter notebook:
```shell
%pip install langgraph-sdk
Requirement already satisfied: langgraph-sdk in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (0.1.51)
Requirement already satisfied: httpx>=0.25.2 in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from langgraph-sdk) (0.28.1)
Requirement already satisfied: orjson>=3.10.1 in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from langgraph-sdk) (3.10.15)
Requirement already satisfied: anyio in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from httpx>=0.25.2->langgraph-sdk) (4.8.0)
Requirement already satisfied: certifi in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from httpx>=0.25.2->langgraph-sdk) (2024.12.14)
Requirement already satisfied: httpcore==1.* in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from httpx>=0.25.2->langgraph-sdk) (1.0.7)
Requirement already satisfied: idna in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from httpx>=0.25.2->langgraph-sdk) (3.10)
Requirement already satisfied: h11<0.15,>=0.13 in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from httpcore==1.*->httpx>=0.25.2->langgraph-sdk) (0.14.0)
Requirement already satisfied: sniffio>=1.1 in /Users/josh/dev/langgraph-jupyter/env/lib/python3.13/site-packages (from anyio->httpx>=0.25.2->langgraph-sdk) (1.3.1)
Note: you may need to restart the kernel to use updated packages.
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("LANGSMITH_API_KEY")
from langgraph_sdk import get_client
URL="http://localhost:2024"
client = get_client(url=URL,api_key=os.getenv('LANGSMITH_API_KEY'))
assistant_id = "agent"
thread = await client.threads.create()
assistant_one = await client.assistants.create(
graph_id="agent",
config={"configurable": {"prompt": "just respond with ASSISTANT ONE"}},
assistant_id="11111111-1111-1111-1111-111111111111",
if_exists="do_nothing",
name="asssistant one"
)
assistant_two = await client.assistants.create(
graph_id="agent",
config={"configurable": {"prompt": "just respond with ASSISTANT TWO"}},
assistant_id="22222222-2222-2222-2222-222222222222",
if_exists="do_nothing",
name="asssistant two"
)
thread = await client.threads.create(
metadata={"number":1},
if_exists="raise"
)
thread
{'thread_id': 'fba20607-a345-4c52-99e2-270122b5604c',
'created_at': '2025-01-24T18:39:41.939930+00:00',
'updated_at': '2025-01-24T18:39:41.939934+00:00',
'metadata': {'number': 1},
'status': 'idle',
'config': {},
'values': None}
result_a = await client.runs.wait(
thread_id=thread["thread_id"],
assistant_id=assistant_one["assistant_id"],
input={"messages": [{"role": "user", "content": "hello"}]},
)
result_a["messages"][1]["content"]
'ASSISTANT ONE'
result_b = await client.runs.wait(
thread_id=thread["thread_id"],
assistant_id=assistant_two["assistant_id"],
input={"messages": [{"role": "user", "content": "hello"}]},
)
result_b["messages"][1]["content"]
'ASSISTANT ONE'
```
### Description
I have two assistants. I start a thread and complete a run with the first assistant. Then I submit a run with the second assistant. The config provided to the node is always from the first assistant. Notice in the output from my notebook above that it prints "ASSISTANT ONE" twice even though the second run is using assistant_two which should print "ASSISTANT TWO". If I change the order and call assistant_two first then it will print "ASSISTANT TWO" twice.
I expected that the assistant_id I pass in with the run would cause the configuration for that assistant to be provided to the node.
I notice in LangGraph Studio I can pick an assistant for each run and it will actually work. But looking at the network request I can see Studio is passing in a `config` parameter with all the configuration of the selected assistant including the prompt. I don't want to have to do this and it seems counter-intuitive that I should have to. I expected I could use the assistant_id for this.
Thanks!
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.11.11 (main, Jan 18 2025, 10:11:10) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.30
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_fireworks: 0.2.6
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.16
> langgraph_cli: 0.1.67
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.43.1
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fireworks-ai: 0.15.11
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.64
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sse-starlette: 2.1.3
> starlette: 0.45.2
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3199/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3193
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3193/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3193/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3193/events
|
https://github.com/langchain-ai/langgraph/issues/3193
| 2,809,225,032
|
I_kwDOKFU0lc6ncV9I
| 3,193
|
AsyncConnectionPool AsyncPostgresSaver cannot send pipeline when not in pipeline mode
|
{
"login": "lam-dm",
"id": 70374477,
"node_id": "MDQ6VXNlcjcwMzc0NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/70374477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lam-dm",
"html_url": "https://github.com/lam-dm",
"followers_url": "https://api.github.com/users/lam-dm/followers",
"following_url": "https://api.github.com/users/lam-dm/following{/other_user}",
"gists_url": "https://api.github.com/users/lam-dm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lam-dm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lam-dm/subscriptions",
"organizations_url": "https://api.github.com/users/lam-dm/orgs",
"repos_url": "https://api.github.com/users/lam-dm/repos",
"events_url": "https://api.github.com/users/lam-dm/events{/privacy}",
"received_events_url": "https://api.github.com/users/lam-dm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
| 2025-01-24T11:37:01
| 2025-01-24T14:34:09
| 2025-01-24T14:34:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
#main.py
@asynccontextmanager
async def lifespan(self, app: FastAPI):
try:
async with AsyncConnectionPool(
conninfo=DB_URI, #connection information
kwargs={
"autocommit": True,
"prepare_threshold": 0,
"row_factory": dict_row
},
min_size=5, #minimum size of the pool
max_size=15, #maximum size of the pool
) as pool, pool.connection() as conn:
await AsyncPostgresSaver(conn).setup()
yield {"conn": conn}
except Exception as e:
logger.error(f"Error setting up connection pool: {e}")
raise
finally:
logger.info(f"End of lifespan")
#chat.py
# Use the connection directly from request.state.pool
async with request.state.conn as conn: # Use 'conn' directly
checkpointer = AsyncPostgresSaver(conn)
tools = [
GetInformationTool(
metadata={"information": only_structure}),
ValidateInformationTool(
metadata={"information": only_structure}),
RegisterInformationTool(metadata={
"information": only_structure,
"user_id": user_id,
"family_id": family_id,
"organization_id": organization_id,
"uuid": uuid,
"session_id": session_id
})
]
graph_builder = RegistrationGraphBuilder(model=model, prompt=registration_prompt, system_prompt=system_message,
member_name=member_name, language="Japanese", tools=tools, checkpointer=checkpointer)
graph = graph_builder.build()
config = {"configurable": {"thread_id": thread_id}}
result = await graph.ainvoke(
{
"messages": [HumanMessage(content=user_message)]
},
config
)
content = self.handle_result(result)
return ResponseHandler.ok(message=VALID_RESPONSE_MESSAGE, body=content)
```
### Error Message and Stack Trace (if applicable)
```shell
OperationalError('sending prepared query failed: cannot send pipeline when not in pipeline mode\nanother command is already in progress\ncannot exit pipeline mode while busy\ncannot enter pipeline mode, connection not idle\ncannot enter pipeline mode, connection not idle\ncannot enter pipeline mode, connection not idle\nanother command is already in progress\nanother command is already in progress')Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1836, in astream
async with AsyncPregelLoop(
File "/usr/local/lib/python3.10/site-packages/langgraph/pregel/loop.py", line 988, in __aenter__
saved = await self.checkpointer.aget_tuple(self.checkpoint_config)
File "/usr/local/lib/python3.10/site-packages/langgraph/checkpoint/postgres/aio.py", line 186, in aget_tuple
await cur.execute(
File "/usr/local/lib/python3.10/site-packages/psycopg/cursor_async.py", line 97, in execute
raise ex.with_traceback(None)
psycopg.OperationalError: sending prepared query failed: cannot send pipeline when not in pipeline mode
another command is already in progress
cannot exit pipeline mode while busy
cannot enter pipeline mode, connection not idle
cannot enter pipeline mode, connection not idle
cannot enter pipeline mode, connection not idle
another command is already in progress
another command is already in progress
```
### Description
I'm developing a chatbot with FastAPI and using LangGraph with the prebuilt create_react_agent.
I'm using AsyncConnectionPool and AsyncPostgresSaver.
I have a problem managing the Postgres connection pool with LangGraph; I see this error: **cannot send pipeline when not in pipeline mode**
Could you help me understand what happened and whether I missed something? Many thanks.
I appreciate that.
### System Info
fastapi==0.115.6
langchain==0.3.14
langgraph==0.2.62
langgraph-checkpoint-postgres==2.0.10
psycopg==3.2.3
psycopg-pool==3.2.4
|
{
"login": "lam-dm",
"id": 70374477,
"node_id": "MDQ6VXNlcjcwMzc0NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/70374477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lam-dm",
"html_url": "https://github.com/lam-dm",
"followers_url": "https://api.github.com/users/lam-dm/followers",
"following_url": "https://api.github.com/users/lam-dm/following{/other_user}",
"gists_url": "https://api.github.com/users/lam-dm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lam-dm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lam-dm/subscriptions",
"organizations_url": "https://api.github.com/users/lam-dm/orgs",
"repos_url": "https://api.github.com/users/lam-dm/repos",
"events_url": "https://api.github.com/users/lam-dm/events{/privacy}",
"received_events_url": "https://api.github.com/users/lam-dm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3193/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3175
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3175/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3175/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3175/events
|
https://github.com/langchain-ai/langgraph/issues/3175
| 2,807,976,547
|
I_kwDOKFU0lc6nXlJj
| 3,175
|
Missing Nodes Arrows In LangGraph Platform
|
{
"login": "soufiene-slimi",
"id": 10948245,
"node_id": "MDQ6VXNlcjEwOTQ4MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/10948245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soufiene-slimi",
"html_url": "https://github.com/soufiene-slimi",
"followers_url": "https://api.github.com/users/soufiene-slimi/followers",
"following_url": "https://api.github.com/users/soufiene-slimi/following{/other_user}",
"gists_url": "https://api.github.com/users/soufiene-slimi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soufiene-slimi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soufiene-slimi/subscriptions",
"organizations_url": "https://api.github.com/users/soufiene-slimi/orgs",
"repos_url": "https://api.github.com/users/soufiene-slimi/repos",
"events_url": "https://api.github.com/users/soufiene-slimi/events{/privacy}",
"received_events_url": "https://api.github.com/users/soufiene-slimi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292607,
"node_id": "LA_kwDOKFU0lc8AAAABW2Sv_w",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null |
null
| 2025-01-23T21:41:36
| 2025-02-11T04:31:42
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
class GraphState(TypedDict):
query: str
documents: list
attempts: int
def retrieve_docs(state: GraphState):
documents = []
return {
"documents": documents,
"attempts": state["attempts"] + 1
}
def grade_data(state: GraphState):
return {}
workflow1 = StateGraph(GraphState)
workflow1.add_node("retrieve_docs", retrieve_docs)
workflow1.add_node("grade_data", grade_data)
workflow1.add_edge(START, "retrieve_docs")
workflow1.add_edge("retrieve_docs", "grade_data")
workflow1.add_edge("grade_data", END)
workflow2 = StateGraph(GraphState)
workflow2.add_node("retrieve docs", retrieve_docs)
workflow2.add_node("grade_data", grade_data)
workflow2.add_edge(START, "retrieve docs")
workflow2.add_edge("retrieve docs", "grade_data")
workflow2.add_edge("grade_data", END)
works = workflow1.compile()
fails = workflow2.compile()
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
The arrows are missing in the `fails` graph (the one whose node names contain spaces), as shown in the screenshots below:
# OK
<img width="1272" alt="Image" src="https://github.com/user-attachments/assets/ea5cc8c1-6268-4d88-b2fd-44b24225f392" />
# NOK
<img width="1285" alt="Image" src="https://github.com/user-attachments/assets/e06dcb8b-d551-4211-ab38-8936ef71dd70" />
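Until the renderer handles node names with spaces, one possible workaround is to sanitize node ids before registering them. The helper below is hypothetical (not part of LangGraph); it simply normalizes whitespace to underscores so ids like `"retrieve docs"` become `"retrieve_docs"`:

```python
import re

def safe_node_name(name: str) -> str:
    """Collapse runs of whitespace in a node name into underscores.

    Node ids with spaces appear to break edge rendering in the
    visualizer; ids produced by this helper avoid that.
    """
    return re.sub(r"\s+", "_", name.strip())

# The failing node id from the example above becomes a safe one:
print(safe_node_name("retrieve docs"))  # -> retrieve_docs
```

Applying `safe_node_name` to every id passed to `add_node`/`add_edge` keeps both workflows rendering identically.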
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.3.0: Mon Jan 6 21:59:47 PST 2025; root:xnu-11215.81.4~9/RELEASE_ARM64_T6000
> Python Version: 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 14:46:42) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_ollama: 0.2.2
> langchain_openai: 0.3.1
> langchain_qdrant: 0.2.0
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> fastembed: Installed. No version info available.
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.61
> langgraph-checkpoint: 2.0.9
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> ollama: 0.4.5
> openai: 1.60.0
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> qdrant-client: 1.13.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sse-starlette: 2.1.3
> starlette: 0.45.2
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.3
> zstandard: Installed. No version info available.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3175/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3175/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3168
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3168/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3168/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3168/events
|
https://github.com/langchain-ai/langgraph/issues/3168
| 2,807,391,076
|
I_kwDOKFU0lc6nVWNk
| 3,168
|
Anthropic API error when using prebuilt create_react_agent after model ends turn with empty content message
|
{
"login": "maxmamis",
"id": 3718504,
"node_id": "MDQ6VXNlcjM3MTg1MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3718504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmamis",
"html_url": "https://github.com/maxmamis",
"followers_url": "https://api.github.com/users/maxmamis/followers",
"following_url": "https://api.github.com/users/maxmamis/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmamis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmamis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmamis/subscriptions",
"organizations_url": "https://api.github.com/users/maxmamis/orgs",
"repos_url": "https://api.github.com/users/maxmamis/repos",
"events_url": "https://api.github.com/users/maxmamis/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmamis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-01-23T16:38:22
| 2025-02-06T20:37:39
| 2025-02-06T20:37:39
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
import os
from typing import Literal
from dotenv import load_dotenv
from pprint import pprint
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent
load_dotenv()
@tool("NameThread")
def name_thread_tool(name: str):
"""
Gives this message thread a nice name to show the customer. Keep it to a few words. Quick, witty, specific.
"""
return name
prompt = """
You are a helpful assistant that can chat with a user.
You are communicating via text message, so make sure your responses are appropriate for that format: brief and VERY casual.
NAMING THE THREAD:
Call the NameThread tool to give the thread a name to show the user.
Make sure to ALWAYS give the thread a name after the very first message from the user!
As the user gives you more information and the conversation changes, make sure to update the thread name.
DO NOT continue responding to the user after naming the thread until they respond with a new message.
"""
tools = [name_thread_tool]
model = ChatAnthropic(
model="claude-3-5-sonnet-latest",
temperature=0,
)
memory = MemorySaver()
graph = create_react_agent(model, tools=tools, checkpointer=memory, state_modifier=prompt)
config = {"configurable": {"thread_id": "123"}}
pprint(graph.invoke({"messages": [("user", "Hi, I'm max.")]}, config))
print("\n##########################\n")
pprint(graph.invoke({"messages": [("user", "How are you doing?")]}, config))
```
### Error Message and Stack Trace (if applicable)
```shell
Traceback (most recent call last):
File "/Users/max/code/langchain_issue_demo/test.py", line 55, in <module>
pprint(graph.invoke({"messages": [("user", "How are you doing?")]}, config))
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1961, in invoke
for chunk in self.stream(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1670, in stream
for _ in runner.tick(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/pregel/runner.py", line 231, in tick
run_with_retry(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/pregel/retry.py", line 40, in run_with_retry
return task.proc.invoke(task.input, config)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 462, in invoke
input = step.invoke(input, config, **kwargs)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 218, in invoke
ret = context.run(self.func, *args, **kwargs)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 628, in call_model
response = model_runnable.invoke(state, config)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3022, in invoke
input = context.run(step.invoke, input, config)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 5352, in invoke
return self.bound.invoke(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 790, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 647, in generate
raise e
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 637, in generate
self._generate_with_cache(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 855, in _generate_with_cache
result = self._generate(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/langchain_anthropic/chat_models.py", line 796, in _generate
data = self._client.messages.create(**payload)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/anthropic/resources/messages/messages.py", line 904, in create
return self._post(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1282, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 959, in request
return self._request(
File "/Users/max/code/langchain_issue_demo/.venv/lib/python3.10/site-packages/anthropic/_base_client.py", line 1063, in _request
raise self._make_status_error_from_response(err.response) from None
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.3: all messages must have non-empty content except for the optional final assistant message'}}
```
### Description
When using the prebuilt `create_react_agent` with Anthropic models, the agent sometimes ends its turn with a message whose content is empty. For example, my demo code produces the following output after the first user message:
```
{'messages': [HumanMessage(content="Hi, I'm max.", additional_kwargs={}, response_metadata={}, id='e869f686-d31a-4716-beed-c00608fd0469'),
AIMessage(content=[{'text': 'Hey Max! Let me give this chat a name to start us off.', 'type': 'text'}, {'id': 'toolu_01PY3e9tqPGcCs7tfk5qV1C2', 'input': {'name': 'Meeting Max! 👋'}, 'name': 'NameThread', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_011EGXw5nitFBxkmLN4NymqV', 'model': 'claude-3-5-sonnet-20241022', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 527, 'output_tokens': 74}}, id='run-23cfb30e-7269-4ae3-a97e-45166f57c44f-0', tool_calls=[{'name': 'NameThread', 'args': {'name': 'Meeting Max! 👋'}, 'id': 'toolu_01PY3e9tqPGcCs7tfk5qV1C2', 'type': 'tool_call'}], usage_metadata={'input_tokens': 527, 'output_tokens': 74, 'total_tokens': 601, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),
ToolMessage(content='Meeting Max! 👋', name='NameThread', id='b37f9855-bc9d-450f-ab90-df49e5b91d55', tool_call_id='toolu_01PY3e9tqPGcCs7tfk5qV1C2'),
AIMessage(content=[], additional_kwargs={}, response_metadata={'id': 'msg_01TeW2BazQH6PS7X7rbb6vEp', 'model': 'claude-3-5-sonnet-20241022', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 618, 'output_tokens': 3}}, id='run-2a669331-b4c3-40fe-bb0e-b20736c45d94-0', usage_metadata={'input_tokens': 618, 'output_tokens': 3, 'total_tokens': 621, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})]}
```
Note the empty content in the final AIMessage. On the next turn, this message is included in the input to the agent, and causes Anthropic to return an error:
```
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages.3: all messages must have non-empty content except for the optional final assistant message'}}
```
I can filter these empty messages out manually, but I would expect this behavior to be included in the prebuilt workflow. Additionally, filtering out empty messages in this example _will_ cause errors when using OpenAI models, so in order to be model-agnostic, I need to carefully special-case my handling.
**Expected Result**: No error, agent continues as normal
**Observed Result**: Error is thrown on any subsequent message sent to this thread
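The manual filtering mentioned above could be sketched as follows. This is a minimal, hypothetical pre-model filter, using plain dicts (`{"role": ..., "content": ...}`) as stand-ins for LangChain message objects, not the actual `AIMessage` API:

```python
def drop_empty_assistant_messages(messages):
    """Return the history minus assistant turns with empty content.

    Anthropic rejects empty-content messages anywhere except the final
    assistant slot, so stripping them before the next model call avoids
    the 400 error. Dict keys here are illustrative stand-ins for
    LangChain message attributes.
    """
    return [
        m for m in messages
        if not (m["role"] == "assistant" and not m["content"])
    ]

# Mirrors the trace above: the empty final assistant turn gets dropped.
history = [
    {"role": "user", "content": "Hi, I'm max."},
    {"role": "assistant", "content": []},  # empty end-of-turn message
    {"role": "user", "content": "How are you doing?"},
]
filtered = drop_empty_assistant_messages(history)
```

As noted, this kind of filter would need to be gated per provider, since dropping those messages unconditionally can break OpenAI-backed runs.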
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:04 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6030
> Python Version: 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.3.1
> langchain_anthropic: 0.3.3
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.44.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.60.0
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> pytest: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: Installed. No version info available.
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3168/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3164
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3164/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3164/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3164/events
|
https://github.com/langchain-ai/langgraph/issues/3164
| 2,806,862,570
|
I_kwDOKFU0lc6nTVLq
| 3,164
|
ERROR: extension "vector" is not available
|
{
"login": "daflood",
"id": 28933836,
"node_id": "MDQ6VXNlcjI4OTMzODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/28933836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daflood",
"html_url": "https://github.com/daflood",
"followers_url": "https://api.github.com/users/daflood/followers",
"following_url": "https://api.github.com/users/daflood/following{/other_user}",
"gists_url": "https://api.github.com/users/daflood/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daflood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daflood/subscriptions",
"organizations_url": "https://api.github.com/users/daflood/orgs",
"repos_url": "https://api.github.com/users/daflood/repos",
"events_url": "https://api.github.com/users/daflood/events{/privacy}",
"received_events_url": "https://api.github.com/users/daflood/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-01-23T13:06:12
| 2025-01-25T03:33:15
| 2025-01-25T03:33:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
langgraph up
```
### Error Message and Stack Trace (if applicable)
```shell
Starting LangGraph API server...
For local dev, requires env var LANGSMITH_API_KEY with access to LangGraph Cloud closed beta.
For production use, requires a license key in env var LANGGRAPH_CLOUD_LICENSE_KEY.
| Starting...#0 building with "desktop-linux" instance using docker driver
#1 [langgraph-api internal] load build definition from Dockerfile
#1 transferring dockerfile: 541B done
#1 DONE 0.0s
#2 [langgraph-api internal] load metadata for docker.io/langchain/langgraph-api:3.11
#2 DONE 0.0s
#3 [langgraph-api internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [langgraph-api internal] load build context
#4 DONE 0.0s
#5 [langgraph-api 1/4] FROM docker.io/langchain/langgraph-api:3.11@sha256:f8a020d5d61fa4b19f276b635542b25faea7d71239a903ffca3e729b34a2aea8
#5 resolve docker.io/langchain/langgraph-api:3.11@sha256:f8a020d5d61fa4b19f276b635542b25faea7d71239a903ffca3e729b34a2aea8
#5 ...
#6 [langgraph-api auth] langchain/langgraph-api:pull token for registry-1.docker.io
#6 DONE 0.0s
#5 [langgraph-api 1/4] FROM docker.io/langchain/langgraph-api:3.11@sha256:f8a020d5d61fa4b19f276b635542b25faea7d71239a903ffca3e729b34a2aea8
#5 resolve docker.io/langchain/langgraph-api:3.11@sha256:f8a020d5d61fa4b19f276b635542b25faea7d71239a903ffca3e729b34a2aea8 0.8s done
#5 DONE 0.8s
#5 [langgraph-api 1/4] FROM docker.io/langchain/langgraph-api:3.11@sha256:f8a020d5d61fa4b19f276b635542b25faea7d71239a903ffca3e729b34a2aea8
#5 DONE 0.9s
#4 [langgraph-api internal] load build context
#4 transferring context: 283.50kB 0.1s done
#4 DONE 0.1s
#7 [langgraph-api 2/4] ADD . /deps/memory-template-fablr
#7 DONE 0.5s
#8 [langgraph-api 3/4] RUN PYTHONDONTWRITEBYTECODE=1 pip install --no-cache-dir -c /api/constraints.txt -e /deps/*
#8 0.240 Obtaining file:///deps/memory-template-fablr
#8 0.242 Installing build dependencies: started
#8 1.178 Installing build dependencies: finished with status 'done'
#8 1.179 Checking if build backend supports build_editable: started
#8 1.238 Checking if build backend supports build_editable: finished with status 'done'
#8 1.238 Getting requirements to build editable: started
#8 1.333 Getting requirements to build editable: finished with status 'done'
#8 1.334 Preparing editable metadata (pyproject.toml): started
#8 1.421 Preparing editable metadata (pyproject.toml): finished with status 'done'
#8 1.433 Requirement already satisfied: langgraph<0.3.0,>=0.2.53 in /usr/local/lib/python3.11/site-packages (from memory-graph==0.0.1) (0.2.66)
#8 1.433 Requirement already satisfied: langgraph-checkpoint>=2.0.8 in /usr/local/lib/python3.11/site-packages (from memory-graph==0.0.1) (2.0.10)
#8 1.499 Collecting langchain-openai>=0.2.1 (from memory-graph==0.0.1)
#8 1.574 Downloading langchain_openai-0.3.1-py3-none-any.whl.metadata (2.7 kB)
#8 1.596 Collecting langchain-anthropic>=0.2.1 (from memory-graph==0.0.1)
#8 1.614 Downloading langchain_anthropic-0.3.3-py3-none-any.whl.metadata (2.3 kB)
#8 1.700 Collecting langchain>=0.3.8 (from memory-graph==0.0.1)
#8 1.719 Downloading langchain-0.3.15-py3-none-any.whl.metadata (7.1 kB)
#8 1.736 Collecting python-dotenv>=1.0.1 (from memory-graph==0.0.1)
#8 1.753 Downloading python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)
#8 1.762 Requirement already satisfied: langgraph-sdk>=0.1.40 in /usr/local/lib/python3.11/site-packages (from memory-graph==0.0.1) (0.1.51)
#8 1.780 Collecting trustcall>=0.0.21 (from memory-graph==0.0.1)
#8 1.799 Downloading trustcall-0.0.28-py3-none-any.whl.metadata (29 kB)
#8 1.812 Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (6.0.2)
#8 1.954 Collecting SQLAlchemy<3,>=1.4 (from langchain>=0.3.8->memory-graph==0.0.1)
#8 1.975 Downloading SQLAlchemy-2.0.37-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (9.6 kB)
#8 2.139 Collecting aiohttp<4.0.0,>=3.8.3 (from langchain>=0.3.8->memory-graph==0.0.1)
#8 2.158 Downloading aiohttp-3.11.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (7.7 kB)
#8 2.160 Requirement already satisfied: langchain-core<0.4.0,>=0.3.31 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (0.3.31)
#8 2.174 Collecting langchain-text-splitters<0.4.0,>=0.3.3 (from langchain>=0.3.8->memory-graph==0.0.1)
#8 2.194 Downloading langchain_text_splitters-0.3.5-py3-none-any.whl.metadata (2.3 kB)
#8 2.195 Requirement already satisfied: langsmith<0.4,>=0.1.17 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (0.3.1)
#8 2.287 Collecting numpy<2,>=1.22.4 (from langchain>=0.3.8->memory-graph==0.0.1)
#8 2.306 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (62 kB)
#8 2.315 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 8.6 MB/s eta 0:00:00
#8 2.321 Requirement already satisfied: pydantic<3.0.0,>=2.7.4 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (2.10.5)
#8 2.321 Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (2.32.3)
#8 2.322 Requirement already satisfied: tenacity!=8.4.0,<10,>=8.1.0 in /usr/local/lib/python3.11/site-packages (from langchain>=0.3.8->memory-graph==0.0.1) (9.0.0)
#8 2.348 Collecting anthropic<1,>=0.41.0 (from langchain-anthropic>=0.2.1->memory-graph==0.0.1)
#8 2.366 Downloading anthropic-0.44.0-py3-none-any.whl.metadata (23 kB)
#8 2.382 Collecting defusedxml<0.8.0,>=0.7.1 (from langchain-anthropic>=0.2.1->memory-graph==0.0.1)
#8 2.402 Downloading defusedxml-0.7.1-py2.py3-none-any.whl.metadata (32 kB)
#8 2.444 Collecting openai<2.0.0,>=1.58.1 (from langchain-openai>=0.2.1->memory-graph==0.0.1)
#8 2.461 Downloading openai-1.60.0-py3-none-any.whl.metadata (27 kB)
#8 2.485 Collecting tiktoken<1,>=0.7 (from langchain-openai>=0.2.1->memory-graph==0.0.1)
#8 2.507 Downloading tiktoken-0.8.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (6.6 kB)
#8 2.514 Requirement already satisfied: msgpack<2.0.0,>=1.1.0 in /usr/local/lib/python3.11/site-packages (from langgraph-checkpoint>=2.0.8->memory-graph==0.0.1) (1.1.0)
#8 2.516 Requirement already satisfied: httpx>=0.25.2 in /usr/local/lib/python3.11/site-packages (from langgraph-sdk>=0.1.40->memory-graph==0.0.1) (0.28.1)
#8 2.517 Requirement already satisfied: orjson>=3.10.1 in /usr/local/lib/python3.11/site-packages (from langgraph-sdk>=0.1.40->memory-graph==0.0.1) (3.10.15)
#8 2.602 Collecting dydantic<1.0.0,>=0.0.7 (from trustcall>=0.0.21->memory-graph==0.0.1)
#8 2.625 Downloading dydantic-0.0.7-py3-none-any.whl.metadata (3.6 kB)
#8 2.626 Requirement already satisfied: jsonpatch<2.0,>=1.33 in /usr/local/lib/python3.11/site-packages (from trustcall>=0.0.21->memory-graph==0.0.1) (1.33)
#8 2.651 Collecting aiohappyeyeballs>=2.3.0 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.669 Downloading aiohappyeyeballs-2.4.4-py3-none-any.whl.metadata (6.1 kB)
#8 2.687 Collecting aiosignal>=1.1.2 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.707 Downloading aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB)
#8 2.727 Collecting attrs>=17.3.0 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.746 Downloading attrs-24.3.0-py3-none-any.whl.metadata (11 kB)
#8 2.785 Collecting frozenlist>=1.1.1 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.804 Downloading frozenlist-1.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (13 kB)
#8 2.883 Collecting multidict<7.0,>=4.5 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.901 Downloading multidict-6.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (5.0 kB)
#8 2.924 Collecting propcache>=0.2.0 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 2.944 Downloading propcache-0.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (9.2 kB)
#8 3.080 Collecting yarl<2.0,>=1.17.0 (from aiohttp<4.0.0,>=3.8.3->langchain>=0.3.8->memory-graph==0.0.1)
#8 3.098 Downloading yarl-1.18.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (69 kB)
#8 3.102 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 69.2/69.2 kB 21.0 MB/s eta 0:00:00
#8 3.108 Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.11/site-packages (from anthropic<1,>=0.41.0->langchain-anthropic>=0.2.1->memory-graph==0.0.1) (4.8.0)
#8 3.126 Collecting distro<2,>=1.7.0 (from anthropic<1,>=0.41.0->langchain-anthropic>=0.2.1->memory-graph==0.0.1)
#8 3.142 Downloading distro-1.9.0-py3-none-any.whl.metadata (6.8 kB)
#8 3.174 Collecting jiter<1,>=0.4.0 (from anthropic<1,>=0.41.0->langchain-anthropic>=0.2.1->memory-graph==0.0.1)
#8 3.192 Downloading jiter-0.8.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (5.2 kB)
#8 3.194 Requirement already satisfied: sniffio in /usr/local/lib/python3.11/site-packages (from anthropic<1,>=0.41.0->langchain-anthropic>=0.2.1->memory-graph==0.0.1) (1.3.1)
#8 3.194 Requirement already satisfied: typing-extensions<5,>=4.10 in /usr/local/lib/python3.11/site-packages (from anthropic<1,>=0.41.0->langchain-anthropic>=0.2.1->memory-graph==0.0.1) (4.12.2)
#8 3.202 Requirement already satisfied: certifi in /usr/local/lib/python3.11/site-packages (from httpx>=0.25.2->langgraph-sdk>=0.1.40->memory-graph==0.0.1) (2024.12.14)
#8 3.203 Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/site-packages (from httpx>=0.25.2->langgraph-sdk>=0.1.40->memory-graph==0.0.1) (1.0.7)
#8 3.203 Requirement already satisfied: idna in /usr/local/lib/python3.11/site-packages (from httpx>=0.25.2->langgraph-sdk>=0.1.40->memory-graph==0.0.1) (3.10)
#8 3.206 Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.11/site-packages (from httpcore==1.*->httpx>=0.25.2->langgraph-sdk>=0.1.40->memory-graph==0.0.1) (0.14.0)
#8 3.208 Requirement already satisfied: jsonpointer>=1.9 in /usr/local/lib/python3.11/site-packages (from jsonpatch<2.0,>=1.33->trustcall>=0.0.21->memory-graph==0.0.1) (3.0.0)
#8 3.211 Requirement already satisfied: packaging<25,>=23.2 in /usr/local/lib/python3.11/site-packages (from langchain-core<0.4.0,>=0.3.31->langchain>=0.3.8->memory-graph==0.0.1) (24.2)
#8 3.219 Requirement already satisfied: requests-toolbelt<2.0.0,>=1.0.0 in /usr/local/lib/python3.11/site-packages (from langsmith<0.4,>=0.1.17->langchain>=0.3.8->memory-graph==0.0.1) (1.0.0)
#8 3.220 Requirement already satisfied: zstandard<0.24.0,>=0.23.0 in /usr/local/lib/python3.11/site-packages (from langsmith<0.4,>=0.1.17->langchain>=0.3.8->memory-graph==0.0.1) (0.23.0)
#8 3.255 Collecting tqdm>4 (from openai<2.0.0,>=1.58.1->langchain-openai>=0.2.1->memory-graph==0.0.1)
#8 3.273 Downloading tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
#8 3.276 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.7/57.7 kB 38.1 MB/s eta 0:00:00
#8 3.284 Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/site-packages (from pydantic<3.0.0,>=2.7.4->langchain>=0.3.8->memory-graph==0.0.1) (0.7.0)
#8 3.284 Requirement already satisfied: pydantic-core==2.27.2 in /usr/local/lib/python3.11/site-packages (from pydantic<3.0.0,>=2.7.4->langchain>=0.3.8->memory-graph==0.0.1) (2.27.2)
#8 3.292 Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/site-packages (from requests<3,>=2->langchain>=0.3.8->memory-graph==0.0.1) (3.4.1)
#8 3.293 Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.11/site-packages (from requests<3,>=2->langchain>=0.3.8->memory-graph==0.0.1) (2.3.0)
#8 3.393 Collecting greenlet!=0.4.17 (from SQLAlchemy<3,>=1.4->langchain>=0.3.8->memory-graph==0.0.1)
#8 3.412 Downloading greenlet-3.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (3.8 kB)
#8 3.547 Collecting regex>=2022.1.18 (from tiktoken<1,>=0.7->langchain-openai>=0.2.1->memory-graph==0.0.1)
#8 3.566 Downloading regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (40 kB)
#8 3.569 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.5/40.5 kB 14.8 MB/s eta 0:00:00
#8 3.669 Downloading langchain-0.3.15-py3-none-any.whl (1.0 MB)
#8 3.715 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 23.2 MB/s eta 0:00:00
#8 3.734 Downloading langchain_anthropic-0.3.3-py3-none-any.whl (22 kB)
#8 3.754 Downloading langchain_openai-0.3.1-py3-none-any.whl (54 kB)
#8 3.755 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.3/54.3 kB 400.6 MB/s eta 0:00:00
#8 3.773 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
#8 3.790 Downloading trustcall-0.0.28-py3-none-any.whl (24 kB)
#8 3.808 Downloading aiohttp-3.11.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.7 MB)
#8 3.841 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 59.9 MB/s eta 0:00:00
#8 3.862 Downloading anthropic-0.44.0-py3-none-any.whl (208 kB)
#8 3.866 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 208.6/208.6 kB 217.8 MB/s eta 0:00:00
#8 3.884 Downloading defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
#8 3.907 Downloading dydantic-0.0.7-py3-none-any.whl (8.6 kB)
#8 3.929 Downloading langchain_text_splitters-0.3.5-py3-none-any.whl (31 kB)
#8 3.955 Downloading numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (14.2 MB)
#8 4.086 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.2/14.2 MB 129.0 MB/s eta 0:00:00
#8 4.106 Downloading openai-1.60.0-py3-none-any.whl (456 kB)
#8 4.111 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 456.1/456.1 kB 162.7 MB/s eta 0:00:00
#8 4.131 Downloading SQLAlchemy-2.0.37-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (3.2 MB)
#8 4.161 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 112.7 MB/s eta 0:00:00
#8 4.182 Downloading tiktoken-0.8.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.1 MB)
#8 4.200 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 76.8 MB/s eta 0:00:00
#8 4.219 Downloading aiohappyeyeballs-2.4.4-py3-none-any.whl (14 kB)
#8 4.239 Downloading aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB)
#8 4.264 Downloading attrs-24.3.0-py3-none-any.whl (63 kB)
#8 4.269 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.4/63.4 kB 93.2 MB/s eta 0:00:00
#8 4.288 Downloading distro-1.9.0-py3-none-any.whl (20 kB)
#8 4.311 Downloading frozenlist-1.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (276 kB)
#8 4.315 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 276.4/276.4 kB 201.5 MB/s eta 0:00:00
#8 4.337 Downloading greenlet-3.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (640 kB)
#8 4.343 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 640.4/640.4 kB 117.7 MB/s eta 0:00:00
#8 4.362 Downloading jiter-0.8.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (335 kB)
#8 4.369 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 335.5/335.5 kB 243.8 MB/s eta 0:00:00
#8 4.387 Downloading multidict-6.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (131 kB)
#8 4.389 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.1/131.1 kB 457.2 MB/s eta 0:00:00
#8 4.407 Downloading propcache-0.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (232 kB)
#8 4.410 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 232.1/232.1 kB 334.9 MB/s eta 0:00:00
#8 4.433 Downloading regex-2024.11.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (792 kB)
#8 4.439 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 792.1/792.1 kB 207.5 MB/s eta 0:00:00
#8 4.458 Downloading tqdm-4.67.1-py3-none-any.whl (78 kB)
#8 4.460 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.5/78.5 kB 250.6 MB/s eta 0:00:00
#8 4.481 Downloading yarl-1.18.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (340 kB)
#8 4.483 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 340.6/340.6 kB 389.2 MB/s eta 0:00:00
#8 4.531 Building wheels for collected packages: memory-graph
#8 4.532 Building editable for memory-graph (pyproject.toml): started
#8 4.641 Building editable for memory-graph (pyproject.toml): finished with status 'done'
#8 4.641 Created wheel for memory-graph: filename=memory_graph-0.0.1-0.editable-py3-none-any.whl size=12058 sha256=231700013530c5ec53d51858097ee84bff1feaa31e80e3767011fdb6bb1da20f
#8 4.641 Stored in directory: /tmp/pip-ephem-wheel-cache-4xloqqw2/wheels/c5/7a/de/6394e8973710799d9374a0b7e96b0800e1979dc4e97f5a93ea
#8 4.643 Successfully built memory-graph
#8 4.769 Installing collected packages: tqdm, regex, python-dotenv, propcache, numpy, multidict, jiter, greenlet, frozenlist, distro, defusedxml, attrs, aiohappyeyeballs, yarl, tiktoken, SQLAlchemy, aiosignal, openai, dydantic, anthropic, aiohttp, langchain-text-splitters, langchain-openai, langchain-anthropic, langchain, trustcall, memory-graph
#8 6.783 Successfully installed SQLAlchemy-2.0.37 aiohappyeyeballs-2.4.4 aiohttp-3.11.11 aiosignal-1.3.2 anthropic-0.44.0 attrs-24.3.0 defusedxml-0.7.1 distro-1.9.0 dydantic-0.0.7 frozenlist-1.5.0 greenlet-3.1.1 jiter-0.8.2 langchain-0.3.15 langchain-anthropic-0.3.3 langchain-openai-0.3.1 langchain-text-splitters-0.3.5 memory-graph-0.0.1 multidict-6.1.0 numpy-1.26.4 openai-1.60.0 propcache-0.2.1 python-dotenv-1.0.1 regex-2024.11.6 tiktoken-0.8.0 tqdm-4.67.1 trustcall-0.0.28 yarl-1.18.3
#8 6.858
#8 6.858 [notice] A new release of pip is available: 24.0 -> 24.3.1
#8 6.858 [notice] To update, run: pip install --upgrade pip
#8 DONE 7.3s
#9 [langgraph-api 4/4] WORKDIR /deps/memory-template-fablr
#9 DONE 0.0s
#10 [langgraph-api] exporting to image
#10 exporting layers
#10 exporting layers 2.2s done
#10 exporting manifest sha256:39ad610bb2738b5579213c7f4df8a3abe393a77b5676a48ff06ea3fda76b8e59
#10 exporting manifest sha256:39ad610bb2738b5579213c7f4df8a3abe393a77b5676a48ff06ea3fda76b8e59 done
#10 exporting config sha256:156a740bbc10092b69f2a576f17bd4aacc4276526d318850caf7e3f4537ad0bd done
#10 exporting attestation manifest sha256:5e28adc6229387a563d26e2124efd582fcb5c23461b4961b7d903f2dcc949e05 done
#10 exporting manifest list sha256:2255a8f540d49db3441f4866eae7bcee9836d959d9c3edfcc3ec3b83e7213a8e done
#10 naming to docker.io/library/memory-template-fablr-langgraph-api:latest done
#10 unpacking to docker.io/library/memory-template-fablr-langgraph-api:latest
#10 unpacking to docker.io/library/memory-template-fablr-langgraph-api:latest 0.7s done
#10 DONE 3.0s
#11 [langgraph-api] resolving provenance for metadata file
#11 DONE 0.0s
Attaching to langgraph-api-1, langgraph-postgres-1, langgraph-redis-1
langgraph-redis-1 | 1:C 23 Jan 2025 12:50:31.408 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
langgraph-redis-1 | 1:C 23 Jan 2025 12:50:31.408 # Redis version=6.2.16, bits=64, commit=00000000, modified=0, pid=1, just started
langgraph-redis-1 | 1:C 23 Jan 2025 12:50:31.408 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.408 * monotonic clock: POSIX clock_gettime
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.408 * Running mode=standalone, port=6379.
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.408 # Server initialized
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 * Loading RDB produced by version 6.2.16
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 * RDB age 568754 seconds
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 * RDB memory usage when created 0.81 Mb
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 # Done loading RDB, keys loaded: 0, keys expired: 0.
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 * DB loaded from disk: 0.000 seconds
langgraph-redis-1 | 1:M 23 Jan 2025 12:50:31.410 * Ready to accept connections
langgraph-postgres-1 |
langgraph-postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
langgraph-postgres-1 |
langgraph-api-1 | 2025-01-23T12:50:37.367224Z [info ] Using auth of type=noop [langgraph_api.auth.middleware] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.368050Z [info ] Started server process [1] [uvicorn.error] api_revision=bbed8a5 api_variant=local color_message='Started server process [\x1b[36m%d\x1b[0m]'
langgraph-api-1 | 2025-01-23T12:50:37.368114Z [info ] Waiting for application startup. [uvicorn.error] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.368281Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.522194Z [info ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.562562Z [warning ] /api/langgraph_api/graph.py:470: LangChainBetaWarning: The function `init_embeddings` is in beta. It is actively being worked on, so the API may change.
langgraph-api-1 | [py.warnings] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.782794Z [info ] Setting up vector index [langgraph_storage.database] api_revision=bbed8a5 api_variant=local store_config={'index': {'dims': 1536, 'embed': OpenAIEmbeddings(client=<openai.resources.embeddings.Embeddings object at 0xffffab5c8390>, async_client=<openai.resources.embeddings.AsyncEmbeddings object at 0xffffab580150>, model='text-embedding-3-small', dimensions=None, deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=SecretStr('**********'), openai_organization=None, allowed_special=None, disallowed_special=None, chunk_size=1000, max_retries=2, request_timeout=None, headers=None, tiktoken_enabled=True, tiktoken_model_name=None, show_progress_bar=False, model_kwargs={}, skip_empty=False, default_headers=None, default_query=None, retry_min_seconds=4, retry_max_seconds=20, http_client=None, http_async_client=None, check_embedding_ctx_length=True)}}
langgraph-api-1 | 2025-01-23T12:50:37.787183Z [error ] Traceback (most recent call last):
langgraph-api-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 693, in lifespan
langgraph-api-1 | async with self.lifespan_context(app) as maybe_state:
langgraph-api-1 | File "/usr/local/lib/python3.11/contextlib.py", line 210, in __aenter__
langgraph-api-1 | return await anext(self.gen)
langgraph-api-1 | ^^^^^^^^^^^^^^^^^^^^^
langgraph-api-1 | File "/api/langgraph_api/lifespan.py", line 29, in lifespan
langgraph-api-1 | File "/usr/local/lib/python3.11/site-packages/langgraph_storage/database.py", line 149, in start_pool
langgraph-api-1 | await migrate_vector_index()
langgraph-api-1 | File "/usr/local/lib/python3.11/site-packages/langgraph_storage/database.py", line 138, in migrate_vector_index
langgraph-api-1 | await lg_store.setup_vector_index(store)
langgraph-api-1 | File "/usr/local/lib/python3.11/site-packages/langgraph_storage/store.py", line 94, in setup_vector_index
langgraph-api-1 | await cur.execute(sql)
langgraph-api-1 | File "/usr/local/lib/python3.11/site-packages/psycopg/cursor_async.py", line 97, in execute
langgraph-api-1 | raise ex.with_traceback(None)
langgraph-api-1 | psycopg.errors.FeatureNotSupported: extension "vector" is not available
langgraph-api-1 | DETAIL: Could not open extension control file "/usr/share/postgresql/16/extension/vector.control": No such file or directory.
langgraph-api-1 | HINT: The extension must first be installed on the system where PostgreSQL is running.
langgraph-api-1 | [uvicorn.error] api_revision=bbed8a5 api_variant=local
langgraph-api-1 | 2025-01-23T12:50:37.787265Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=bbed8a5 api_variant=local
langgraph-api-1 exited with code 3
Service langgraph-api Building
Service langgraph-api Built
Container memory-template-fablr-langgraph-postgres-1 Created
Container memory-template-fablr-langgraph-redis-1 Created
Container memory-template-fablr-langgraph-api-1 Recreate
Container memory-template-fablr-langgraph-api-1 Recreated
langgraph-postgres-1 | 2025-01-23 12:50:31.428 UTC [1] LOG: starting PostgreSQL 16.6 (Debian 16.6-1.pgdg120+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
langgraph-postgres-1 | 2025-01-23 12:50:31.428 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
langgraph-postgres-1 | 2025-01-23 12:50:31.428 UTC [1] LOG: listening on IPv6 address "::", port 5432
langgraph-postgres-1 | 2025-01-23 12:50:31.432 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
langgraph-postgres-1 | 2025-01-23 12:50:31.437 UTC [29] LOG: database system was shut down at 2025-01-16 22:51:17 UTC
langgraph-postgres-1 | 2025-01-23 12:50:31.441 UTC [1] LOG: database system is ready to accept connections
langgraph-postgres-1 | 2025-01-23 12:50:37.786 UTC [41] ERROR: extension "vector" is not available
langgraph-postgres-1 | 2025-01-23 12:50:37.786 UTC [41] DETAIL: Could not open extension control file "/usr/share/postgresql/16/extension/vector.control": No such file or directory.
langgraph-postgres-1 | 2025-01-23 12:50:37.786 UTC [41] HINT: The extension must first be installed on the system where PostgreSQL is running.
langgraph-postgres-1 | 2025-01-23 12:50:37.786 UTC [41] STATEMENT:
langgraph-postgres-1 | CREATE EXTENSION IF NOT EXISTS vector;
langgraph-postgres-1 |
Aborting on container exit...
Container memory-template-fablr-langgraph-api-1 Stopping
Container memory-template-fablr-langgraph-api-1 Stopped
Container memory-template-fablr-langgraph-postgres-1 Stopping
Container memory-template-fablr-langgraph-redis-1 Stopping
Container memory-template-fablr-langgraph-postgres-1 Stopped
Container memory-template-fablr-langgraph-redis-1 Stopped
```
### Description
Trying to load the [memory template](https://github.com/langchain-ai/memory-template) using the LangGraph CLI. This vector error was fixed in version 0.35 of the LangGraph Studio client, but it still seems to be present in the CLI version of LangGraph.
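One possible local workaround (my assumption, not a verified fix): point the generated `langgraph-postgres` service at a Postgres image that already ships pgvector, via a compose override file. The service name below is assumed to match the compose file generated by the CLI.

```yaml
# docker-compose.override.yml (hypothetical override)
services:
  langgraph-postgres:
    image: pgvector/pgvector:pg16   # Postgres 16 with the vector extension preinstalled
```

With the extension control files present in the image, the startup statement `CREATE EXTENSION IF NOT EXISTS vector;` from the log above should then succeed; the existing data volume is preserved across the image swap.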
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:03:40 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6041
> Python Version: 3.11.0 (v3.11.0:deaf509e8f, Oct 24 2022, 14:43:23) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langsmith: 0.2.10
> langchain_anthropic: 0.3.1
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.16
> langgraph_cli: 0.1.67
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.51
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> defusedxml: 0.7.1
> httpx: 0.28.1
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.62
> langgraph-checkpoint: 2.0.9
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> sse-starlette: 2.1.3
> starlette: 0.45.2
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.34.0
> watchfiles: 1.0.4
> zstandard: Installed. No version info available.
|
{
"login": "dqbd",
"id": 1443449,
"node_id": "MDQ6VXNlcjE0NDM0NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1443449?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dqbd",
"html_url": "https://github.com/dqbd",
"followers_url": "https://api.github.com/users/dqbd/followers",
"following_url": "https://api.github.com/users/dqbd/following{/other_user}",
"gists_url": "https://api.github.com/users/dqbd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dqbd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dqbd/subscriptions",
"organizations_url": "https://api.github.com/users/dqbd/orgs",
"repos_url": "https://api.github.com/users/dqbd/repos",
"events_url": "https://api.github.com/users/dqbd/events{/privacy}",
"received_events_url": "https://api.github.com/users/dqbd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3164/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3153
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3153/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3153/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3153/events
|
https://github.com/langchain-ai/langgraph/issues/3153
| 2,805,364,733
|
I_kwDOKFU0lc6nNnf9
| 3,153
|
No command in response
|
{
"login": "LouiseAbela",
"id": 25531248,
"node_id": "MDQ6VXNlcjI1NTMxMjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/25531248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LouiseAbela",
"html_url": "https://github.com/LouiseAbela",
"followers_url": "https://api.github.com/users/LouiseAbela/followers",
"following_url": "https://api.github.com/users/LouiseAbela/following{/other_user}",
"gists_url": "https://api.github.com/users/LouiseAbela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LouiseAbela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LouiseAbela/subscriptions",
"organizations_url": "https://api.github.com/users/LouiseAbela/orgs",
"repos_url": "https://api.github.com/users/LouiseAbela/repos",
"events_url": "https://api.github.com/users/LouiseAbela/events{/privacy}",
"received_events_url": "https://api.github.com/users/LouiseAbela/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
| 2025-01-22T21:18:17
| 2025-01-23T03:16:12
| 2025-01-23T03:16:11
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from langchain_ollama import ChatOllama
#llm = OpenAI(temperature=0)
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from typing import List, Optional, Literal, TypedDict  # TypedDict was missing from the original snippet
from langchain_core.language_models.chat_models import BaseChatModel
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.types import Command
from langchain_core.messages import HumanMessage, trim_messages
class State(MessagesState):
next: str
def make_supervisor_node(llm: BaseChatModel, members: list[str]):
options = members + ["FINISH"]
system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
f" following workers: {members}. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. When finished,"
" respond with FINISH."
)
class Router(TypedDict):
"""Worker to route to next. If no workers needed, route to FINISH."""
next: Literal[*options]
def supervisor_node(state: State) -> Command[Literal[*members, "__end__"]]:
messages = [
{"role": "system", "content": system_prompt},
] + state["messages"]
response = llm.with_structured_output(Router).invoke(messages)
goto = response["next"]
if goto == "FINISH":
goto = END
return Command(goto=goto, update={"next": goto})
return supervisor_node
llm = ChatOllama(
model= "llama3.2",
temperature=0
)
from typing import Annotated, List
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent
tavily_tool = TavilySearchResults(max_results=5)  # search tool used by the agent below; missing from the original snippet
llm = ChatOllama(
model= "llama3.2",
temperature=0
)
@tool
def scrape_webpages(urls: List[str]) -> str:
"""Use requests and bs4 to scrape the provided web pages for detailed information."""
loader = WebBaseLoader(urls)
docs = loader.load()
return "\n\n".join(
[
f'<Document name="{doc.metadata.get("title", "")}">\n{doc.page_content}\n</Document>'
for doc in docs
]
)
search_agent = create_react_agent(llm, tools=[tavily_tool], state_modifier="You are a researcher. DO NOT do any math. And always return the state")
def search_node(state: State) -> Command[Literal["supervisor"]]:
result = search_agent.invoke(state)
print("here")
update = HumanMessage(content=result["messages"][-1].content, name="search")
command = Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="search")
]
},
# We want our workers to ALWAYS "report back" to the supervisor when done
goto="supervisor",
)
return command
web_scraper_agent = create_react_agent(llm, tools=[scrape_webpages])
def web_scraper_node(state: State) -> Command[Literal["supervisor"]]:
result = web_scraper_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="web_scraper")
]
},
# We want our workers to ALWAYS "report back" to the supervisor when done
goto="supervisor",
)
research_supervisor_node = make_supervisor_node(llm, ["search", "web_scraper"])
research_builder = StateGraph(State)
research_builder.add_node("supervisor", research_supervisor_node)
research_builder.add_node("search", search_node)
research_builder.add_node("web_scraper", web_scraper_node)
research_builder.add_edge(START, "supervisor")
research_graph = research_builder.compile()
for s in research_graph.stream(
{"messages": [("user", "when is the water temperature in Copenhagen today?")]},
{"recursion_limit": 100},
):
print(s)
print("---")
```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[118], line 1
----> 1 for s in research_graph.stream(
2 {"messages": [("user", "when is the water temperature in Copenhagen today?")]},
3 {"recursion_limit": 100},
4 ):
5 print(s)
6 print("---")
File ~/.local/pipx/venvs/jupyter/lib/python3.13/site-packages/langgraph/pregel/__init__.py:1670, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1664 # Similarly to Bulk Synchronous Parallel / Pregel model
1665 # computation proceeds in steps, while there are channel updates.
1666 # Channel updates from step N are only visible in step N+1
1667 # channels are guaranteed to be immutable for the duration of the step,
1668 # with channel updates applied only at the transition between steps.
1669 while loop.tick(input_keys=self.input_channels):
-> 1670 for _ in runner.tick(
1671 loop.tasks.values(),
1672 timeout=self.step_timeout,
1673 retry_policy=self.retry_policy,
1674 get_waiter=get_waiter,
1675 ):
1676 # emit output
1677 yield from output()
1678 # emit output
File ~/.local/pipx/venvs/jupyter/lib/python3.13/site-packages/langgraph/pregel/runner.py:231, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
229 t = tasks[0]
230 try:
--> 231 run_with_retry(
232 t,
233 retry_policy,
234 configurable={
235 CONFIG_KEY_SEND: partial(writer, t),
236 CONFIG_KEY_CALL: partial(call, t),
237 },
238 )
239 self.commit(t, None)
240 except Exception as exc:
File ~/.local/pipx/venvs/jupyter/lib/python3.13/site-packages/langgraph/pregel/retry.py:40, in run_with_retry(task, retry_policy, configurable)
38 task.writes.clear()
39 # run the task
---> 40 return task.proc.invoke(task.input, config)
41 except ParentCommand as exc:
42 ns: str = config[CONF][CONFIG_KEY_CHECKPOINT_NS]
File ~/.local/pipx/venvs/jupyter/lib/python3.13/site-packages/langgraph/utils/runnable.py:462, in RunnableSeq.invoke(self, input, config, **kwargs)
458 config = patch_config(
459 config, callbacks=run_manager.get_child(f"seq:step:{i + 1}")
460 )
461 if i == 0:
--> 462 input = step.invoke(input, config, **kwargs)
463 else:
464 input = step.invoke(input, config)
File ~/.local/pipx/venvs/jupyter/lib/python3.13/site-packages/langgraph/utils/runnable.py:226, in RunnableCallable.invoke(self, input, config, **kwargs)
224 else:
225 context.run(_set_config_context, config)
--> 226 ret = context.run(self.func, *args, **kwargs)
227 if isinstance(ret, Runnable) and self.recurse:
228 return ret.invoke(input, config)
Cell In[114], line 34, in make_supervisor_node.<locals>.supervisor_node(state)
29 messages = [
30 {"role": "system", "content": system_prompt},
31 ] + state["messages"]
32 response = llm.with_structured_output(Router).invoke(messages)
---> 34 goto = response["next"]
35 if goto == "FINISH":
36 goto = END
TypeError: 'NoneType' object is not subscriptable
During task with name 'supervisor' and id 'ce47368d-11a1-1e6a-b74b-31245d84ce0f'
```
### Description
I'm trying to run the multi-agent example with Ollama - Llama 3.2 - and it crashes due to an empty structured-output response.
I do get valid responses sometimes, but then it crashes.
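Smaller local models often fail to emit valid structured output, in which case `llm.with_structured_output(Router).invoke(...)` returns `None`, and `response["next"]` raises the `TypeError` above. A minimal defensive sketch (a workaround, not an official API; `safe_next` is a hypothetical helper) that falls back to `FINISH` instead of crashing:

```python
def safe_next(response, members, default="FINISH"):
    """Extract the routing decision from a structured-output response.

    Falls back to `default` when the model returned None or an
    unexpected worker name, instead of raising TypeError.
    """
    if not isinstance(response, dict):
        return default
    goto = response.get("next")
    return goto if goto in members else default


# In supervisor_node, replace `goto = response["next"]` with:
#     goto = safe_next(response, members)
print(safe_next(None, ["search", "web_scraper"]))                # FINISH
print(safe_next({"next": "search"}, ["search", "web_scraper"]))  # search
```

Another option may be to call `with_structured_output(Router, include_raw=True)` and retry on a parsing failure, but the guard above is enough to keep the graph from crashing mid-run.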
### System Info
Package Information
-------------------
> langchain_core: 0.3.30
> langsmith: 0.2.10
> langchain_ollama: 0.2.2
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3153/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3120
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3120/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3120/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3120/events
|
https://github.com/langchain-ai/langgraph/issues/3120
| 2,800,915,870
|
I_kwDOKFU0lc6m8pWe
| 3,120
|
Thread runs sometimes throws HttpException, Graph 'xxx' not found
|
{
"login": "Jackoder",
"id": 3930446,
"node_id": "MDQ6VXNlcjM5MzA0NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3930446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jackoder",
"html_url": "https://github.com/Jackoder",
"followers_url": "https://api.github.com/users/Jackoder/followers",
"following_url": "https://api.github.com/users/Jackoder/following{/other_user}",
"gists_url": "https://api.github.com/users/Jackoder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jackoder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jackoder/subscriptions",
"organizations_url": "https://api.github.com/users/Jackoder/orgs",
"repos_url": "https://api.github.com/users/Jackoder/repos",
"events_url": "https://api.github.com/users/Jackoder/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jackoder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
| 2025-01-21T07:27:56
| 2025-02-11T08:05:31
| 2025-02-11T08:05:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import requests
import time
import logging
import uuid

# Configure logging
logging.basicConfig(
    filename='thread_process.log',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)


def log_request_response(response):
    logging.info(f"Request URL: {response.request.url}")
    logging.info(f"Request Method: {response.request.method}")
    logging.info(f"Request Headers: {response.request.headers}")
    if response.request.body:
        logging.info(f"Request Body: {response.request.body}")
    logging.info(f"Response Status Code: {response.status_code}")
    logging.info(f"Response Headers: {response.headers}")
    logging.info(f"Response Body: {response.text}")


def create_thread():
    # thread_id = str(uuid.uuid4())  # Generate a UUID for thread_id
    url = "http://127.0.0.1:8123/threads"
    payload = {}
    response = requests.post(url, json=payload)
    log_request_response(response)
    response.raise_for_status()
    thread_id = response.json()["thread_id"]
    logging.info(f"Thread created with ID: {thread_id}")
    return thread_id


def run_thread(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}/runs/wait"
    payload = {
        "assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d",
        "input": {
            "knowledge": "Camera",
            "node": "Rule"
        },
        "after_seconds": 1
    }
    response = requests.post(url, json=payload)
    log_request_response(response)
    response.raise_for_status()
    run_id = response.json()["run_id"]
    logging.info(f"Thread run initiated with Run ID: {run_id}")
    return run_id


def get_thread_status(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}/runs"
    response = requests.get(url)
    log_request_response(response)
    response.raise_for_status()
    status = response.json()[0]["status"]
    logging.info(f"Current status: {status}")
    return status


def get_thread_result(thread_id):
    url = f"http://127.0.0.1:8123/threads/{thread_id}"
    response = requests.get(url)
    log_request_response(response)
    response.raise_for_status()
    result = response.json()
    logging.info(f"Thread result: {result}")
    return result


def execute_test():
    try:
        thread_id = create_thread()
        retry_count = 0
        run_id = run_thread(thread_id)
        status = get_thread_status(thread_id)
        while status != "success":
            if status == "error":
                retry_count += 1
                if retry_count > 3:
                    logging.error("Thread run failed after 3 retries.")
                    return False
                logging.warning(f"Thread run encountered an error. Retrying {retry_count}/3...")
                run_id = run_thread(thread_id)  # Retry running the thread
            time.sleep(2)  # back off between status polls instead of busy-waiting
            status = get_thread_status(thread_id)
        result = get_thread_result(thread_id)
        return True
    except Exception as e:
        logging.error(f"Exception occurred: {e}")
        return False


def main():
    success_count = 0
    total_runs = 10
    for i in range(total_runs):
        logging.info(f"Starting test run {i+1}/{total_runs}")
        if execute_test():
            success_count += 1
        logging.info(f"Test run {i+1}/{total_runs} completed")
    success_rate = (success_count / total_runs) * 100
    logging.info(f"Success rate: {success_rate}%")


if __name__ == "__main__":
    main()
```
### Error Message and Stack Trace (if applicable)
```shell
2025-01-21 12:33:33,813 - INFO - Request URL: http://127.0.0.1:8123/threads
2025-01-21 12:33:33,813 - INFO - Request Method: POST
2025-01-21 12:33:33,813 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '2', 'Content-Type': 'application/json'}
2025-01-21 12:33:33,813 - INFO - Request Body: b'{}'
2025-01-21 12:33:33,813 - INFO - Response Status Code: 200
2025-01-21 12:33:33,813 - INFO - Response Headers: {'date': 'Tue, 21 Jan 2025 04:33:33 GMT', 'server': 'uvicorn', 'content-length': '220', 'content-type': 'application/json'}
2025-01-21 12:33:33,813 - INFO - Response Body: {"thread_id":"2a6608cc-ec49-41f6-8926-a249dda7ff97","created_at":"2025-01-21T12:33:34.919186+08:00","updated_at":"2025-01-21T12:33:34.919186+08:00","metadata":{},"status":"idle","config":{},"values":null,"interrupts":{}}
2025-01-21 12:33:33,813 - INFO - Thread created with ID: 2a6608cc-ec49-41f6-8926-a249dda7ff97
2025-01-21 12:33:35,551 - INFO - Request URL: http://127.0.0.1:8123/threads/2a6608cc-ec49-41f6-8926-a249dda7ff97/runs/wait
2025-01-21 12:33:35,551 - INFO - Request Method: POST
2025-01-21 12:33:35,552 - INFO - Request Headers: {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '126', 'Content-Type': 'application/json'}
2025-01-21 12:33:35,552 - INFO - Request Body: b'{"assistant_id": "0676914a-25a7-595a-a130-6f9e1ad87f7d", "input": {"knowledge": "Camera", "node": "Rule"}, "after_seconds": 1}'
2025-01-21 12:33:35,552 - INFO - Response Status Code: 200
2025-01-21 12:33:35,552 - INFO - Response Headers: {'date': 'Tue, 21 Jan 2025 04:33:33 GMT', 'server': 'uvicorn', 'location': '/threads/2a6608cc-ec49-41f6-8926-a249dda7ff97/runs/1efd7b0d-fa56-665c-a80c-a655ff544eea/join', 'content-type': 'application/json', 'transfer-encoding': 'chunked'}
2025-01-21 12:33:35,553 - INFO - Response Body: {"__error__":{"error":"HTTPException","message":"404: Graph 'rule' not found"}}
2025-01-21 12:33:35,553 - ERROR - Exception occurred: 'run_id'
```
### Description
After deploying with Docker, the Runs API intermittently returns a "Graph 'rule' not found" error — roughly 20% of requests fail. Where could the problem be, and how should it be resolved?
https://langchain-ai.github.io/langgraph/how-tos/deploy-self-hosted/#using-docker
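If the intermittent 404 turns out to be a transient startup or graph-registration race, a simple client-side mitigation is to retry the run request with exponential backoff — a minimal sketch, assuming the server eventually registers the graph (the injectable `sleep` parameter just makes the helper easy to test):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def retry_with_backoff(
    fn: Callable[[], T],
    retries: int = 3,
    base_delay: float = 1.0,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Call fn(); on failure, retry up to `retries` times with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise AssertionError("unreachable")


# Example: wrap the run request from the script above so a transient 404 is retried.
# run_id = retry_with_backoff(lambda: run_thread(thread_id))
```

This only papers over the symptom, of course — it does not explain why the graph is sometimes missing after deployment.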
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
> Python Version: 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.3.30
> langchain: 0.3.12
> langchain_community: 0.3.10
> langsmith: 0.1.147
> langchain_anthropic: 0.3.0
> langchain_fireworks: 0.2.5
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.3
> langchainhub: 0.1.21
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
> langserve: 0.3.0
Other Dependencies
------------------
> aiohttp: 3.11.10
> anthropic: 0.40.0
> async-timeout: Installed. No version info available.
> click: 8.1.7
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.6
> fireworks-ai: 0.15.10
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.63
> langgraph-checkpoint: 2.0.10
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> pydantic-settings: 2.6.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sse-starlette: 2.1.3
> starlette: 0.41.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
> uvicorn: 0.32.1
> watchfiles: 1.0.0
|
{
"login": "Jackoder",
"id": 3930446,
"node_id": "MDQ6VXNlcjM5MzA0NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3930446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jackoder",
"html_url": "https://github.com/Jackoder",
"followers_url": "https://api.github.com/users/Jackoder/followers",
"following_url": "https://api.github.com/users/Jackoder/following{/other_user}",
"gists_url": "https://api.github.com/users/Jackoder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jackoder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jackoder/subscriptions",
"organizations_url": "https://api.github.com/users/Jackoder/orgs",
"repos_url": "https://api.github.com/users/Jackoder/repos",
"events_url": "https://api.github.com/users/Jackoder/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jackoder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3120/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3115
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3115/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3115/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3115/events
|
https://github.com/langchain-ai/langgraph/issues/3115
| 2,800,557,373
|
I_kwDOKFU0lc6m7R09
| 3,115
|
Issue in Stream Execution When Using Command within a Subgraph
|
{
"login": "yamato0811",
"id": 64122953,
"node_id": "MDQ6VXNlcjY0MTIyOTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/64122953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yamato0811",
"html_url": "https://github.com/yamato0811",
"followers_url": "https://api.github.com/users/yamato0811/followers",
"following_url": "https://api.github.com/users/yamato0811/following{/other_user}",
"gists_url": "https://api.github.com/users/yamato0811/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yamato0811/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yamato0811/subscriptions",
"organizations_url": "https://api.github.com/users/yamato0811/orgs",
"repos_url": "https://api.github.com/users/yamato0811/repos",
"events_url": "https://api.github.com/users/yamato0811/events{/privacy}",
"received_events_url": "https://api.github.com/users/yamato0811/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6658011276,
"node_id": "LA_kwDOKFU0lc8AAAABjNkwjA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/investigate",
"name": "investigate",
"color": "375C45",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-21T02:29:32
| 2025-02-09T15:40:27
| 2025-01-31T15:01:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph
from langgraph.types import Command
from typing_extensions import Literal


class State(TypedDict):
    node_name: str
    foo: str


def subgraph_node_1(state: State) -> Command[Literal["subgraph_node_2"]]:
    return Command(
        goto="subgraph_node_2",
        update={
            "node_name": "subgraph_node_1",
            "foo": "Update at subgraph_node_1!",
        },
    )


def subgraph_node_2(state: State) -> Command:
    return Command(
        goto="node_3",
        update={"node_name": "subgraph_node_2"},
        graph=Command.PARENT,
    )


# Build subgraph
subgraph_builder = StateGraph(State)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_2", END)
subgraph = subgraph_builder.compile()


# Define main graph
def node_1(state: State) -> Command[Literal["node_2"]]:
    return Command(
        goto="node_2",
        update={"node_name": "node_1"},
    )


def node_3(state: State) -> Command[Literal["__end__"]]:
    return Command(
        goto=END,
        update={"node_name": "node_3"},
    )


main_builder = StateGraph(State)
main_builder.add_node("node_1", node_1)
main_builder.add_node("node_2", subgraph)
main_builder.add_node("node_3", node_3)
main_builder.add_edge(START, "node_1")
main_builder.add_edge("node_2", "node_3")
main_graph = main_builder.compile()

with open("graph.md", "w") as file:
    file.write(f"```mermaid\n{main_graph.get_graph(xray=1).draw_mermaid()}```")

initial = {"node_name": ["__start__"]}
for chunk in main_graph.stream(initial, stream_mode="values", subgraphs=True):
    print(chunk)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
While using the LangGraph library and attempting stream execution with a Command within a subgraph, I encountered the following two issues:
1. **Part of the subgraph's state is not passed to the parent graph.**
- `'foo': 'Update at subgraph_node_1!'` is not displayed in the parent graph.
2. **The output of the subgraph's last node is not displayed.**
- The execution result at `subgraph_node_2` is not shown.
<img src="https://github.com/user-attachments/assets/d1aee13d-bf53-4c01-8e4b-a348863fcd8a" width="200px">
【Output】
```
((), {'node_name': ['__start__']})
((), {'node_name': 'node_1'})
(('node_2:d41305f4-c2ba-78d9-5d9b-ed1c6b8549da',), {'node_name': 'node_1'})
(('node_2:d41305f4-c2ba-78d9-5d9b-ed1c6b8549da',), {'node_name': 'subgraph_node_1', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'subgraph_node_2'})
((), {'node_name': 'node_3'})
```
---
When not using Command, the result was as expected.
【Expected Output】
```
((), {'node_name': ['__start__']})
((), {'node_name': 'node_1'})
(('node_2:cd87a0ec-b602-da30-ffca-48950974937f',), {'node_name': 'node_1'})
(('node_2:cd87a0ec-b602-da30-ffca-48950974937f',), {'node_name': 'subgraph_node_1', 'foo': 'Update at subgraph_node_1!'})
(('node_2:cd87a0ec-b602-da30-ffca-48950974937f',), {'node_name': 'subgraph_node_2', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'subgraph_node_2', 'foo': 'Update at subgraph_node_1!'})
((), {'node_name': 'node_3', 'foo': 'Update at subgraph_node_1!'})
```
【Expected Output Code】
```python
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class State(TypedDict):
    node_name: str
    foo: str


def subgraph_node_1(state: State):
    return {
        "node_name": "subgraph_node_1",
        "foo": "Update at subgraph_node_1!",
    }


def subgraph_node_2(state: State):
    return {"node_name": "subgraph_node_2"}


# Build subgraph
subgraph_builder = StateGraph(State)
subgraph_builder.add_node(subgraph_node_1)
subgraph_builder.add_node(subgraph_node_2)
subgraph_builder.add_edge(START, "subgraph_node_1")
subgraph_builder.add_edge("subgraph_node_1", "subgraph_node_2")
subgraph_builder.add_edge("subgraph_node_2", END)
subgraph = subgraph_builder.compile()


# Define main graph
def node_1(state: State):
    return {"node_name": "node_1"}


def node_3(state: State):
    return {"node_name": "node_3"}


main_builder = StateGraph(State)
main_builder.add_node("node_1", node_1)
main_builder.add_node("node_2", subgraph)
main_builder.add_node("node_3", node_3)
main_builder.add_edge(START, "node_1")
main_builder.add_edge("node_1", "node_2")
main_builder.add_edge("node_2", "node_3")
main_builder.add_edge("node_3", END)
main_graph = main_builder.compile()

with open("graph.md", "w") as file:
    file.write(f"```mermaid\n{main_graph.get_graph(xray=1).draw_mermaid()}```")

initial = {"node_name": ["__start__"]}
for chunk in main_graph.stream(initial, stream_mode="values", subgraphs=True):
    print(chunk)
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #54~20.04.1-Ubuntu SMP Fri Oct 6 22:04:33 UTC 2023
> Python Version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.14
> langsmith: 0.2.3
> langchain_aws: 0.2.3
> langchain_openai: 0.2.4
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.43
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: Installed. No version info available.
> boto3: 1.35.37
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.12
> packaging: 23.2
> pydantic: 2.10.3
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3115/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3115/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3104
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3104/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3104/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3104/events
|
https://github.com/langchain-ai/langgraph/issues/3104
| 2,797,951,515
|
I_kwDOKFU0lc6mxVob
| 3,104
|
Confusing behavior when using `add_conditional_edges` with `BaseModel`
|
{
"login": "observerw",
"id": 20661574,
"node_id": "MDQ6VXNlcjIwNjYxNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/20661574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/observerw",
"html_url": "https://github.com/observerw",
"followers_url": "https://api.github.com/users/observerw/followers",
"following_url": "https://api.github.com/users/observerw/following{/other_user}",
"gists_url": "https://api.github.com/users/observerw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/observerw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/observerw/subscriptions",
"organizations_url": "https://api.github.com/users/observerw/orgs",
"repos_url": "https://api.github.com/users/observerw/repos",
"events_url": "https://api.github.com/users/observerw/events{/privacy}",
"received_events_url": "https://api.github.com/users/observerw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-20T01:06:47
| 2025-01-23T06:16:18
| 2025-01-23T06:16:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Discussed in https://github.com/langchain-ai/langgraph/discussions/2226
<div type='discussions-op-text'>
<sup>Originally posted by **observerw** October 30, 2024</sup>
Let's say we have the following graph:
```python
from typing import Any

from pydantic import BaseModel
from langgraph.graph import END, StateGraph


class State(BaseModel):
    a: int


class NodeAInput(State):
    private_b: int | None = None


class NodeAOutput(BaseModel):
    private_a: int


class NodeBInput(State):
    private_a: int


class NodeBOutput(BaseModel):
    private_b: int


def node_a(input: NodeAInput) -> NodeAOutput:
    return NodeAOutput(private_a=1)


def node_b(input: NodeBInput) -> NodeBOutput:
    return NodeBOutput(private_b=2)


# !!! Confusing behavior happens here
def node_b_edge(input: Any):
    print(f"input type: {type(input)}, value: {input}")
    return END


graph = StateGraph(State)
graph.add_node(node_a)
graph.add_node(node_b)
graph.add_edge("node_a", "node_b")
graph.add_conditional_edges("node_b", node_b_edge)
graph.set_entry_point("node_a")
graph = graph.compile()

graph.invoke(State(a=1))
```
The output is:
```bash
input type: <class '__main__.NodeBInput'>, value: a=1 private_a=1
```
After I added `node_b_edge` after `node_b`, I was expecting this function to accept a `State` with a `private_b` field, or at least a complete `State`. However, **the function actually accepts a parameter of type `NodeBInput`** instead of `State`! This is very confusing and results in an error if I wish to use the `private_b` field to determine the output of `node_b_edge`.
However, when I ditched BaseModel and used TypedDict instead, the results were:
```bash
input type: <class 'dict'>, value: {'private_b': 2, 'a': 1, 'private_a': 1}
```
Well, at least I was able to access `private_b`, so I guess the problem here is associated with the use of BaseModel.
Is this behavior expected? How can I access `private_b` when using BaseModel?</div>
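A minimal stdlib sketch of the suspected mechanism (an assumption about how state is coerced into a node's input model, not LangGraph's actual implementation): a typed schema keeps only its declared fields, while a plain dict state passes every key through:

```python
from typing import get_type_hints


class NodeBInput:  # stand-in for the pydantic input schema of node_b
    a: int
    private_a: int


def coerce_to_schema(schema: type, state: dict) -> dict:
    """Keep only the keys the schema declares, mimicking model validation."""
    declared = get_type_hints(schema)
    return {k: v for k, v in state.items() if k in declared}


state = {"a": 1, "private_a": 1, "private_b": 2}

# Schema-based coercion silently drops the undeclared key...
print(coerce_to_schema(NodeBInput, state))  # {'a': 1, 'private_a': 1}

# ...while a plain dict state passes every key through untouched.
print(state)  # {'a': 1, 'private_a': 1, 'private_b': 2}
```

This would explain why the TypedDict variant (a plain dict at runtime) exposes `private_b` while the BaseModel variant does not.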
|
{
"login": "observerw",
"id": 20661574,
"node_id": "MDQ6VXNlcjIwNjYxNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/20661574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/observerw",
"html_url": "https://github.com/observerw",
"followers_url": "https://api.github.com/users/observerw/followers",
"following_url": "https://api.github.com/users/observerw/following{/other_user}",
"gists_url": "https://api.github.com/users/observerw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/observerw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/observerw/subscriptions",
"organizations_url": "https://api.github.com/users/observerw/orgs",
"repos_url": "https://api.github.com/users/observerw/repos",
"events_url": "https://api.github.com/users/observerw/events{/privacy}",
"received_events_url": "https://api.github.com/users/observerw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3104/timeline
| null |
duplicate
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3102
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3102/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3102/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3102/events
|
https://github.com/langchain-ai/langgraph/issues/3102
| 2,797,602,128
|
I_kwDOKFU0lc6mwAVQ
| 3,102
|
Routing function works correctly, but the graph executes in the wrong order
|
{
"login": "The-uyu",
"id": 81976081,
"node_id": "MDQ6VXNlcjgxOTc2MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/81976081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-uyu",
"html_url": "https://github.com/The-uyu",
"followers_url": "https://api.github.com/users/The-uyu/followers",
"following_url": "https://api.github.com/users/The-uyu/following{/other_user}",
"gists_url": "https://api.github.com/users/The-uyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-uyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-uyu/subscriptions",
"organizations_url": "https://api.github.com/users/The-uyu/orgs",
"repos_url": "https://api.github.com/users/The-uyu/repos",
"events_url": "https://api.github.com/users/The-uyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-uyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-19T12:34:52
| 2025-01-31T14:58:48
| 2025-01-31T14:58:48
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
workflow.add_conditional_edges(
    "tools",
    check_ToM,
    {
        # Once the specified count is reached, update ToM
        "tom": "tom",
        # Return to the agent
        "agent": "agent",
    },
)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
My routing function outputs the correct route every time, so why doesn't the graph run in the expected order?
### System Info
The incorrect execution order produced the following error:
ValueError: {'message': 'Provider returned error', 'code': 400, 'metadata': {'raw': {'__kind': 'OK', 'data': '{\n "error": {\n "message": "An assistant message with \'tool_calls\' must be followed by tool messages responding to each \'tool_call_id\'. The following tool_call_ids did not have response messages: call_Rad8sO1nwvG0fRKww4DlSJBM",\n "type": "invalid_request_error",\n "param": "messages.[15].role",\n "code": null\n }\n}'}, 'provider_name': 'OpenAI'}}
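For reference, the rule the provider is enforcing: every assistant message that carries `tool_calls` must be immediately followed by one `tool` message per `tool_call_id`. A stdlib sketch of a pre-flight check on the message history (a hypothetical helper, not part of LangGraph or LangChain):

```python
def find_unanswered_tool_calls(messages: list[dict]) -> list[str]:
    """Return tool_call_ids not answered by an immediately following tool message."""
    missing: list[str] = []
    for i, msg in enumerate(messages):
        if msg.get("role") != "assistant" or not msg.get("tool_calls"):
            continue
        expected = {tc["id"] for tc in msg["tool_calls"]}
        # Consume the run of tool messages that directly follows this assistant turn.
        j = i + 1
        while j < len(messages) and messages[j].get("role") == "tool":
            expected.discard(messages[j].get("tool_call_id"))
            j += 1
        missing.extend(sorted(expected))
    return missing


history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "tool_calls": [{"id": "call_Rad8"}]},
    {"role": "assistant", "content": "oops, no tool reply"},  # invalid ordering
]
print(find_unanswered_tool_calls(history))  # ['call_Rad8']
```

Running such a check before each model call would pinpoint which routing path inserts an assistant turn before the tool results arrive.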
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3102/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3100
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3100/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3100/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3100/events
|
https://github.com/langchain-ai/langgraph/issues/3100
| 2,797,372,440
|
I_kwDOKFU0lc6mvIQY
| 3,100
|
react-agent-python template does not work
|
{
"login": "viren-vii",
"id": 56278281,
"node_id": "MDQ6VXNlcjU2Mjc4Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/56278281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viren-vii",
"html_url": "https://github.com/viren-vii",
"followers_url": "https://api.github.com/users/viren-vii/followers",
"following_url": "https://api.github.com/users/viren-vii/following{/other_user}",
"gists_url": "https://api.github.com/users/viren-vii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viren-vii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viren-vii/subscriptions",
"organizations_url": "https://api.github.com/users/viren-vii/orgs",
"repos_url": "https://api.github.com/users/viren-vii/repos",
"events_url": "https://api.github.com/users/viren-vii/events{/privacy}",
"received_events_url": "https://api.github.com/users/viren-vii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-19T02:38:11
| 2025-01-27T14:36:03
| 2025-01-27T14:36:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```shell
langgraph new path/to/your/app --template react-agent-python
```
### Error Message and Stack Trace (if applicable)
```shell
File "C:\Python311\Lib\site-packages\starlette\routing.py", line 693, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "C:\Python311\Lib\contextlib.py", line 204, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langgraph_api\lifespan.py", line 30, in lifespan
await collect_graphs_from_env(True)
File "C:\Python311\Lib\site-packages\langgraph_api\graph.py", line 257, in collect_graphs_from_env
graph = await run_in_executor(None, _graph_from_spec, spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain_core\runnables\config.py", line 588, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain_core\runnables\config.py", line 579, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langgraph_api\graph.py", line 295, in _graph_from_spec
modspec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\projects\forexdropoff\react-agent\./src/react_agent/graph.py", line 14, in <module>
from react_agent.configuration import Configuration
ModuleNotFoundError: No module named 'react_agent'
Could not import python module for graph:
GraphSpec(id='agent', path='./src/react_agent/graph.py', module=None, variable='graph', config=None)
This error likely means you haven't installed your project and its dependencies yet. Before running the server, install your project:
If you are using requirements.txt:
python -m pip install -r requirements.txt
If you are using pyproject.toml or setuptools:
python -m pip install -e .
Make sure to run this command from your project's root directory (where your setup.py or pyproject.toml is located)
[uvicorn.error] api_variant=local_dev
2025-01-19T02:33:06.962480Z [error ] Application startup failed. Exiting. [uvicorn.error] api_variant=local_dev
```
### Description
I am trying to run the template on the local server.
Despite following the docs, the same error keeps triggering. I have tried adjusting the packages but have not been able to work around it.
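The traceback's `ModuleNotFoundError: No module named 'react_agent'` just means the project package under `src/` is not on `sys.path`, which is exactly what `pip install -e .` arranges. As a sketch of the mechanism (the directory names mirror the template layout, but the snippet is purely illustrative and does not use the langgraph CLI):

```python
import importlib
import os
import sys
import tempfile

# Recreate the template layout: <root>/src/react_agent/configuration.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "src", "react_agent")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "configuration.py"), "w") as f:
    f.write("NAME = 'react_agent'\n")

# Without installing the project, `import react_agent` fails just like in the
# traceback. Putting the src directory on sys.path (which is effectively what
# `pip install -e .` does for an src-layout project) makes the import succeed:
sys.path.insert(0, os.path.join(root, "src"))
mod = importlib.import_module("react_agent.configuration")
print(mod.NAME)
```

So running `python -m pip install -e .` from the project root (where `pyproject.toml` lives), in the same environment that launches `langgraph dev`, should resolve the import.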
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.30
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_fireworks: 0.2.6
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.43.1
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fireworks-ai: 0.15.11
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3100/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3099
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3099/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3099/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3099/events
|
https://github.com/langchain-ai/langgraph/issues/3099
| 2,797,299,064
|
I_kwDOKFU0lc6mu2V4
| 3,099
|
DOC: Should the Quickstart be upgraded from model="claude-3-5-sonnet-20240620"?
|
{
"login": "royseto",
"id": 543049,
"node_id": "MDQ6VXNlcjU0MzA0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/543049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royseto",
"html_url": "https://github.com/royseto",
"followers_url": "https://api.github.com/users/royseto/followers",
"following_url": "https://api.github.com/users/royseto/following{/other_user}",
"gists_url": "https://api.github.com/users/royseto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royseto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royseto/subscriptions",
"organizations_url": "https://api.github.com/users/royseto/orgs",
"repos_url": "https://api.github.com/users/royseto/repos",
"events_url": "https://api.github.com/users/royseto/events{/privacy}",
"received_events_url": "https://api.github.com/users/royseto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-19T00:08:08
| 2025-01-20T15:22:11
| 2025-01-20T15:22:10
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
I am a beginner to both LangGraph and Anthropic/Claude. Please excuse my n00b question if this is off base or I could raise this in a more appropriate forum.
Currently (January 18, 2025), the LangGraph Quickstart tutorial at https://langchain-ai.github.io/langgraph/tutorials/introduction/ uses
`llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")`
Is it time to upgrade this to a newer model, such as
`llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")` ?
I asked Perplexity Pro, which [told me](https://www.perplexity.ai/search/this-is-a-question-about-curre-ExXchH7YRQCMC2Texj679Q) that the 20241022 model is now probably better for this kind of activity.
### Idea or request for content:
_No response_
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3099/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3098
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3098/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3098/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3098/events
|
https://github.com/langchain-ai/langgraph/issues/3098
| 2,797,079,286
|
I_kwDOKFU0lc6muAr2
| 3,098
|
DOC: minimal functioning example of agent inbox usage
|
{
"login": "SethTurin",
"id": 601471,
"node_id": "MDQ6VXNlcjYwMTQ3MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/601471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SethTurin",
"html_url": "https://github.com/SethTurin",
"followers_url": "https://api.github.com/users/SethTurin/followers",
"following_url": "https://api.github.com/users/SethTurin/following{/other_user}",
"gists_url": "https://api.github.com/users/SethTurin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SethTurin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SethTurin/subscriptions",
"organizations_url": "https://api.github.com/users/SethTurin/orgs",
"repos_url": "https://api.github.com/users/SethTurin/repos",
"events_url": "https://api.github.com/users/SethTurin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SethTurin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-18T17:04:35
| 2025-01-28T17:33:23
| 2025-01-28T17:33:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
Hello
I've been trying desperately for roughly the last 12 hours to get Agent Inbox working. Nothing I've tried actually succeeds at making a message show up in the inbox.
Can someone please provide a minimal but working repository to plug into langgraph, that I can ping via the langgraph server and just see messages show up?
### Idea or request for content:
_No response_
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3098/timeline
| null |
not_planned
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3097
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3097/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3097/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3097/events
|
https://github.com/langchain-ai/langgraph/issues/3097
| 2,796,801,371
|
I_kwDOKFU0lc6ms81b
| 3,097
|
DeepSeek V3 API Call Does Not Stop Automatically in LangGraph with ChatOpenAI()
|
{
"login": "ksmooi",
"id": 158563737,
"node_id": "U_kgDOCXN9mQ",
"avatar_url": "https://avatars.githubusercontent.com/u/158563737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksmooi",
"html_url": "https://github.com/ksmooi",
"followers_url": "https://api.github.com/users/ksmooi/followers",
"following_url": "https://api.github.com/users/ksmooi/following{/other_user}",
"gists_url": "https://api.github.com/users/ksmooi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksmooi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksmooi/subscriptions",
"organizations_url": "https://api.github.com/users/ksmooi/orgs",
"repos_url": "https://api.github.com/users/ksmooi/repos",
"events_url": "https://api.github.com/users/ksmooi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksmooi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-18T06:47:40
| 2025-01-20T15:18:23
| 2025-01-20T15:18:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
!pip install -qU langchain-openai
!pip install -qU langchain_community
!pip install -qU langchain_experimental
!pip install -qU langgraph
!pip install -qU duckduckgo-search
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from kaggle_secrets import UserSecretsClient
# the code is run on the kaggle notebook
llm_api_key = UserSecretsClient().get_secret("api-key-deepseek")
model = ChatOpenAI(model="deepseek-chat", temperature=0, openai_api_key=llm_api_key, openai_api_base='https://api.deepseek.com')
# Initialize the search tool
search = DuckDuckGoSearchRun()
def search_web(query: str) -> str:
'''Perform a web search using DuckDuckGo and return the results.'''
return search.invoke(query)
tools = [search_web]
graph = create_react_agent(model, tools=tools)
inputs = {"messages": [("user", "Search for the latest news on AI advancements")]}
for s in graph.stream(inputs, stream_mode="values"):
message = s["messages"][-1]
if isinstance(message, tuple):
print(message)
else:
message.pretty_print()
```
### Error Message and Stack Trace (if applicable)
```shell
**Actual Behavior:**
The API call continues indefinitely, making repeated tool calls and generating new search queries without stopping. This results in an infinite loop of search queries and responses.
**Example Output:**
================================ Human Message =================================
Search for the latest news on AI advancements
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_bbac5567-4a8c-4e13-8752-ad37a91f6f27)
Call ID: call_0_bbac5567-4a8c-4e13-8752-ad37a91f6f27
Args:
query: latest news on AI advancements 2023
================================= Tool Message =================================
Name: search_web
With broad AI comes broad risks in everything from misinformation to AI-related privacy risks, failures, and mistakes. 2023 saw an increasing flow of regulation, from the United States AI Bill of ... 2022 was the year that generative artificial intelligence (AI) exploded into the public consciousness, and 2023 was the year it began to take root in the business world. 2024 thus stands to be a pivotal year for the future of AI, as researchers and enterprises seek to establish how this evolutionary leap in technology can be most practically integrated into our everyday lives. While overall AI private investment decreased in 2023, funding for generative AI sharply increased. The sector attracted $25.2 billion last year, nearly nine times the investment of 2022 and about 30 times the amount in 2019. Generative AI accounted for over a quarter of all AI-related private investment in 2023. Artificial intelligence. Download RSS feed: News Articles / In the Media / Audio. Displaying 1 - 15 of 1293 news articles related to this topic. Show: News Articles. In the Media. Audio. Explained: Generative AI's environmental impact. Rapid development and deployment of powerful generative AI models comes with environmental consequences ... GPU chips play a pivotal role in enhancing overall performance for genAI applications, underscoring their significance for future advancements. Defining AI trends for 2023. Amid the myriad of AI trends and themes that characterized 2023, four major themes emerged, shaping the narrative and capturing widespread attention:
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_744fb5ec-24f1-4e8d-b3d0-5704b72b60a7)
Call ID: call_0_744fb5ec-24f1-4e8d-b3d0-5704b72b60a7
Args:
query: latest AI advancements 2023
================================= Tool Message =================================
Name: search_web
Uncover the transformative AI achievements of 2023, with a focus on NLP, computer vision, robotics, voice recognition, and HCI. This comprehensive overview highlights the latest advancements and their far-reaching impacts on technology, industry, and society. Top AI achievements of 2023. AI achievements in 2023 were quite phenomenal. March 30, 2023: GitHub presents AutoGTP based on ChatGPT-4.; May 12, 2023: OpenAI releases ChatGPT Plugins; May 25, 2023: Nvidia's stock price tripled in less than eight months, surging by 27% overnight and approaching a market cap of $1 trillion, driven by the skyrocketing demand for GPUs due to the growing interest in AI.; May 25, 2023: Google launches Google Search Generative Experience. The year 2023 marked a significant period in the evolution of artificial intelligence (AI), characterized by groundbreaking advancements across various domains. From the development of ... 2022 was the year that generative artificial intelligence (AI) exploded into the public consciousness, and 2023 was the year it began to take root in the business world. 2024 thus stands to be a pivotal year for the future of AI, as researchers and enterprises seek to establish how this evolutionary leap in technology can be most practically integrated into our everyday lives. AI Highlights from May 2023. Advancements in robotics took center stage in May 2023 as an AI system enabled robots to autonomously conduct scientific experiments. While Sanctuary AI's Phoenix Robot became the world's first commercially available general-purpose humanoid robot, Tesla's Optimus Bot also showed significant progress. OpenAI ...
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_ba4b87e9-2096-49f2-840c-f1790cacc0f4)
Call ID: call_0_ba4b87e9-2096-49f2-840c-f1790cacc0f4
Args:
query: AI advancements 2023
================================= Tool Message =================================
Name: search_web
The report, which is in its seventh edition, covers trends such as technical advancements in AI, public perceptions of the technology and the geopolitical dynamics surrounding its development. Emerging Technologies ... In 2023, 25 AI-related regulations were enacted in the US, growing the total number by 56.3%. Compare that to 2016, when just ... 2023 AI Recap: Innovations in AI Technology. OpenAI's GPT-4: Revolutionized user interactions with AI through advanced natural language understanding, setting a new standard in generative models.; Multimodal Capabilities: GPT-4's ability to process various types of data, including text and images, marked a significant step towards more versatile AI systems. This year's AI Index — a 500-page report tracking 2023's worldwide trends in AI — is out.. The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year's report covers the rise of multimodal foundation models ... 2022 was the year that generative artificial intelligence (AI) exploded into the public consciousness, and 2023 was the year it began to take root in the business world. 2024 thus stands to be a pivotal year for the future of AI, as researchers and enterprises seek to establish how this evolutionary leap in technology can be most practically integrated into our everyday lives. GPU chips play a pivotal role in enhancing overall performance for genAI applications, underscoring their significance for future advancements. Defining AI trends for 2023. Amid the myriad of AI trends and themes that characterized 2023, four major themes emerged, shaping the narrative and capturing widespread attention:
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_9d944700-80fc-4092-a60c-d75e36d351d1)
Call ID: call_0_9d944700-80fc-4092-a60c-d75e36d351d1
Args:
query: AI trends 2023
================================= Tool Message =================================
Name: search_web
This year's AI Index — a 500-page report tracking 2023's worldwide trends in AI — is out.. The index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. This year's report covers the rise of multimodal foundation models ... Defining AI trends for 2023. Amid the myriad of AI trends and themes that characterized 2023, four major themes emerged, shaping the narrative and capturing widespread attention: Business adoption: Although consumer genAI applications grabbed headlines, 2023 also saw increased business experimentation. Enterprise adoption, startup activity and ... Get 90+ pages of charts and data detailing the latest venture trends in AI. First name. Last name. Email. ... In 2023, AI startups raised $42.5B across 2,500 equity rounds. Although down 10% year-over-year (YoY), AI funding fell far less than broader venture funding (-42% in 2023). AI deal volume decreased by 24% YoY — also less than the ... 2022 was the year that generative artificial intelligence (AI) exploded into the public consciousness, and 2023 was the year it began to take root in the business world. 2024 thus stands to be a pivotal year for the future of AI, as researchers and enterprises seek to establish how this evolutionary leap in technology can be most practically integrated into our everyday lives. March 30, 2023: GitHub presents AutoGTP based on ChatGPT-4.; May 12, 2023: OpenAI releases ChatGPT Plugins; May 25, 2023: Nvidia's stock price tripled in less than eight months, surging by 27% overnight and approaching a market cap of $1 trillion, driven by the skyrocketing demand for GPUs due to the growing interest in AI.; May 25, 2023: Google launches Google Search Generative Experience.
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_3a3d86af-8b05-482a-ac02-4fc1c4437cfc)
Call ID: call_0_3a3d86af-8b05-482a-ac02-4fc1c4437cfc
Args:
query: AI regulations 2023
================================= Tool Message =================================
Name: search_web
New Framework Advances AI Innovation While Protecting U.S. National Security. Washington, D.C. — Today, the Department of Commerce's Bureau of Industry and Security (BIS) announced controls on advanced computing chips and certain closed artificial intelligence (AI) model weights, alongside new license exceptions and updates to the Data Center Validated End User (VEU) authorization. Section 1. Purpose. Artificial intelligence (AI) is a defining technology of our era. ... and Executive Order 14110 of November 1, 2023 (Safe, Secure, and Trustworthy Development and Use of ... In October 2023, President Joe Biden issued a broad executive order on AI (AI Order), which the administration touted as a vehicle to establish AI safety and security standards while protecting privacy, advancing civil rights and promoting innovation. However, most of the AI Order was a series of directives to various federal agencies to study ... In October 2023, President Joe Biden issued a broad executive order on AI ... we expect to see states take an even more active role in enacting state-specific AI regulations. Monitor Legal and Regulatory Developments: Stay informed about new state and federal AI regulations, along with the developing case law related to AI and ERISA litigation. Establish a process for ...
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_10269960-464f-4abd-a458-b863da1710b4)
Call ID: call_0_10269960-464f-4abd-a458-b863da1710b4
Args:
query: AI investment trends 2023
search_web (call_1_97eabda5-9144-4a28-93f9-e21ec851b5ba)
Call ID: call_1_97eabda5-9144-4a28-93f9-e21ec851b5ba
Args:
query: AI private investment 2023
================================= Tool Message =================================
Name: search_web
Private equity and venture capital firms poured more than twice as much money into generative AI companies in 2023 compared with the prior year, and dealmaking got off to a fast start in 2024. ... Incera said. Private equity-backed investment peaked at $927.7 million in the third quarter of 2023, up from $121.5 million recorded in the prior ... The volume of private investment in artificial intelligence (AI) worldwide in 2023 had its largest share concentrated in the United States of America, with approximately 67.2 billion U.S. Generative AI accounted for over a quarter of all AI-related private investments in 2023. U.S. Wins $$ Race. And again, in 2023 the United States dominates in AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount ... The graph below shows the yearly number of private equity platform and add-on investments in AI. Quick stats. 315 Private equity firms with current investments in the Artificial Intelligence and Machine Learning industry. 573 current private equity investments in AI/ML companies (2013 - 2023, 52 exits) Download our free list As from 2013 to 2023, the sum of private investments in artificial intelligence (AI) worldwide has had its largest share coming from the United States of America, accounting for over 336 billion U.S.
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_1667e946-5452-48ab-b5e2-d6e3fb9a526b)
Call ID: call_0_1667e946-5452-48ab-b5e2-d6e3fb9a526b
Args:
query: AI investment trends 2023
search_web (call_1_d7bdf09b-5892-4e60-91a8-63a4b898a209)
Call ID: call_1_d7bdf09b-5892-4e60-91a8-63a4b898a209
Args:
query: AI private investment 2023
================================= Tool Message =================================
Name: search_web
Private equity and venture capital firms poured more than twice as much money into generative AI companies in 2023 compared with the prior year, and dealmaking got off to a fast start in 2024. ... Incera said. Private equity-backed investment peaked at $927.7 million in the third quarter of 2023, up from $121.5 million recorded in the prior ... The volume of private investment in artificial intelligence (AI) worldwide in 2023 had its largest share concentrated in the United States of America, with approximately 67.2 billion U.S. Generative AI accounted for over a quarter of all AI-related private investments in 2023. U.S. Wins $$ Race. And again, in 2023 the United States dominates in AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount ... Figure 3 shows that no country, at least on a private investment basis, is spending anything close to what the U.S. is spending, which likely means it will be difficult to challenge U.S. leadership in this area. Figure 3: Private Investment in AI by Geographic Area, 2023. 2023 Was a Year of Building AI Applications As from 2013 to 2023, the sum of private investments in artificial intelligence (AI) worldwide has had its largest share coming from the United States of America, accounting for over 336 billion U.S.
================================== Ai Message ==================================
Tool Calls:
search_web (call_0_cb9e01b0-7850-4038-a727-aa7086c97191)
Call ID: call_0_cb9e01b0-7850-4038-a727-aa7086c97191
Args:
query: AI investment trends 2023
search_web (call_1_d658f5fe-7db0-42e1-81a6-d003f231efe6)
Call ID: call_1_d658f5fe-7db0-42e1-81a6-d003f231efe6
Args:
query: AI private investment 2023
search_web (call_2_7ccf4fdf-3921-41b9-b805-9157d25998be)
Call ID: call_2_7ccf4fdf-3921-41b9-b805-9157d25998be
Args:
query: AI regulations 2023
search_web (call_3_df9e132a-1328-4db8-8984-3051c59dcee2)
Call ID: call_3_df9e132a-1328-4db8-8984-3051c59dcee2
Args:
query: AI advancements 2023
search_web (call_4_46a83a96-205c-47c0-8c67-94d07ac91894)
Call ID: call_4_46a83a96-205c-47c0-8c67-94d07ac91894
Args:
query: latest AI advancements 2023
search_web (call_5_d506554b-9025-407b-8cdc-9154d547ab88)
Call ID: call_5_d506554b-9025-407b-8cdc-9154d547ab88
Args:
query: latest news on AI advancements 2023
================================= Tool Message =================================
Name: search_web
With broad AI comes broad risks in everything from misinformation to AI-related privacy risks, failures, and mistakes. 2023 saw an increasing flow of regulation, from the United States AI Bill of ... Artificial intelligence. Download RSS feed: News Articles / In the Media / Audio. Displaying 1 - 15 of 1293 news articles related to this topic. Show: News Articles. In the Media. Audio. Explained: Generative AI's environmental impact. Rapid development and deployment of powerful generative AI models comes with environmental consequences ... March 30, 2023: GitHub presents AutoGTP based on ChatGPT-4.; May 12, 2023: OpenAI releases ChatGPT Plugins; May 25, 2023: Nvidia's stock price tripled in less than eight months, surging by 27% overnight and approaching a market cap of $1 trillion, driven by the skyrocketing demand for GPUs due to the growing interest in AI.; May 25, 2023: Google launches Google Search Generative Experience. GPU chips play a pivotal role in enhancing overall performance for genAI applications, underscoring their significance for future advancements. Defining AI trends for 2023. Amid the myriad of AI trends and themes that characterized 2023, four major themes emerged, shaping the narrative and capturing widespread attention: AI Highlights from May 2023. Advancements in robotics took center stage in May 2023 as an AI system enabled robots to autonomously conduct scientific experiments. While Sanctuary AI's Phoenix Robot became the world's first commercially available general-purpose humanoid robot, Tesla's Optimus Bot also showed significant progress. OpenAI ...
```
### Description
**Describe the bug**
When using `ChatOpenAI()` in LangChain with a custom tool (e.g., `DuckDuckGoSearchRun`), the agent does not terminate after completing the task. Instead, the model keeps emitting tool calls indefinitely, leading to an endless loop of search queries and responses.
**Expected behavior**
The API call should terminate after completing the task (e.g., retrieving search results for the query) and return the final response.
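Until the model reliably stops emitting tool calls, a practical mitigation is to cap the number of agent steps. LangGraph supports this via the `recursion_limit` config key, e.g. `graph.stream(inputs, {"recursion_limit": 10}, stream_mode="values")`, which raises a `GraphRecursionError` once the limit is hit. The sketch below shows the same bounded-loop idea with purely illustrative names (it does not call the langgraph API or any model):

```python
def run_agent(step_fn, max_iterations=10):
    """Drive an agent loop, aborting if it never reaches a final answer.

    step_fn takes and returns a state dict; state["done"] signals completion.
    """
    state = {"done": False}
    for _ in range(max_iterations):
        state = step_fn(state)
        if state["done"]:
            return state
    raise RuntimeError(f"agent exceeded {max_iterations} iterations without finishing")

# A step function that always emits another "tool call" trips the guard
# instead of spinning forever:
try:
    run_agent(lambda s: {"done": False}, max_iterations=3)
except RuntimeError as err:
    caught = str(err)
print(caught)
```

This doesn't fix the underlying model behavior (DeepSeek continuing to request tools), but it converts an infinite loop into a bounded, observable failure.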
### System Info
**Environment:**
- Python 3.x (Kaggle Notebook)
- Libraries: `langchain-openai`, `langchain-community`, `langchain_experimental`, `langgraph`
- Model: `deepseek-chat` (via DeepSeek API)
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3097/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3096
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3096/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3096/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3096/events
|
https://github.com/langchain-ai/langgraph/issues/3096
| 2,796,789,615
|
I_kwDOKFU0lc6ms59v
| 3,096
|
Multi-agent supervisor
|
{
"login": "jason571",
"id": 22809213,
"node_id": "MDQ6VXNlcjIyODA5MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22809213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jason571",
"html_url": "https://github.com/jason571",
"followers_url": "https://api.github.com/users/jason571/followers",
"following_url": "https://api.github.com/users/jason571/following{/other_user}",
"gists_url": "https://api.github.com/users/jason571/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jason571/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jason571/subscriptions",
"organizations_url": "https://api.github.com/users/jason571/orgs",
"repos_url": "https://api.github.com/users/jason571/repos",
"events_url": "https://api.github.com/users/jason571/events{/privacy}",
"received_events_url": "https://api.github.com/users/jason571/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-18T06:18:00
| 2025-01-31T14:58:55
| 2025-01-31T14:58:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
# Reporter's note: I added an agent that runs shell commands.

class Router(TypedDict):
    """Worker to route to next. If no workers needed, route to FINISH."""

    next: Literal["researcher", "coder", "shell", "FINISH"]


def supervisor_node(self, state: State) -> Command[Literal["researcher", "coder", "shell", "__end__"]]:
    messages = [
        {"role": "system", "content": self.system_prompt},
    ] + state["messages"]
    response = self.llm.with_structured_output(Router).invoke(messages)
    goto = response["next"]
    if goto == "FINISH":
        goto = END
    mylogging.vip(response)
    return Command(goto=goto, update={"next": goto})

# Question: check memory usage.
# The task never FINISHes; it keeps executing the shell worker.
# self.llm.invoke(messages) returns FINISH, but
# self.llm.with_structured_output(Router).invoke(messages) returns {'next': 'shell'} -- this looks like a bug.
Agent: [2025-01-18 14:04:30,491]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:30,498]-VIP-[supervisorProcess.py:97]: shell->supervisor
Starting tool: shell_tool with inputs: {'command': 'free -m'}
[2025-01-18 14:04:31,890]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1075 31085 3 298 30988
Swap: 8192 0 8192
Done tool: shell_tool
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1075 MB
- Free memory: 31085 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30988 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:35,914]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:35,940]-VIP-[supervisorProcess.py:97]: shell->supervisor
Starting tool: shell_tool with inputs: {'command': 'free -m'}
[2025-01-18 14:04:37,599]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1077 31083 3 298 30986
Swap: 8192 0 8192
Done tool: shell_tool
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1077 MB
- Free memory: 31083 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30986 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:40,450]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:40,504]-VIP-[supervisorProcess.py:97]: shell->supervisor
[2025-01-18 14:04:41,771]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1079 31081 3 298 30985
Swap: 8192 0 8192
Starting tool: shell_tool with inputs: {'command': 'free -m'}
Done tool: shell_tool
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1079 MB
- Free memory: 31081 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30985 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:44,322]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:44,500]-VIP-[supervisorProcess.py:97]: shell->supervisor
Starting tool: shell_tool with inputs: {'command': 'free -m'}
[2025-01-18 14:04:45,320]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1079 31081 3 298 30984
Swap: 8192 0 8192
Done tool: shell_tool
[2025-01-18 14:04:45,845]-INFO-[_base_client.py:1087]: Retrying request to /chat/completions in 0.483649 seconds
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1079 MB
- Free memory: 31081 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30984 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:49,860]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:50,079]-VIP-[supervisorProcess.py:97]: shell->supervisor
[2025-01-18 14:04:50,978]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1080 31080 3 298 30983
Swap: 8192 0 8192
```
### Error Message and Stack Trace (if applicable)
```shell
[2025-01-18 14:04:35,940]-VIP-[supervisorProcess.py:97]: shell->supervisor
Starting tool: shell_tool with inputs: {'command': 'free -m'}
[2025-01-18 14:04:37,599]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1077 31083 3 298 30986
Swap: 8192 0 8192
Done tool: shell_tool
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1077 MB
- Free memory: 31083 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30986 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:40,450]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:40,504]-VIP-[supervisorProcess.py:97]: shell->supervisor
[2025-01-18 14:04:41,771]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1079 31081 3 298 30985
Swap: 8192 0 8192
Starting tool: shell_tool with inputs: {'command': 'free -m'}
Done tool: shell_tool
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1079 MB
- Free memory: 31081 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30985 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:44,322]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:44,500]-VIP-[supervisorProcess.py:97]: shell->supervisor
Starting tool: shell_tool with inputs: {'command': 'free -m'}
[2025-01-18 14:04:45,320]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1079 31081 3 298 30984
Swap: 8192 0 8192
Done tool: shell_tool
[2025-01-18 14:04:45,845]-INFO-[_base_client.py:1087]: Retrying request to /chat/completions in 0.483649 seconds
The memory usage is as follows:
- Total memory: 32064 MB
- Used memory: 1079 MB
- Free memory: 31081 MB
- Shared memory: 3 MB
- Buffer/cache memory: 298 MB
- Available memory: 30984 MB
The swap memory usage is as follows:
- Total swap: 8192 MB
- Used swap: 0 MB
- Free swap: 8192 MB[2025-01-18 14:04:49,860]-VIP-[agentSupervisor.py:41]: {'next': 'shell'}
[2025-01-18 14:04:50,079]-VIP-[supervisorProcess.py:97]: shell->supervisor
[2025-01-18 14:04:50,978]-INFO-[tools.py:24]: executing shell output: total used free shared buff/cache available
Mem: 32064 1080 31080 3 298 30983
Swap: 8192 0 8192
```
### Description
Question: check memory usage.
The task never reaches FINISH: the supervisor keeps routing to the shell worker. Calling `response = self.llm.invoke(messages)` directly returns FINISH, but `self.llm.with_structured_output(Router).invoke(messages)` keeps returning `{'next': 'shell'}`.
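
The routing logic in the report can be exercised in isolation with a stubbed structured-output model. Everything below is an illustrative stand-in (the stub model, the local `END` constant, and the driver function are assumptions; only the `Router` shape and the FINISH-to-END mapping mirror the reported code). Separating the routing from the LLM helps distinguish a graph-wiring bug from a `with_structured_output` bug:

```python
from typing import Literal, TypedDict

END = "__end__"  # stand-in for langgraph.graph.END


class Router(TypedDict):
    """Worker to route to next. If no workers needed, route to FINISH."""

    next: Literal["researcher", "coder", "shell", "FINISH"]


class StubStructuredLLM:
    """Hypothetical stub: routes to 'shell' on the first call, then to FINISH."""

    def __init__(self):
        self.calls = 0

    def invoke(self, messages) -> Router:
        self.calls += 1
        return {"next": "shell" if self.calls == 1 else "FINISH"}


def route(llm, messages) -> str:
    # Same decision logic as the reported supervisor_node, minus the graph plumbing.
    response: Router = llm.invoke(messages)
    goto = response["next"]
    return END if goto == "FINISH" else goto


llm = StubStructuredLLM()
first = route(llm, [])   # routes to the shell worker
second = route(llm, [])  # maps FINISH to the END sentinel
```

With a stub that eventually returns FINISH, the loop terminates; if the real structured-output call never returns FINISH while a plain `invoke` does, the problem lies in how the model fills the `Router` schema, not in the routing code.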
### System Info
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3096/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3088
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3088/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3088/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3088/events
|
https://github.com/langchain-ai/langgraph/issues/3088
| 2,796,197,423
|
I_kwDOKFU0lc6mqpYv
| 3,088
|
Subgraph `checkpointer=True` does not work for async invoke
|
{
"login": "shengbo-ma",
"id": 44483080,
"node_id": "MDQ6VXNlcjQ0NDgzMDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/44483080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shengbo-ma",
"html_url": "https://github.com/shengbo-ma",
"followers_url": "https://api.github.com/users/shengbo-ma/followers",
"following_url": "https://api.github.com/users/shengbo-ma/following{/other_user}",
"gists_url": "https://api.github.com/users/shengbo-ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shengbo-ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shengbo-ma/subscriptions",
"organizations_url": "https://api.github.com/users/shengbo-ma/orgs",
"repos_url": "https://api.github.com/users/shengbo-ma/repos",
"events_url": "https://api.github.com/users/shengbo-ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/shengbo-ma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6658011276,
"node_id": "LA_kwDOKFU0lc8AAAABjNkwjA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/investigate",
"name": "investigate",
"color": "375C45",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
null
| 2025-01-17T19:55:07
| 2025-01-17T21:40:09
| 2025-01-17T20:40:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from rich import get_console
from typing_extensions import TypedDict
class SubGraphState(TypedDict, total=False):
    run_idx: int
    sub_counter: int


async def subgraph_accumulator(state: SubGraphState) -> SubGraphState:
    get_console().print("---subgraph counter node---")
    get_console().print(f"{state = }")
    sub_counter = state["sub_counter"] + 1 if "sub_counter" in state else 0
    return {"sub_counter": sub_counter}


sub_graph = (
    StateGraph(SubGraphState)
    .add_node(subgraph_accumulator)
    .add_edge(START, subgraph_accumulator.__name__)
    .add_edge(subgraph_accumulator.__name__, END)
    .compile(checkpointer=True)  # enable subgraph checkpointer for follow-up invokes
)
sub_graph.name = "sub"


class ParentGraphState(TypedDict, total=False):
    run_idx: int
    parent_counter: int


async def parent_graph_accumulator(state: ParentGraphState) -> ParentGraphState:
    print("---parent counter node---")
    get_console().print(f"{state = }")
    parent_counter = state["parent_counter"] + 1 if "parent_counter" in state else 0
    return {
        "parent_counter": parent_counter,
    }


async def check_after_sub_node(state: ParentGraphState) -> None:
    print("---parent check after sub node---")
    get_console().print(f"{state = }")


parent_agent = (
    StateGraph(ParentGraphState)
    .add_node(parent_graph_accumulator)
    .add_node(sub_graph)
    .add_node(check_after_sub_node)
    .add_edge(START, parent_graph_accumulator.__name__)
    .add_edge(parent_graph_accumulator.__name__, sub_graph.get_name())
    .add_edge(sub_graph.get_name(), check_after_sub_node.__name__)
    .add_edge(check_after_sub_node.__name__, END)
    .compile(checkpointer=MemorySaver())
)


async def main():
    config: RunnableConfig = {"configurable": {"thread_id": "42"}}
    for i in range(3):
        print(f"---Run Index {i}---")
        async for event in parent_agent.astream(
            ParentGraphState(run_idx=i),
            config,
            stream_mode="values",
            subgraphs=True,
        ):
            print(event)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```shell
No errors. The full output is attached below.
The issue: sub_counter should increase along with run_idx across runs,
but it resets to 0 on every invoke.
---Run Index 0---
((), {'run_idx': 0})
---parent counter node---
state = {'run_idx': 0}
((), {'run_idx': 0, 'parent_counter': 0})
---subgraph counter node---
state = {'run_idx': 0}
(('call_subgraph:240f14d1-f0c8-7f2f-018c-a10c655385cf',), {'run_idx': 0})
(('call_subgraph:240f14d1-f0c8-7f2f-018c-a10c655385cf',), {'run_idx': 0, 'sub_counter': 0})
---parent check after sub node---
state = {'run_idx': 0, 'parent_counter': 0}
---Run Index 1---
((), {'run_idx': 1, 'parent_counter': 0})
---parent counter node---
state = {'run_idx': 1, 'parent_counter': 0}
((), {'run_idx': 1, 'parent_counter': 1})
---subgraph counter node---
state = {'run_idx': 1}
(('call_subgraph:65507cf5-16ed-1a02-96cf-49a48899a65b',), {'run_idx': 1})
(('call_subgraph:65507cf5-16ed-1a02-96cf-49a48899a65b',), {'run_idx': 1, 'sub_counter': 0})
---parent check after sub node---
state = {'run_idx': 1, 'parent_counter': 1}
---Run Index 2---
((), {'run_idx': 2, 'parent_counter': 1})
---parent counter node---
state = {'run_idx': 2, 'parent_counter': 1}
((), {'run_idx': 2, 'parent_counter': 2})
---subgraph counter node---
state = {'run_idx': 2}
(('call_subgraph:e9ae413d-d6b8-d2a6-4732-072e73341a90',), {'run_idx': 2})
(('call_subgraph:e9ae413d-d6b8-d2a6-4732-072e73341a90',), {'run_idx': 2, 'sub_counter': 0})
---parent check after sub node---
state = {'run_idx': 2, 'parent_counter': 2}
```
### Description
This is a follow-up to #3020: `checkpointer=True` (added in #3055, released in `0.2.63`) lets a subgraph carry its state over from one invocation to the next.
The bug:
- `checkpointer=True` has no effect when the parent graph is invoked asynchronously (e.g. via `astream`); the subgraph state is not persisted between runs.
LangGraph version
`0.2.63`
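
The intent of `checkpointer=True` — carrying subgraph state across invocations on the same thread — can be pictured with a plain-Python checkpoint store. Everything below is a conceptual stand-in (not LangGraph internals); it reproduces the counter semantics of the example above, where `sub_counter` starts at 0 and increments once per run:

```python
# Conceptual sketch of thread-scoped checkpointing; not LangGraph internals.

class InMemoryCheckpointer:
    def __init__(self):
        self._store = {}  # thread_id -> last saved state

    def load(self, thread_id):
        return dict(self._store.get(thread_id, {}))

    def save(self, thread_id, state):
        self._store[thread_id] = dict(state)


def run_subgraph(checkpointer, thread_id, run_idx):
    """One invocation: restore prior state, bump the counter, persist."""
    state = checkpointer.load(thread_id)
    state["run_idx"] = run_idx
    # Same semantics as subgraph_accumulator: 0 on the first run, +1 afterwards.
    state["sub_counter"] = state.get("sub_counter", -1) + 1
    checkpointer.save(thread_id, state)
    return state


cp = InMemoryCheckpointer()
counters = [run_subgraph(cp, "42", i)["sub_counter"] for i in range(3)]
# With working persistence the counter tracks run_idx; the report shows it stuck at 0 under async invoke.
```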
### System Info
System Information
------------------
> OS: Linux
> OS Version: #135~20.04.1-Ubuntu SMP Mon Oct 7 13:56:22 UTC 2024
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.11
> langsmith: 0.2.10
> langchain_openai: 0.2.12
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.3
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3088/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3080
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3080/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3080/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3080/events
|
https://github.com/langchain-ai/langgraph/issues/3080
| 2,795,990,242
|
I_kwDOKFU0lc6mp2zi
| 3,080
|
ERROR: column cw.task_path does not exist in LangGraph Checkpoint Postgres >=2.0.12 (Creating an issue with a solution)
|
{
"login": "GhimBoon",
"id": 92513442,
"node_id": "U_kgDOBYOkog",
"avatar_url": "https://avatars.githubusercontent.com/u/92513442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GhimBoon",
"html_url": "https://github.com/GhimBoon",
"followers_url": "https://api.github.com/users/GhimBoon/followers",
"following_url": "https://api.github.com/users/GhimBoon/following{/other_user}",
"gists_url": "https://api.github.com/users/GhimBoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GhimBoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GhimBoon/subscriptions",
"organizations_url": "https://api.github.com/users/GhimBoon/orgs",
"repos_url": "https://api.github.com/users/GhimBoon/repos",
"events_url": "https://api.github.com/users/GhimBoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/GhimBoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
null
| 2025-01-17T17:42:37
| 2025-01-17T17:42:50
| 2025-01-17T17:42:50
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code runs AS IS to reproduce the issue.
### Example Code
```python
class ChatState(MessagesState):
    pass


async def chat_node(state: ChatState, *, config: Optional[RunnableConfig] = None) -> ChatState:
    llm = call_llm()
    system_message = (
        "You are GPT-4o with training data up to Oct 2023.\n"
        "You are a very careful thinker. Think step by step before answering.\n"
    )
    messages = [SystemMessage(content=system_message)] + state["messages"]
    response = await llm.ainvoke(messages)
    return {"messages": [response]}


async def build_and_run_graph(message: str, config: RunnableConfig, run_method):
    connection_kwargs = {
        "autocommit": True,
        "prepare_threshold": 0,
    }
    print("Building Agent")
    async with AsyncConnectionPool(
        conninfo=PG_CONNECTIONSTRING,
        max_size=5,
        kwargs=connection_kwargs,
    ) as pool:
        checkpointer = AsyncPostgresSaver(pool)
        graph = await create_graph(config)
        compiled_graph = graph.compile(checkpointer=checkpointer)
        message_input = {"messages": [HumanMessage(content=message)]}
        return await run_method(compiled_graph, message_input, config)


async def stream_graph(compiled_graph, message_input, config):
    ui_message = cl.Message(content="")
    async for event in compiled_graph.astream_events(message_input, config=config, version="v1"):
        if event["event"] == "on_chat_model_stream" and event["name"] == "chatgpt":
            content = event["data"]["chunk"].content or ""
            await ui_message.stream_token(token=content)
    await ui_message.send()


async def run_agent(message: str, config: RunnableConfig = None):
    await build_and_run_graph(message, config, stream_graph)


async def create_graph(config: RunnableConfig):
    print("Initializing Graph")
    graph = StateGraph(ChatState)
    graph.add_node("chat", chat_node)
    graph.add_edge(START, "chat")
    graph.add_edge("chat", END)
    return graph
```
### Error Message and Stack Trace (if applicable)
```shell
ERROR: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
^
Traceback (most recent call last):
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/chainlit/utils.py", line 45, in wrapper
return await user_function(**params_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/chainlit/callbacks.py", line 121, in with_parent_id
await func(message)
File "/mnt/c/Github/chainlit/app.py", line 193, in on_message
await run_agent(message.content, config=config)
File "/mnt/c/Github/chainlit/agent/run_graph.py", line 41, in run_agent
await build_and_run_graph(message, config, stream_graph)
File "/mnt/c/Github/chainlit/agent/run_graph.py", line 26, in build_and_run_graph
return await run_method(compiled_graph, message_input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/c/Github/chainlit/agent/run_graph.py", line 30, in stream_graph
async for event in compiled_graph.astream_events(message_input, config=config, version="v1"):
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1386, in astream_events
async for event in event_stream:
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langchain_core/tracers/event_stream.py", line 781, in _astream_events_implementation_v1
async for log in _astream_log_implementation( # type: ignore[misc]
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langchain_core/tracers/log_stream.py", line 675, in _astream_log_implementation
await task
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langchain_core/tracers/log_stream.py", line 629, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1846, in astream
async with AsyncPregelLoop(
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langgraph/pregel/loop.py", line 1033, in __aenter__
saved = await self.checkpointer.aget_tuple(self.checkpoint_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/langgraph/checkpoint/postgres/aio.py", line 186, in aget_tuple
await cur.execute(
File "/home/ubuntu/miniforge3/envs/python3.12/lib/python3.12/site-packages/psycopg/cursor_async.py", line 97, in execute
raise ex.with_traceback(None)
psycopg.errors.UndefinedColumn: column cw.task_path does not exist
LINE 27: ...array_agg(array[cw.type::bytea, cw.blob] order by cw.task_pa...
```
### Description
langgraph-checkpoint-postgres 2.0.12 and above query a `task_path` column on the checkpoint writes table that does not exist in schemas created by earlier versions.
The fix is to run the checkpointer's `setup()` method once after upgrading, so the pending migration creates the new column.
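
The failure mode — new code querying a column that only a later `setup()` migration creates — can be modeled without a database. `FakeSaver` and `UndefinedColumn` below are purely illustrative stand-ins; only the pattern (run the saver's one-time, idempotent `setup()` after upgrading, as the issue recommends) and the column name `task_path` come from the report:

```python
# Illustrative model of "run setup() after upgrading" -- not the real Postgres saver.

class UndefinedColumn(Exception):
    pass


class FakeSaver:
    """Simulates a saver whose queries require a column added by a migration."""

    def __init__(self, existing_columns):
        self.columns = set(existing_columns)

    def setup(self):
        # Idempotent migration step: add any columns the current version expects.
        self.columns.add("task_path")

    def get_tuple(self):
        if "task_path" not in self.columns:
            raise UndefinedColumn("column cw.task_path does not exist")
        return ("checkpoint", "writes ordered by task_path")


saver = FakeSaver(existing_columns={"type", "blob"})  # schema from an older release
try:
    saver.get_tuple()  # reproduces the reported error
    failed = False
except UndefinedColumn:
    failed = True

saver.setup()              # the fix: run the migrations once
result = saver.get_tuple() # now succeeds
```

Because the migration is idempotent, calling `setup()` on every deployment (rather than only at first install) avoids this class of error when the package adds schema.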
### System Info
System Information
OS: Linux
OS Version: https://github.com/langchain-ai/langchain/pull/1 SMP Tue Nov 5 00:21:55 UTC 2024
Python Version: 3.12.0 | packaged by conda-forge | (main, Oct 3 2023, 08:43:22) [GCC 12.3.0]
Package Information
langchain_core: 0.3.29
langchain: 0.3.14
langchain_community: 0.3.14
langsmith: 0.2.10
langchain_openai: 0.3.0
langchain_postgres: 0.0.12
langchain_sdk: 0.1.5
langchain_text_splitters: 0.3.5
langchain_unstructured: 0.1.6
langgraph_sdk: 0.1.51
Optional packages not installed
langserve
Other Dependencies
aiohttp: 3.11.11
async-timeout: Installed. No version info available.
dataclasses-json: 0.6.7
httpx: 0.28.1
httpx-sse: 0.4.0
jsonpatch: 1.33
langchain_core>=0.3.0: Installed. No version info available.
langsmith-pyo3: Installed. No version info available.
numpy: 1.26.4
onnxruntime: 1.19.2
openai: 1.59.7
orjson: 3.10.14
packaging: 23.2
pgvector: 0.2.5
psycopg: 3.2.3
psycopg-pool: 3.2.4
pydantic: 2.9.2
pydantic-settings: 2.7.1
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.37
sqlalchemy: 2.0.37
tenacity: 9.0.0
tiktoken: 0.8.0
typing-extensions: 4.12.2
unstructured-client: 0.27.0
unstructured[all-docs]: Installed. No version info available.
zstandard: Installed. No version info available.
|
{
"login": "GhimBoon",
"id": 92513442,
"node_id": "U_kgDOBYOkog",
"avatar_url": "https://avatars.githubusercontent.com/u/92513442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GhimBoon",
"html_url": "https://github.com/GhimBoon",
"followers_url": "https://api.github.com/users/GhimBoon/followers",
"following_url": "https://api.github.com/users/GhimBoon/following{/other_user}",
"gists_url": "https://api.github.com/users/GhimBoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GhimBoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GhimBoon/subscriptions",
"organizations_url": "https://api.github.com/users/GhimBoon/orgs",
"repos_url": "https://api.github.com/users/GhimBoon/repos",
"events_url": "https://api.github.com/users/GhimBoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/GhimBoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3080/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3072
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3072/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3072/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3072/events
|
https://github.com/langchain-ai/langgraph/issues/3072
| 2,793,416,669
|
I_kwDOKFU0lc6mgCfd
| 3,072
|
Interrupt() when invoked for the second time, failed to wait for the user input
|
{
"login": "Saisiva123",
"id": 36258631,
"node_id": "MDQ6VXNlcjM2MjU4NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/36258631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saisiva123",
"html_url": "https://github.com/Saisiva123",
"followers_url": "https://api.github.com/users/Saisiva123/followers",
"following_url": "https://api.github.com/users/Saisiva123/following{/other_user}",
"gists_url": "https://api.github.com/users/Saisiva123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saisiva123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saisiva123/subscriptions",
"organizations_url": "https://api.github.com/users/Saisiva123/orgs",
"repos_url": "https://api.github.com/users/Saisiva123/repos",
"events_url": "https://api.github.com/users/Saisiva123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saisiva123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-16T18:02:59
| 2025-01-22T18:14:02
| 2025-01-22T18:14:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
@tool
def book_appointment(first_name: str, last_name: str, email: str, doctor_name: str, time: str, tool_call_id: Annotated[str, InjectedToolCallId]):
    '''This is responsible to book an appointment using the first name, last name, email, doctor name and appointment time.'''
    if first_name and last_name and email and doctor_name and time:
        api_response = {'success': True, 'message': 'Successfully booked the appointment'}
        return ToolMessage(content=api_response.get('message', ''), name="Book_appointment_tool", tool_call_id=tool_call_id)
    else:
        return ToolMessage(content='Not able to book the appointment', name="Book_appointment_tool",
                           tool_call_id=tool_call_id)


@tool
def collect_information(tool_call_id: Annotated[str, InjectedToolCallId]):  # This acts like a transfer tool that transfers to ask_human_node
    '''This is responsible to collect the necessary information like the first name, last name, email, doctor name and appointment time from the user.'''
    return Command(goto='ask_human_node', update={'messages': [
        ToolMessage(content="Collecting required information from the user", tool_call_id=tool_call_id)]
    })


def call_node(state: MessagesState) -> Command[Literal['ask_human_node', '__end__']]:
    prompt = '''You are an appointment booking agent who will be responsible to collect the necessary information from the user while booking the appointment.
    You would always be required to have the following details to book an appointment:
    => First name, last name, email, doctor name and appointment time.
    '''
    tools = [book_appointment]
    model = ChatOpenAI(model="gpt-4o", openai_api_key=os.getenv("OPEN_AI_API_KEY")).bind_tools(tools)
    messages = [SystemMessage(content=prompt)] + state['messages']
    response = model.invoke(messages)

    results = []
    if len(response.tool_calls) > 0:
        tool_names = {tool.name: tool for tool in tools}
        for tool_call in response.tool_calls:
            tool_ = tool_names[tool_call["name"]]
            tool_input_fields = tool_.get_input_schema().model_json_schema()[
                "properties"
            ]
            if "state" in tool_input_fields:
                tool_call = {**tool_call, "args": {**tool_call["args"], "state": state}}
            tool_response = tool_.invoke(tool_call)
            results.append(tool_response)
        if len(results) > 0:
            return results
        else:
            return Command(goto='call_node', update={'messages': [AIMessage(content=str(results))]})
    return Command(update={'messages': [response]})


def ask_human_node(state: MessagesState) -> Command[Literal['call_node']]:
    last_message = state['messages'][-1]
    user_response = interrupt({
        'id': str(uuid.uuid4()),
        'request': last_message
    })
    if user_response:
        return Command(goto='call_node',
                       resume={'messages': [HumanMessage(content=user_response, name="User_Response")]},
                       update={'messages': [HumanMessage(content=user_response, name="User_Response")]})


builder = StateGraph(MessagesState)
builder.add_node('call_node', call_node)
builder.add_node('ask_human_node', ask_human_node)
builder.add_edge(START, 'call_node')
builder.add_edge('call_node', END)
```
### Error Message and Stack Trace (if applicable)
```shell
```
### Description
I'm trying to collect information from the user that needs to be passed to the Book Appointment API.
During graph execution, the interrupt() call inside ask_human_node is triggered the first time to request details from the user. After the user submits the details, if any required information is missing, ask_human_node attempts to gather the missing details with interrupt() again. However, at this point it does not pause execution and instead continues with the previously cached value.
As mentioned in the documentation, I know this is the default behavior, but I *request* that you let us know how to make interrupt() wait the second time as well.
<img width="868" alt="Image" src="https://github.com/user-attachments/assets/9874ede4-8b35-4c1b-9e11-148e54d9611f" />
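For context, the resume semantics can be modeled outside LangGraph: on resume the interrupted node re-runs from the top, earlier interrupt() calls replay their previously supplied answers, and the first unanswered call pauses again. Below is a minimal self-contained sketch of that behavior with illustrative names (`GraphInterrupt`, `run_node` are not LangGraph APIs); in real LangGraph code the resume value is supplied by re-invoking the graph with `Command(resume=...)`, not by returning a `Command` with a `resume` key from inside a node as the snippet above does.

```python
# Illustrative model of interrupt/resume: each resume value satisfies exactly
# one interrupt() call (by call order), so a fresh interrupt() pauses again.

class GraphInterrupt(Exception):
    """Raised to pause execution and surface a prompt to the caller."""
    def __init__(self, prompt):
        self.prompt = prompt

def run_node(node, resume_values):
    """Re-execute `node` from the top, replaying cached resume values into
    successive interrupt() calls; pause at the first uncached one."""
    counter = {"i": 0}
    def interrupt(prompt):
        i = counter["i"]
        counter["i"] += 1
        if i < len(resume_values):
            return resume_values[i]   # replayed from the cache
        raise GraphInterrupt(prompt)  # pause and wait for user input
    return node(interrupt)

def ask_human(interrupt):
    first = interrupt("first name?")   # pauses on run 1, replays on runs 2+
    email = interrupt("email?")        # pauses on run 2, replays on run 3
    return f"{first} <{email}>"

answers = iter(["Saisiva", "user@example.com"])
resumes, result = [], None
while result is None:
    try:
        result = run_node(ask_human, resumes)
    except GraphInterrupt as gi:
        resumes.append(next(answers))  # user answers; graph is "resumed"
```

With this model, `result` ends up as `Saisiva <user@example.com>` after two pauses, which is the behavior the real graph should show when each resume is delivered via `Command(resume=...)` at invocation time.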
### System Info
python -m langchain_core.sys_info
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3072/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3072/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3071
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3071/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3071/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3071/events
|
https://github.com/langchain-ai/langgraph/issues/3071
| 2,793,348,321
|
I_kwDOKFU0lc6mfxzh
| 3,071
|
Getting error when trying to stream events from prebuilt ReAct agent
|
{
"login": "nikhil23011996",
"id": 195336616,
"node_id": "U_kgDOC6SZqA",
"avatar_url": "https://avatars.githubusercontent.com/u/195336616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhil23011996",
"html_url": "https://github.com/nikhil23011996",
"followers_url": "https://api.github.com/users/nikhil23011996/followers",
"following_url": "https://api.github.com/users/nikhil23011996/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhil23011996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhil23011996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhil23011996/subscriptions",
"organizations_url": "https://api.github.com/users/nikhil23011996/orgs",
"repos_url": "https://api.github.com/users/nikhil23011996/repos",
"events_url": "https://api.github.com/users/nikhil23011996/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhil23011996/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-16T17:25:50
| 2025-01-31T14:59:32
| 2025-01-31T14:59:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from langgraph.prebuilt import create_react_agent
from langgraph.graph import MessagesState
from utils.tools_utils import tools
from utils.network_agent_utils import manage_memory
from llms.llms import llama_chatmodel_react
import utils.memory_checkpoint as memory


async def initialize_agent():
    checkpointer = await memory.get_checkpointer()
    # Create the agent with the initialized checkpointer
    langgraph_agent = create_react_agent(
        model=llama_chatmodel_react,
        tools=tools,
        checkpointer=checkpointer,  # Pass the actual checkpointer instance
        state_modifier=manage_memory
    )
    return langgraph_agent


async def process_chat_stream(message: str, thread_id: str):
    """
    Process chat messages using langgraph with streaming implementation that properly
    handles ToolMessage events and appends sources as markdown HTML anchor tags after
    the model stream completes.

    The function processes events in this sequence:
    1. Streams model content in real-time
    2. Collects sources from ToolMessage events during streaming
    3. Appends formatted source links after content streaming ends

    Args:
        message (str): The input message to process
        thread_id (str): Unique identifier for the chat thread

    Yields:
        str: Content chunks including model output and source information
    """
    langgraph_agent = await initialize_agent()

    # Initialize our source collection to gather sources during streaming
    sources = []
    is_streaming_complete = False

    async for event in langgraph_agent.astream_events(
        {"messages": [HumanMessage(content=message)]},
        {"recursion_limit": 10, "configurable": {"thread_id": thread_id}},
        version="v2"
    ):
        await process_chunks(event)
        event_type = event["event"]
        if event_type == "on_chat_model_stream":
            # Handle the main content streaming from the language model
            chunk = event["data"]["chunk"]
            content = chunk.content if hasattr(chunk, "content") else str(chunk)
            if content:
                # Stream the content chunk immediately
                yield content
        elif event_type == "on_tool_end":
            # Extract source information from ToolMessage
            tool_output = event["data"]["output"]
            # Handle ToolMessage specific format
            if isinstance(tool_output, ToolMessage):
                # Extract content from ToolMessage
                tool_content = tool_output.content
                # Process the content string to extract file information
                if isinstance(tool_content, str):
                    # Split content into lines, handling potential line breaks
                    lines = tool_content.split('\n')
                    current_file = None
                    current_url = None
                    for line in lines:
                        # Extract file name and URL using string operations
                        if line.startswith('file_name ::'):
                            current_file = line.replace('file_name ::', '').strip()
                        # elif line.startswith('s3_source_URL ::'):
                        #     current_url = line.replace('s3_source_URL ::', '').strip()
                        # When we have a file name, store it
                        if current_file:
                            source_tuple = (current_file)
                            if source_tuple not in sources:
                                sources.append(source_tuple)
                            # Reset for the next pair
                            current_file = None
                            current_url = None
        elif event_type == "on_chain_end":
            # Mark streaming as complete when the chain ends
            is_streaming_complete = True

    # After all streaming is complete, append sources if we have any
    if is_streaming_complete and sources:
        # Add formatting for the sources section
        yield "\n\n"  # Add visual separation
        yield "**Sources:**\n"  # Add header in bold
        # Generate and yield source links individually
        for file_name in sources:
            # Format each source as a markdown bullet point with an HTML anchor tag
            source_link = f'• <a href="{file_name}" target="_blank">{file_name}</a>\n'
            yield source_link
```
### Error Message and Stack Trace (if applicable)
```shell
| File "C:\Users\0047YN744\PycharmProjects\dish-ran-ym\routers\agentic_ran.py", line 129, in process_chat_stream
| async for event in langgraph_agent.astream_events(
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py", line 1386, in astream_events
| async for event in event_stream:
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\tracers\event_stream.py", line 1012, in _astream_events_implementation_v2
| await task
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\tracers\event_stream.py", line 967, in consume_astream
| async for _ in event_streamer.tap_output_aiter(run_id, stream):
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\tracers\event_stream.py", line 203, in tap_output_aiter
| async for chunk in output:
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\__init__.py", line 1878, in astream
| async for _ in runner.atick(
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\runner.py", line 362, in atick
| await arun_with_retry(
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\pregel\retry.py", line 127, in arun_with_retry
| async for _ in task.proc.astream(task.input, config):
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py", line 568, in astream
| async for chunk in aiterator:
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\tracers\event_stream.py", line 180, in tap_output_aiter
| first = await py_anext(output, default=sentinel)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\utils\aiter.py", line 76, in anext_impl
| return await __anext__(iterator)
| ^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py", line 1453, in atransform
| async for ichunk in input:
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py", line 1453, in atransform
| async for ichunk in input:
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py", line 1016, in astream
| yield await self.ainvoke(input, config, **kwargs)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\prebuilt\tool_node.py", line 253, in ainvoke
| return await super().ainvoke(input, config, **kwargs)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\utils\runnable.py", line 236, in ainvoke
| ret = await asyncio.create_task(coro, context=context)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\prebuilt\tool_node.py", line 267, in _afunc
| outputs = await asyncio.gather(
| ^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\prebuilt\tool_node.py", line 350, in _arun_one
| if invalid_tool_message := self._validate_tool_call(call):
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langgraph\prebuilt\tool_node.py", line 436, in _validate_tool_call
| return ToolMessage(
| ^^^^^^^^^^^^
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\messages\tool.py", line 139, in __init__
| super().__init__(content=content, **kwargs)
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\messages\base.py", line 76, in __init__
| super().__init__(content=content, **kwargs)
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\load\serializable.py", line 125, in __init__
| super().__init__(*args, **kwargs)
| File "C:\Users\0047YN744\AppData\Local\Programs\Python\Python311\Lib\site-packages\pydantic\main.py", line 212, in __init__
| validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| pydantic_core._pydantic_core.ValidationError: 1 validation error for ToolMessage
| tool_call_id
| Input should be a valid string [type=string_type, input_value=None, input_type=NoneType]
| For further information visit https://errors.pydantic.dev/2.9/v/string_type
| During task with name 'tools' and id '0ea8bd9b-7b55-a42d-658c-e9d8d69205e7'
+------------------------------------
```
### Description
I am using the LLM provider interface below:
`from langchain_ibm import ChatWatsonx`
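The traceback ends with `ToolMessage` validation rejecting `tool_call_id=None`, i.e. the model returned a tool call whose `id` field was missing. A hedged workaround sketch in plain Python (illustrative helper name, not a ChatWatsonx or LangGraph API) that backfills missing ids on the tool calls before they reach the tool node:

```python
# Backfill missing tool-call ids so downstream ToolMessage construction
# (which requires a string tool_call_id) does not fail validation.
import uuid

def backfill_tool_call_ids(tool_calls):
    """Return copies of tool-call dicts with any missing/None 'id'
    replaced by a fresh uuid string; originals are left untouched."""
    patched = []
    for call in tool_calls:
        call = dict(call)           # shallow copy; do not mutate the message
        if not call.get("id"):
            call["id"] = str(uuid.uuid4())
        patched.append(call)
    return patched

# Example: a tool call as emitted without an id
calls = [{"name": "search_tool", "args": {"query": "ran status"}, "id": None}]
patched = backfill_tool_call_ids(calls)
```

In a real graph this would run on `response.tool_calls` of the AIMessage before the `tools` node executes; whether the missing id is a ChatWatsonx bug or a parsing issue is worth confirming separately.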
### System Info
langgraph==0.2.60
langchain==0.3.14
langchain_ibm==0.3.5
ibm_cloud_sdk_core==3.22.0
psycopg==3.2.3
psycopg_pool==3.2.4
psycopg-binary==3.2.3
langgraph-checkpoint==2.0.9
langgraph-checkpoint-postgres==2.0.9
langgraph-sdk==0.1.48
psycopg2-binary==2.9.10
langchain_community==0.3.14
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3071/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3068
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3068/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3068/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3068/events
|
https://github.com/langchain-ai/langgraph/issues/3068
| 2,793,176,065
|
I_kwDOKFU0lc6mfHwB
| 3,068
|
DOCS: Document checkpointer=True mode
|
{
"login": "eyurtsev",
"id": 3205522,
"node_id": "MDQ6VXNlcjMyMDU1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3205522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eyurtsev",
"html_url": "https://github.com/eyurtsev",
"followers_url": "https://api.github.com/users/eyurtsev/followers",
"following_url": "https://api.github.com/users/eyurtsev/following{/other_user}",
"gists_url": "https://api.github.com/users/eyurtsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eyurtsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eyurtsev/subscriptions",
"organizations_url": "https://api.github.com/users/eyurtsev/orgs",
"repos_url": "https://api.github.com/users/eyurtsev/repos",
"events_url": "https://api.github.com/users/eyurtsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/eyurtsev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 8005779783,
"node_id": "LA_kwDOKFU0lc8AAAAB3S6BRw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/maintainer",
"name": "maintainer",
"color": "C5DEF5",
"default": false,
"description": "Issue created by library maintainer"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-16T16:06:59
| 2025-01-16T16:09:48
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Privileged issue
- [x] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
https://github.com/langchain-ai/langgraph/pull/3055
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3068/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3062
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3062/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3062/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3062/events
|
https://github.com/langchain-ai/langgraph/issues/3062
| 2,792,144,823
|
I_kwDOKFU0lc6mbL-3
| 3,062
|
Getting duplicated output from agent tools in LangGraph
|
{
"login": "akashAD98",
"id": 62583018,
"node_id": "MDQ6VXNlcjYyNTgzMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/62583018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashAD98",
"html_url": "https://github.com/akashAD98",
"followers_url": "https://api.github.com/users/akashAD98/followers",
"following_url": "https://api.github.com/users/akashAD98/following{/other_user}",
"gists_url": "https://api.github.com/users/akashAD98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashAD98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashAD98/subscriptions",
"organizations_url": "https://api.github.com/users/akashAD98/orgs",
"repos_url": "https://api.github.com/users/akashAD98/repos",
"events_url": "https://api.github.com/users/akashAD98/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashAD98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292643,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwIw",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/invalid",
"name": "invalid",
"color": "e4e669",
"default": true,
"description": "This doesn't seem right"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-16T09:20:44
| 2025-02-01T16:23:02
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
# I have two chains, a Bing chain and a GK chain (built with LangChain),
# that I call inside tools.
all_tools = [image_tool, bing_tool, gk_tool]


def setup_langgraph_workflow(self):
    """Set up the langgraph workflow."""
    logger.info("lanngraph workflow has started..")
    # Define the tools for langgraph
    self.tools = self.define_tools()
    tool_node = ToolNode(self.tools)
    llm_graph = self.main_llm
    # Bind tools to LLM
    model_with_tools = llm_graph.bind_tools(self.tools)
    # Create the workflow
    workflow = StateGraph(MessagesState)

    # Define the function that determines whether to continue or not
    def should_continue(state: MessagesState):
        print("should continue using")
        messages = state["messages"]
        last_message = messages[-1]
        # If there is no function call, then we finish
        if not last_message.tool_calls:
            return END
        # Otherwise if there is, we continue
        else:
            return "tools"

    def call_model(state: MessagesState):
        messages = state["messages"]
        response = model_with_tools.invoke(messages)
        return {"messages": [response]}

    def call_tool(state: MessagesState):
        tools_by_name = {
            tool.name: tool for tool in self.define_tools()
        }  # Dynamically map tools by name
        messages = state["messages"]
        last_message = messages[-1]
        output_messages = []
        for tool_call in last_message.tool_calls:
            try:
                tool_result = tools_by_name[tool_call["name"]].invoke(
                    tool_call["args"]
                )
                output_messages.append(
                    ToolMessage(
                        content=json.dumps(tool_result),
                        name=tool_call["name"],
                        tool_call_id=tool_call["id"],
                    )
                )
            except Exception as e:
                # Return the error if the tool call fails
                output_messages.append(
                    ToolMessage(
                        content="",
                        name=tool_call["name"],
                        tool_call_id=tool_call["id"],
                        additional_kwargs={"error": e},
                    )
                )
        return {"messages": output_messages}

    def should_fallback(
        state: MessagesState,
    ) -> Literal["agent", "remove_failed_tool_call_attempt"]:
        messages = state["messages"]
        failed_tool_messages = [
            msg
            for msg in messages
            if isinstance(msg, ToolMessage)
            and msg.additional_kwargs.get("error") is not None
        ]
        if failed_tool_messages:
            return "remove_failed_tool_call_attempt"
        return "agent"

    def remove_failed_tool_call_attempt(state: MessagesState):
        messages = state["messages"]
        # Remove everything from the last instance of AIMessage onwards.
        last_ai_message_index = next(
            i
            for i, msg in reversed(list(enumerate(messages)))
            if isinstance(msg, AIMessage)
        )
        messages_to_remove = messages[last_ai_message_index:]
        return {"messages": [RemoveMessage(id=m.id) for m in messages_to_remove]}

    # Fallback to a better model if a tool call fails
    def call_fallback_model(state: MessagesState):
        messages = state["messages"]
        response = model_with_tools.invoke(messages)
        return {"messages": [response]}

    workflow.add_node("agent", call_model)
    workflow.add_node("tools", call_tool)
    workflow.add_node(
        "remove_failed_tool_call_attempt", remove_failed_tool_call_attempt
    )
    workflow.add_node("fallback_agent", call_fallback_model)
    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, ["tools", END])
    workflow.add_conditional_edges("tools", should_fallback)
    workflow.add_edge("remove_failed_tool_call_attempt", "fallback_agent")
    workflow.add_edge("fallback_agent", "tools")

    # Ensure Redis connection
    self.ensure_connection()
    with RedisSaver.from_conn_url(REDIS_URL) as checkpointer:
        self.checkpointer = checkpointer
        self.workflow_app = workflow.compile(checkpointer=self.checkpointer)


def store_thread_id(self, session_id: str, thread_id: str):
    """Store the thread_id in Redis for a given session_id."""
    redis_key = f"langgraph_thread_id_{session_id}"
    try:
        r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
        r.set(redis_key, thread_id)
    except Exception as e:
        logger.error(f"Error saving thread ID to Redis: {e}")


def get_thread_id(self, session_id: str) -> Optional[str]:
    """Retrieve the thread_id from Redis for a given session_id."""
    redis_key = f"langgraph_thread_id_{session_id}"
    try:
        r = redis.Redis.from_url(url=REDIS_URL, decode_responses=True)
        print("redis used dad")
        return r.get(redis_key)
    except Exception as e:
        logger.error(f"Error retrieving thread ID from Redis: {e}")
        return None


def get_answer_from_langgraph(self, query: str, session_id: str) -> str:
    """Fetch answer using LangGraph with streaming support."""
    logger.info("Fetching answer using LangGraph with streaming")
    try:
        if not self.workflow_app:
            raise ValueError("LangGraph workflow is not initialized.")
        thread_id = self.get_thread_id(self.session_id)
        if not thread_id:
            thread_id = f"thread-{session_id}"
            self.store_thread_id(session_id, thread_id)
        logger.info(f"Using thread ID: {thread_id}")
        config = {"configurable": {"thread_id": thread_id}}
        logger.debug(f"Workflow config: {config}")
        inputs = [("user", query)]
        first = True
        gathered_response = None
        # Stream tokens as they are generated
        for msg, metadata in self.workflow_app.stream({"messages": inputs}, config, stream_mode="messages"):
            if msg.content and not isinstance(msg, HumanMessage):
                # Print to console for visibility (optional)
                print(msg.content, end="", flush=True)
            # Gather the AI message chunks
            if isinstance(msg, AIMessageChunk):
                if first:
                    gathered_response = msg
                    first = False
                    print("first msg firsts", gathered_response)
                else:
                    gathered_response = gathered_response + msg
                # Handle tool calls if present in the chunk
                if msg.tool_call_chunks:
                    print(gathered_response.tool_calls)
        # Finalize and return the full response
        if gathered_response:
            print("gathered responses is", gathered_response)
            return gathered_response.content
        else:
            logger.warning("No AI response received from LangGraph")
            return "No response received from LangGraph."
    except Exception as e:
        logger.error(f"Error in LangGraph response fetching: {e}")
        return "An error occurred while processing your request."
```
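One likely source of the duplicated output: with `stream_mode="messages"` every LLM call in the graph streams through the loop, including the LLMs invoked inside the Bing/GK tool chains, so printing every non-human chunk and also returning the gathered text can surface the same answer twice. A self-contained sketch (the stream tuples below are faked for illustration; `langgraph_node` is the metadata key LangGraph attaches to each streamed message) of filtering to the agent node's own tokens:

```python
# Keep only tokens emitted by the agent node itself; chunks produced by
# LLMs running inside tool nodes are skipped instead of being re-printed.
def collect_agent_tokens(stream, agent_node="agent"):
    parts = []
    for content, metadata in stream:
        if metadata.get("langgraph_node") == agent_node and content:
            parts.append(content)
    return "".join(parts)

# Faked stream: the "tools" chunk echoes the same text a tool-internal
# LLM produced, which is what causes the visible duplication.
fake_stream = [
    ("Hello", {"langgraph_node": "agent"}),
    ("Hello", {"langgraph_node": "tools"}),
    (" world", {"langgraph_node": "agent"}),
]
final_text = collect_agent_tokens(fake_stream)
```

In the real loop the same filter would be `if metadata.get("langgraph_node") == "agent"` applied to each `(msg, metadata)` pair before printing or gathering.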
### Error Message and Stack Trace (if applicable)
```shell
answer_helper.py : __init__ : 217 : Main LLM is callbacks=[<__main__.StreamHandler object at 0x75fb1835d930>] client=<openai.resources.chat.completions.Completions object at 0x75fa8816cb80> async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x75fa8816ec50> root_client=<openai.OpenAI object at 0x75fa88156aa0> root_async_client=<openai.AsyncOpenAI object at 0x75fa8816cbe0> model_name='gpt-4o' temperature=0.01 model_kwargs={} openai_api_key=SecretStr('**********') openai_organization='org-RK0NQC1sHHggOsWSbEz4HqjF' streaming=True max_tokens=4096
answer_helper.py : __init__ : 253 : Initializing LangGraph workflow for Universal AI ok .
answer_helper.py : setup_langgraph_workflow: 651 : lanngraph workflow has started..
answer_helper.py : __init__ : 256 : LangGraph workflow initialized successfully.
we here in universal ai
answer_helper.py : get_answer_from_langgraph: 981 : Fetching answer using LangGraph with streaming
: answer_helper.py : get_answer_from_langgraph: 998 : Workflow config: {'configurable': {'thread_id': 'thread-26669966', 'recursion_limit': 2}}
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
should continue using
get_bing_chains.py : get_answer : 47 : Executing get_answer with query: latest world news June 2025
get_bing_chains.py : get_documents : 56 : get documents from bing search api
get_bing_chains.py : get_documents : 67 : time taken for get results from solr: 1.4878 seconds
get_bing_chains.py : prepare_final_response: 94 : Generating answer from Bing search API and LLM
NG: get_bing_chains.py : prepare_final_response: 125 : Memory is not initialized. Using empty chat history.
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
get_bing_chains.py : prepare_final_response: 156 : Time taken in LLM (bing_chain): 9.8183 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
should continue using
answer_helper.py : compose_final_answer: 1230: compose_final_answer -> {'text': "# Latest World News - June 2025\n\nTo provide you with the most recent and relevant updates from around the globe, here are some key highlights from June 2025:\n\n## Middle East\n- **Israel-Hamas Ceasefire**: A ceasefire agreement has been confirmed between Israel and Hamas, facilitated by the US and Qatar. This includes the release of some hostages, marking a significant development in the ongoing conflict in the Gaza Strip. [Source: AP News](https://apnews.com/world-news)\n\n## Europe\n- **Prince William's Activities**: Prince William, the Prince of Wales, has been actively participating in public events, including attending the Together At Christmas carol service at Westminster Abbey. [Source: Hindustan Times](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-9-2025-101736365588453.html)\n\n## United States\n- **California Wildfires**: Governor Gavin Newsom has been addressing the aftermath of wildfires in Pacific Palisades, working closely with CalFire and local officials to assess and manage the damage. [Source: Hindustan Times](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-10-2025-101736449130295.html)\n\n## Global Economy\n- **Russia's Economic Forecast**: The Russian central bank has projected economic growth of 0.5 to 1.5 percent for 2025, a decrease from the previous year's growth rate, indicating a slowdown following a wartime economic boom. [Source: CFR](https://www.cfr.org/article/what-were-watching-around-globe-2025)\n\nThese updates provide a snapshot of significant global events and developments as of June 2025. 
For more detailed information, you can explore the provided sources.Here are some key highlights from the latest world news in June 2025:\n\n### Middle East\n- **Israel-Hamas Ceasefire**: A ceasefire agreement has been confirmed between Israel and Hamas, facilitated by the US and Qatar. This includes the release of some hostages, marking a significant development in the ongoing conflict in the Gaza Strip. [Read more](https://apnews.com/world-news)\n\n### Europe\n- **Prince William's Activities**: Prince William, the Prince of Wales, has been actively participating in public events, including attending the Together At Christmas carol service at Westminster Abbey. [Read more](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-9-2025-101736365588453.html)\n\n### United States\n- **California Wildfires**: Governor Gavin Newsom has been addressing the aftermath of wildfires in Pacific Palisades, working closely with CalFire and local officials to assess and manage the damage. [Read more](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-10-2025-101736449130295.html)\n\n### Global Economy\n- **Russia's Economic Forecast**: The Russian central bank has projected economic growth of 0.5 to 1.5 percent for 2025, a decrease from the previous year's growth rate, indicating a slowdown following a wartime economic boom. [Read more](https://www.cfr.org/article/what-were-watching-around-globe-2025)\n\nThese updates provide a snapshot of significant global events and developments as of June 2025.", 'found_an_answer': True, 'related_articles': [], 'llm_error_occurred': False}
# Latest World News - June 2025
To provide you with the most recent and relevant updates from around the globe, here are some key highlights from June 2025:
## Middle East
- **Israel-Hamas Ceasefire**: A ceasefire agreement has been confirmed between Israel and Hamas, facilitated by the US and Qatar. This includes the release of some hostages, marking a significant development in the ongoing conflict in the Gaza Strip. [Source: AP News](https://apnews.com/world-news)
## Europe
- **Prince William's Activities**: Prince William, the Prince of Wales, has been actively participating in public events, including attending the Together At Christmas carol service at Westminster Abbey. [Source: Hindustan Times](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-9-2025-101736365588453.html)
## United States
- **California Wildfires**: Governor Gavin Newsom has been addressing the aftermath of wildfires in Pacific Palisades, working closely with CalFire and local officials to assess and manage the damage. [Source: Hindustan Times](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-10-2025-101736449130295.html)
## Global Economy
- **Russia's Economic Forecast**: The Russian central bank has projected economic growth of 0.5 to 1.5 percent for 2025, a decrease from the previous year's growth rate, indicating a slowdown following a wartime economic boom. [Source: CFR](https://www.cfr.org/article/what-were-watching-around-globe-2025)
These updates provide a snapshot of significant global events and developments as of June 2025. For more detailed information, you can explore the provided sources.Here are some key highlights from the latest world news in June 2025:
### Middle East
- **Israel-Hamas Ceasefire**: A ceasefire agreement has been confirmed between Israel and Hamas, facilitated by the US and Qatar. This includes the release of some hostages, marking a significant development in the ongoing conflict in the Gaza Strip. [Read more](https://apnews.com/world-news)
### Europe
- **Prince William's Activities**: Prince William, the Prince of Wales, has been actively participating in public events, including attending the Together At Christmas carol service at Westminster Abbey. [Read more](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-9-2025-101736365588453.html)
### United States
- **California Wildfires**: Governor Gavin Newsom has been addressing the aftermath of wildfires in Pacific Palisades, working closely with CalFire and local officials to assess and manage the damage. [Read more](https://www.hindustantimes.com/world-news/world-news-live-latest-updates-on-politics-economy-conflicts-climate-change-today-january-10-2025-101736449130295.html)
### Global Economy
- **Russia's Economic Forecast**: The Russian central bank has projected economic growth of 0.5 to 1.5 percent for 2025, a decrease from the previous year's growth rate, indicating a slowdown following a wartime economic boom. [Read more](https://www.cfr.org/article/what-were-watching-around-globe-2025)
These updates provide a snapshot of significant global events and developments as of June 2025.
```
### Description
I'm building a LangGraph-based multi-agent system, and the output I get is sometimes doubled, so I'm not sure where the issue is.
Is the problem in the agents' streaming or in the logic? Why is it producing double output?
### System Info
Ubuntu 22.04, local system
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3062/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3059
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3059/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3059/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3059/events
|
https://github.com/langchain-ai/langgraph/issues/3059
| 2,791,568,431
|
I_kwDOKFU0lc6mY_Qv
| 3,059
|
Unable to run langgraph docker container when using existing postgres db and redis
|
{
"login": "magallardo",
"id": 4683095,
"node_id": "MDQ6VXNlcjQ2ODMwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4683095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/magallardo",
"html_url": "https://github.com/magallardo",
"followers_url": "https://api.github.com/users/magallardo/followers",
"following_url": "https://api.github.com/users/magallardo/following{/other_user}",
"gists_url": "https://api.github.com/users/magallardo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/magallardo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/magallardo/subscriptions",
"organizations_url": "https://api.github.com/users/magallardo/orgs",
"repos_url": "https://api.github.com/users/magallardo/repos",
"events_url": "https://api.github.com/users/magallardo/events{/privacy}",
"received_events_url": "https://api.github.com/users/magallardo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-16T02:51:01
| 2025-02-23T12:46:19
| 2025-01-17T17:02:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
docker run \
--env-file .env \
-p 8123:8000 \
-e REDIS_URI="redacted" \
-e DATABASE_URI="redacted" \
-e LANGSMITH_API_KEY="redacted" \
my-image
```
### Error Message and Stack Trace (if applicable)
```shell
2025-01-15 21:38:28 2025-01-16T02:38:28.085675Z [info ] Using auth of type=noop [langgraph_api.auth.middleware] api_revision=939d51f api_variant=local
2025-01-15 21:38:28 2025-01-16T02:38:28.087168Z [info ] Started server process [1] [uvicorn.error] api_revision=939d51f api_variant=local color_message=Started server process [%d]
2025-01-15 21:38:28 2025-01-16T02:38:28.087305Z [info ] Waiting for application startup. [uvicorn.error] api_revision=939d51f api_variant=local
2025-01-15 21:38:28 2025-01-16T02:38:28.087515Z [warning ] No license key found, running in test mode with LangSmith API key. For production use, set LANGGRAPH_CLOUD_LICENSE_KEY in environment. [langgraph_license.validation] api_revision=939d51f api_variant=local
2025-01-15 21:38:28 2025-01-16T02:38:28.282471Z [info ] HTTP Request: GET https://api.smith.langchain.com/auth?langgraph-api=true "HTTP/1.1 200 OK" [httpx] api_revision=939d51f api_variant=local
2025-01-15 21:38:28 2025-01-16T02:38:28.327079Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:28 This probably means the server terminated abnormally
2025-01-15 21:38:28 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:29 2025-01-16T02:38:29.298065Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:29 This probably means the server terminated abnormally
2025-01-15 21:38:29 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:31 2025-01-16T02:38:31.233615Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:31 This probably means the server terminated abnormally
2025-01-15 21:38:31 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:35 2025-01-16T02:38:35.103750Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:35 This probably means the server terminated abnormally
2025-01-15 21:38:35 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:42 2025-01-16T02:38:42.837522Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:42 This probably means the server terminated abnormally
2025-01-15 21:38:42 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:58 2025-01-16T02:38:58.299583Z [warning ] error connecting in 'pool-1': connection failed: connection to server at "192.168.1.95", port 5432 failed: server closed the connection unexpectedly
2025-01-15 21:38:58 This probably means the server terminated abnormally
2025-01-15 21:38:58 before or while processing the request. [psycopg.pool] api_revision=939d51f api_variant=local
2025-01-15 21:38:58 2025-01-16T02:38:58.325300Z [error ] Traceback (most recent call last):
2025-01-15 21:38:58 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 693, in lifespan
2025-01-15 21:38:58 async with self.lifespan_context(app) as maybe_state:
2025-01-15 21:38:58 File "/usr/local/lib/python3.11/contextlib.py", line 210, in __aenter__
2025-01-15 21:38:58 return await anext(self.gen)
2025-01-15 21:38:58 ^^^^^^^^^^^^^^^^^^^^^
2025-01-15 21:38:58 File "/api/langgraph_api/lifespan.py", line 29, in lifespan
2025-01-15 21:38:58 File "/usr/local/lib/python3.11/site-packages/langgraph_storage/database.py", line 146, in start_pool
2025-01-15 21:38:58 await _pg_pool.open(wait=True)
2025-01-15 21:38:58 File "/usr/local/lib/python3.11/site-packages/psycopg_pool/pool_async.py", line 387, in open
2025-01-15 21:38:58 await self.wait(timeout=timeout)
2025-01-15 21:38:58 File "/usr/local/lib/python3.11/site-packages/psycopg_pool/pool_async.py", line 174, in wait
2025-01-15 21:38:58 raise PoolTimeout(f"pool initialization incomplete after {timeout} sec")
2025-01-15 21:38:58 psycopg_pool.PoolTimeout: pool initialization incomplete after 30.0 sec
2025-01-15 21:38:58 [uvicorn.error] api_revision=939d51f api_variant=local
2025-01-15 21:38:58 2025-01-16T02:38:58.325529Z [error ] Application startup failed. Exiting. [uvicorn.error] api_revision=939d51f api_variant=local
```
### Description
I am trying to run langgraph against existing postgres and redis containers. The redis and database containers start up fine and are ready to receive connections. However, when I run the langgraph server, it fails to connect to the database.
The database and redis uris passed to the docker run command are:
postgres://postgres:postgres@192.168.1.95:5432/postgres?sslmode=disable
redis://192.168.1.95:6379
Note that the database and redis are started from a separate docker compose and both containers are running fine.
I have tested running the database, redis and langgraph from the same docker-compose file and that seems to work ok.
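A quick way to separate a networking problem from a Postgres problem is to probe the connection target from inside the langgraph container. A hedged diagnostic sketch (not part of langgraph; the URI mirrors the one above — from inside a container, a host-LAN IP like 192.168.1.95 may not be routable unless both containers share a network):

```python
# Hypothetical in-container diagnostic: parse the DATABASE_URI and probe
# raw TCP reachability of the Postgres host before blaming the server.
import socket
from urllib.parse import urlparse

uri = "postgres://postgres:postgres@192.168.1.95:5432/postgres?sslmode=disable"
parts = urlparse(uri)
print(parts.hostname, parts.port)

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `tcp_reachable(parts.hostname, parts.port)` returns False from inside the container, the pool timeout is a networking issue (e.g. separate docker-compose networks), not a Postgres failure.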
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.3.29
> langsmith: 0.2.10
> langchain_openai: 0.3.0
> langgraph_cli: 0.1.67
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> click: 8.1.8
> httpx: 0.28.1
> jsonpatch: 1.33
> langgraph-api: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> python-dotenv: Installed. No version info available.
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3059/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3020
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3020/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3020/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3020/events
|
https://github.com/langchain-ai/langgraph/issues/3020
| 2,788,080,818
|
I_kwDOKFU0lc6mLryy
| 3,020
|
Subgraph forgets its state of the first run when it is invoked the second time in a parent graph
|
{
"login": "shengbo-ma",
"id": 44483080,
"node_id": "MDQ6VXNlcjQ0NDgzMDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/44483080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shengbo-ma",
"html_url": "https://github.com/shengbo-ma",
"followers_url": "https://api.github.com/users/shengbo-ma/followers",
"following_url": "https://api.github.com/users/shengbo-ma/following{/other_user}",
"gists_url": "https://api.github.com/users/shengbo-ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shengbo-ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shengbo-ma/subscriptions",
"organizations_url": "https://api.github.com/users/shengbo-ma/orgs",
"repos_url": "https://api.github.com/users/shengbo-ma/repos",
"events_url": "https://api.github.com/users/shengbo-ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/shengbo-ma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-14T19:06:05
| 2025-02-12T10:13:19
| 2025-01-16T21:45:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [x] I added a clear and detailed title that summarizes the issue.
- [x] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [x] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Description
> [!NOTE]
This issue has updates in comments, for TL;DR, see [this summary comment](https://github.com/langchain-ai/langgraph/issues/3020#issuecomment-2593830500).
I am building a nested graph multi-agent system for question and answer style chat service.
In this scenario, I run the same parent graph multiple times, since each new follow-up question triggers a new graph run. Each follow-up question is based on the context of the previous runs.
I observed this bug:
- The parent graph works well, and all state from previous runs is there. However, the subgraph completely forgets the previous run: its agent state is flushed and it starts from empty.
**LangGraph Version**
langgraph: 0.2.61
### Example Code
```python
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from rich import get_console
from typing_extensions import TypedDict
class SubGraphState(TypedDict, total=False):
run_idx: int
sub_counter: int
def subgraph_accumulator(state: SubGraphState) -> SubGraphState:
get_console().print("---subgraph counter node---")
get_console().print(f"{state = }")
sub_counter = state["sub_counter"] + 1 if "sub_counter" in state else 0
return {"sub_counter": sub_counter}
sub_graph = (
StateGraph(SubGraphState)
.add_node(subgraph_accumulator)
.add_edge(START, subgraph_accumulator.__name__)
.add_edge(subgraph_accumulator.__name__, END)
.compile()
)
sub_graph.name = "sub"
class ParentGraphState(TypedDict, total=False):
run_idx: int
parent_counter: int
def parent_graph_accumulator(state: ParentGraphState) -> ParentGraphState:
print("---parent counter node---")
get_console().print(f"{state = }")
parent_counter = state["parent_counter"] + 1 if "parent_counter" in state else 0
return {
"parent_counter": parent_counter,
}
parent_agent = (
StateGraph(ParentGraphState)
.add_node(parent_graph_accumulator)
.add_node(sub_graph)
.add_edge(START, parent_graph_accumulator.__name__)
.add_edge(parent_graph_accumulator.__name__, sub_graph.get_name())
.add_edge(sub_graph.get_name(), END)
.compile(checkpointer=MemorySaver())
)
config: RunnableConfig = {"configurable": {"thread_id": "42"}}
for i in range(3):
print(f"---Run Index {i}---")
for event in parent_agent.stream(
ParentGraphState(run_idx=i),
config,
stream_mode="values",
subgraphs=True,
):
print(event)
```
```mermaid
%%{init: {'flowchart': {'curve': 'linear'}}}%%
graph TD;
__start__([<p>__start__</p>]):::first
parent_graph_accumulator(parent_graph_accumulator)
sub(sub)
__end__([<p>__end__</p>]):::last
__start__ --> parent_graph_accumulator;
parent_graph_accumulator --> sub;
sub --> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```
### Error Message and Stack Trace (if applicable)
No error thrown. Below is the output of the example code.
The parent graph counter increases correctly with the run index, while the subgraph counter `sub_counter` is reset to 0 on each follow-up run. The subgraph counter should increase along with `run_idx`.
```shell
---Run Index 0---
((), {'run_idx': 0})
---parent counter node---
state = {'run_idx': 0}
((), {'run_idx': 0, 'parent_counter': 0})
(('sub:45ab0cd7-e60d-1a76-fdec-c4a0b972639f',), {'run_idx': 0})
---subgraph counter node---
state = {'run_idx': 0}
(('sub:45ab0cd7-e60d-1a76-fdec-c4a0b972639f',), {'run_idx': 0, 'sub_counter': 0})
((), {'run_idx': 0, 'parent_counter': 0})
---Run Index 1---
((), {'run_idx': 1, 'parent_counter': 0})
---parent counter node---
state = {'run_idx': 1, 'parent_counter': 0}
((), {'run_idx': 1, 'parent_counter': 1})
(('sub:ad3a2d91-7d1c-b791-f8a1-3e6213226d3c',), {'run_idx': 1})
---subgraph counter node---
state = {'run_idx': 1}
(('sub:ad3a2d91-7d1c-b791-f8a1-3e6213226d3c',), {'run_idx': 1, 'sub_counter': 0})
((), {'run_idx': 1, 'parent_counter': 1})
---Run Index 2---
((), {'run_idx': 2, 'parent_counter': 1})
---parent counter node---
state = {'run_idx': 2, 'parent_counter': 1}
((), {'run_idx': 2, 'parent_counter': 2})
---subgraph counter node---
(('sub:21723217-8cf8-4856-863a-8c8f9b9b9351',), {'run_idx': 2})
state = {'run_idx': 2}
(('sub:21723217-8cf8-4856-863a-8c8f9b9b9351',), {'run_idx': 2, 'sub_counter': 0})
((), {'run_idx': 2, 'parent_counter': 2})
```
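One workaround (a plain-Python sketch of the pattern, not the LangGraph fix itself): declare the counter in the parent state schema as well, so the parent's checkpointer persists it across runs, and have the subgraph node read the carried-over value instead of relying on subgraph-local checkpoints. The names below are illustrative:

```python
# Hypothetical sketch: state shared between parent and subgraph schemas
# survives across runs because the parent checkpointer persists it.
from typing import TypedDict

class SharedState(TypedDict, total=False):
    sub_counter: int

def subgraph_accumulator(state: SharedState) -> SharedState:
    # read the previous value from the (parent-persisted) state
    return {"sub_counter": state.get("sub_counter", -1) + 1}

state: SharedState = {}
for run_idx in range(3):
    state.update(subgraph_accumulator(state))
print(state["sub_counter"])  # increases across runs instead of resetting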
### System Info
System Information
------------------
> OS: Linux
> OS Version: #135~20.04.1-Ubuntu SMP Mon Oct 7 13:56:22 UTC 2024
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.11
> langsmith: 0.2.10
> langchain_openai: 0.2.12
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.3
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3020/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3020/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3005
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3005/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3005/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3005/events
|
https://github.com/langchain-ai/langgraph/issues/3005
| 2,783,305,167
|
I_kwDOKFU0lc6l5d3P
| 3,005
|
DOC: API reference doesn't mention the “messages” stream mode in stream_mode definition.
|
{
"login": "habib-source",
"id": 179967683,
"node_id": "U_kgDOCroWww",
"avatar_url": "https://avatars.githubusercontent.com/u/179967683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/habib-source",
"html_url": "https://github.com/habib-source",
"followers_url": "https://api.github.com/users/habib-source/followers",
"following_url": "https://api.github.com/users/habib-source/following{/other_user}",
"gists_url": "https://api.github.com/users/habib-source/gists{/gist_id}",
"starred_url": "https://api.github.com/users/habib-source/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/habib-source/subscriptions",
"organizations_url": "https://api.github.com/users/habib-source/orgs",
"repos_url": "https://api.github.com/users/habib-source/repos",
"events_url": "https://api.github.com/users/habib-source/events{/privacy}",
"received_events_url": "https://api.github.com/users/habib-source/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292616,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwCA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
open
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-13T08:21:16
| 2025-01-14T00:41:40
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Issue with current documentation:
stream_mode (Optional[Union[StreamMode, list[StreamMode]]], default: None ) – The mode to stream output, defaults to self.stream_mode. Options are 'values', 'updates', and 'debug'. values: Emit the current values of the state for each step. updates: Emit only the updates to the state for each step. Output is a dict with the node name as key and the updated values as value. debug: Emit debug events for each step.
For example, this is the definition of stream_mode in the async method under CompiledStateGraph [url](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.state.CompiledStateGraph.astream), which mentions 'values', 'updates', and 'debug' but not 'messages'. I thought it had been deprecated or something, but I tried it out and it works.
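For context, the semantics the docs describe can be sketched with a toy dispatcher (plain Python, not LangGraph internals; all names are illustrative):

```python
# Hypothetical model of the documented stream modes: "values" emits the
# full state snapshot per step, "updates" emits only {node: update}, and
# "messages" would emit message chunks from each update.
def toy_stream(steps, mode="values"):
    state = {}
    for node_name, update in steps:
        state.update(update)
        if mode == "updates":
            yield {node_name: update}
        elif mode == "messages":
            yield update.get("messages", [])
        else:  # "values"
            yield dict(state)

steps = [("a", {"x": 1}), ("b", {"y": 2})]
print(list(toy_stream(steps, mode="updates")))
print(list(toy_stream(steps, mode="values")))
```

The point of the docs request is that 'messages' belongs in this enumeration alongside the other three modes.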
### Idea or request for content:
Add the 'messages' stream mode and its definition to the stream_mode documentation.
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3005/timeline
| null | null | false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3003
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3003/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3003/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3003/events
|
https://github.com/langchain-ai/langgraph/issues/3003
| 2,783,080,314
|
I_kwDOKFU0lc6l4m96
| 3,003
|
Update_State not working well during interruption while using Command function to navigate graph but works well with add_edge functionality.
|
{
"login": "DAKSH1-HUB",
"id": 81084807,
"node_id": "MDQ6VXNlcjgxMDg0ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/81084807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DAKSH1-HUB",
"html_url": "https://github.com/DAKSH1-HUB",
"followers_url": "https://api.github.com/users/DAKSH1-HUB/followers",
"following_url": "https://api.github.com/users/DAKSH1-HUB/following{/other_user}",
"gists_url": "https://api.github.com/users/DAKSH1-HUB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DAKSH1-HUB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DAKSH1-HUB/subscriptions",
"organizations_url": "https://api.github.com/users/DAKSH1-HUB/orgs",
"repos_url": "https://api.github.com/users/DAKSH1-HUB/repos",
"events_url": "https://api.github.com/users/DAKSH1-HUB/events{/privacy}",
"received_events_url": "https://api.github.com/users/DAKSH1-HUB/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 6658011276,
"node_id": "LA_kwDOKFU0lc8AAAABjNkwjA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/investigate",
"name": "investigate",
"color": "375C45",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-13T05:57:41
| 2025-01-16T14:37:18
| 2025-01-16T14:37:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from typing_extensions import TypedDict, Literal

from IPython.display import Image, display
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command
from langgraph.checkpoint.memory import MemorySaver

model = ChatOpenAI(model="gpt-4o-mini")


class State(TypedDict):
    input: str


def step_1(state: State) -> Command[Literal["step_2"]]:
    print("---Step 1---")
    return Command(goto="step_2", update={"input": "step1"})


def step_2(state: State) -> Command[Literal["step_3"]]:
    print("---Step 2---")
    return Command(goto="step_3", update={"input": "step2"})


def step_3(state: State) -> Command[Literal[END]]:
    print("---Step 3---")
    return Command(goto=END, update={"input": "step3"})


builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")

# Set up memory
memory = MemorySaver()

graph = builder.compile(checkpointer=memory, interrupt_before=["step_2"])

# View
display(Image(graph.get_graph().draw_mermaid_png()))

initial_input = {"input": "hello world"}

# Thread
thread = {"configurable": {"thread_id": "1"}}

graph.invoke(initial_input, thread)
graph.get_state(thread).next  # will have some value
graph.update_state(thread, {"input": "outer_input"})
graph.get_state(thread).next  # will be empty -- issue
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hi Harrison
I hope you are doing well. I came across a problem while using LangGraph, so I'll jump straight into it.
I made a few nodes for my project and used interrupt_before to pause execution for human input so a node's state can be edited. I also used Command's goto, rather than add_edge, to jump from one node to another. When I update the state between two nodes while the graph is in an interrupted state, `graph.get_state(thread).next` comes back empty.
When I use add_edge to navigate the graph and update the state while interrupted, it works fine. Most likely `Command(goto="some_node", update=...)` determines the next node dynamically, and update_state does not provide or recompute it. If there is any way to tell the graph what its next node should be when using the update_state function, please let me know.
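The static-vs-dynamic routing difference can be sketched in plain Python (conceptual only, not langgraph internals): with static edges the successor is a pure function of the graph topology, so it can be recomputed after any state edit, while a Command-style goto exists only inside a node's return value, so a bare state update carries nothing from which to restore `next`.

```python
# Conceptual sketch (plain Python, not langgraph internals) contrasting
# static-edge routing with dynamic goto routing around a manual state edit.

static_edges = {"step_1": "step_2", "step_2": "step_3"}

def next_node_static(last_node):
    # Static routing: the successor can always be looked up from topology.
    return static_edges.get(last_node)

def step_1_dynamic(state):
    # Dynamic routing: the successor travels with the node's return value.
    return {"goto": "step_2", "update": {"input": "step1"}}

# A manual state edit (like update_state) carries no routing information,
# so an engine that depends on dynamic goto has no successor to restore:
manual_update = {"input": "outer_input"}
```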
Please have a look into it.
Best regards,
Daksh Arora
### System Info
google colab
python -m langchain_core.sys_info
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/3003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/3003/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2992
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2992/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2992/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2992/events
|
https://github.com/langchain-ai/langgraph/issues/2992
| 2,781,271,097
|
I_kwDOKFU0lc6lxtQ5
| 2,992
|
When the graph includes an async `BaseCheckpointSaver` and uses `get_state_history`, the program hangs
|
{
"login": "gbaian10",
"id": 34255899,
"node_id": "MDQ6VXNlcjM0MjU1ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/34255899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbaian10",
"html_url": "https://github.com/gbaian10",
"followers_url": "https://api.github.com/users/gbaian10/followers",
"following_url": "https://api.github.com/users/gbaian10/following{/other_user}",
"gists_url": "https://api.github.com/users/gbaian10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbaian10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbaian10/subscriptions",
"organizations_url": "https://api.github.com/users/gbaian10/orgs",
"repos_url": "https://api.github.com/users/gbaian10/repos",
"events_url": "https://api.github.com/users/gbaian10/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbaian10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-10T22:02:02
| 2025-01-20T19:19:56
| 2025-01-14T19:46:53
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import asyncio

from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
from langgraph.graph import END, START, MessagesState, StateGraph


async def main() -> None:
    builder = StateGraph(MessagesState)
    builder.add_node("foo", lambda _: None)
    builder.add_edge(START, "foo")
    builder.add_edge("foo", END)

    async with AsyncSqliteSaver.from_conn_string(":memory:") as checkpointer:
        graph = builder.compile(checkpointer=checkpointer)
        config = {"configurable": {"thread_id": "1"}}
        await graph.ainvoke({"messages": []}, config)

        for state in graph.get_state_history(config):  # hangs here
            print(state)

        # async for state in graph.aget_state_history(config):
        #     print(state)


if __name__ == "__main__":
    asyncio.run(main())
```
### Description
When using a graph with an async checkpointer (SQLite/Postgres) and calling the sync method `get_state_history`, the program blocks and cannot continue running.
I expect the sync method either to throw a `NotImplementedError` or to continue running, just without the benefits of async.
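For background on why a naive sync wrapper over running async code can hang, here is a stdlib-only sketch (illustrative, not the langgraph implementation) of a safe bridge: the coroutine is driven on a dedicated event-loop thread, so the sync caller can block on results without freezing the loop.

```python
import asyncio
import threading

# Illustrative sketch (not the langgraph implementation): bridge an async
# generator to a sync caller by running the event loop on its own thread,
# so the sync side can block without freezing the loop.

async def aiter_states():
    for i in range(3):
        await asyncio.sleep(0)  # stand-in for async checkpointer I/O
        yield {"step": i}

def sync_iter(agen):
    loop = asyncio.new_event_loop()
    thread = threading.Thread(target=loop.run_forever, daemon=True)
    thread.start()
    try:
        while True:
            fut = asyncio.run_coroutine_threadsafe(agen.__anext__(), loop)
            try:
                yield fut.result()  # safe: the loop runs on another thread
            except StopAsyncIteration:
                break
    finally:
        loop.call_soon_threadsafe(loop.stop)
        thread.join()
        loop.close()

states = list(sync_iter(aiter_states()))
```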
### System Info
System Information
------------------
> OS: Linux
> Python Version: 3.13.1 (main, Dec 4 2024, 08:54:15) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langgraph: 0.2.62
> langgraph-checkpoint-sqlite: 2.0.1
> langgraph-checkpoint-postgres: 2.0.10
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2992/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2991
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2991/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2991/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2991/events
|
https://github.com/langchain-ai/langgraph/issues/2991
| 2,781,019,220
|
I_kwDOKFU0lc6lwvxU
| 2,991
|
Failed to batch ingest runs: langsmith.utils.LangSmithError: Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError('422 Client Error: unknown for url: https://api.smith.langchain.com/runs/batch', '{"detail":"Invalid identifiers received for run_id:...
|
{
"login": "Layvier",
"id": 12390725,
"node_id": "MDQ6VXNlcjEyMzkwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12390725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Layvier",
"html_url": "https://github.com/Layvier",
"followers_url": "https://api.github.com/users/Layvier/followers",
"following_url": "https://api.github.com/users/Layvier/following{/other_user}",
"gists_url": "https://api.github.com/users/Layvier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Layvier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Layvier/subscriptions",
"organizations_url": "https://api.github.com/users/Layvier/orgs",
"repos_url": "https://api.github.com/users/Layvier/repos",
"events_url": "https://api.github.com/users/Layvier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Layvier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-10T19:58:21
| 2025-01-13T03:08:13
| 2025-01-13T03:08:13
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_API_KEY="<key starting with lsv2>"
LANGCHAIN_PROJECT="my-project"
# langgraph.json
{
"dependencies": ["."],
"graphs": {
"telescope_v4": "./path_to_graph/graph.py:app"
},
"env": "./.env"
}
```
### Error Message and Stack Trace (if applicable)
```shell
Failed to batch ingest runs: langsmith.utils.LangSmithError: Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError('422 Client Error: unknown for url: https://api.smith.langchain.com/runs/batch', '{"detail":"Invalid identifiers received for run_id:2ff3004e-72d4-42ff-a4b1-9c8111a63ebf trace_id:UUID(\'1efcf892-f23f-6856-8e1b-96da72ff5d2a\') dotted_order:20250110T192918543695Z1efcf892-f23f-6856-8e1b-96da72ff5d2a.20250110T192918579965Z16f0f4e7-6bd4-4ea1-b6d2-b4652d1b7122.20250110T192918598421Z76086b6b-2754-4bf4-a06b-35886f709bbf.20250110T192919677228Z2ff3004e-72d4-42ff-a4b1-9c8111a63ebf parent_run_id:76086b6b-2754-4bf4-a06b-35886f709bbf"}')
```
### Description
Hello,
I'm using LangGraph and langgraph-cli to develop an agent, but I have issues when looking for traces in LangSmith. The batch-ingest calls for run traces all fail. I have checked the API key ten times, and I increased my usage limits as well, to no avail.
This issue seems to come from langgraph-cli: with the same .env, I have a FastAPI server using langgraph (another graph), and that one does send logs to LangSmith properly.
Any ideas what could be the issue?
Cheers, Olivier
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.5.0: Mon Apr 24 20:52:24 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T6000
> Python Version: 3.11.9 (main, Sep 23 2024, 23:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.1
> langsmith: 0.1.125
> langchain_anthropic: 0.2.1
> langchain_google_vertexai: 2.0.1
> langchain_openai: 0.2.1
> langchain_text_splitters: 0.3.5
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.34.2
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: 4.0.3
> click: 8.1.7
> cryptography: 43.0.3
> dataclasses-json: 0.5.7
> defusedxml: 0.7.1
> google-cloud-aiplatform: 1.67.1
> google-cloud-storage: 2.17.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.0
> langchain-mistralai: Installed. No version info available.
> langgraph: 0.2.61
> langgraph-checkpoint: 2.0.9
> numpy: 1.25.2
> openai: 1.46.0
> orjson: 3.10.7
> packaging: 23.2
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> pyjwt: 2.10.1
> python-dotenv: 0.21.1
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.35
> sse-starlette: 2.1.3
> starlette: 0.38.6
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
> uvicorn: 0.30.6
> watchfiles: 1.0.3
|
{
"login": "Layvier",
"id": 12390725,
"node_id": "MDQ6VXNlcjEyMzkwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12390725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Layvier",
"html_url": "https://github.com/Layvier",
"followers_url": "https://api.github.com/users/Layvier/followers",
"following_url": "https://api.github.com/users/Layvier/following{/other_user}",
"gists_url": "https://api.github.com/users/Layvier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Layvier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Layvier/subscriptions",
"organizations_url": "https://api.github.com/users/Layvier/orgs",
"repos_url": "https://api.github.com/users/Layvier/repos",
"events_url": "https://api.github.com/users/Layvier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Layvier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2991/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2983
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2983/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2983/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2983/events
|
https://github.com/langchain-ai/langgraph/issues/2983
| 2,779,095,713
|
I_kwDOKFU0lc6lpaKh
| 2,983
|
LangGraph does not save memory into PostgresStore but into the .langgraph_api folder instead
|
{
"login": "samsiuatpurple",
"id": 140026859,
"node_id": "U_kgDOCFij6w",
"avatar_url": "https://avatars.githubusercontent.com/u/140026859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsiuatpurple",
"html_url": "https://github.com/samsiuatpurple",
"followers_url": "https://api.github.com/users/samsiuatpurple/followers",
"following_url": "https://api.github.com/users/samsiuatpurple/following{/other_user}",
"gists_url": "https://api.github.com/users/samsiuatpurple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsiuatpurple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsiuatpurple/subscriptions",
"organizations_url": "https://api.github.com/users/samsiuatpurple/orgs",
"repos_url": "https://api.github.com/users/samsiuatpurple/repos",
"events_url": "https://api.github.com/users/samsiuatpurple/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsiuatpurple/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-10T01:53:50
| 2025-01-10T04:03:43
| 2025-01-10T04:03:43
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
workflow = StateGraph(AgentState, input=InputState, output=OutputState)

connection = POSTGRES_DB_STORE_CONNECTION  # Postgres db connection string

with PostgresStore.from_conn_string(conn_string=connection) as store:
    store.setup()
    app = workflow.compile(store=store)


# saving memory
def write_memory(state, config: RunnableConfig, store: BaseStore):
    """Reflect on the chat history and save a memory to the store."""
    # Get the user ID from the config
    user_id = config["configurable"]["user_id"]
    print(f"user id: {user_id}")

    # Retrieve existing memory from the store
    namespace = ("memory", user_id)
    existing_memory = store.get(namespace, "user_memory")

    # Extract the memory
    if existing_memory:
        existing_memory_content = existing_memory.value.get('memory')
    else:
        existing_memory_content = "No existing memory found."

    # Format the memory in the system prompt
    system_msg = CREATE_MEMORY_INSTRUCTION.format(memory=existing_memory_content)
    new_memory = get_llm().invoke([SystemMessage(content=system_msg)] + state['messages'])

    # Overwrite the existing memory in the store
    key = "user_memory"
    print(new_memory)

    # Write value as a dictionary with a memory key
    store.put(namespace, key, {"memory": new_memory.content})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There is no exception. `store.setup()` does create the schema (the `store` table) in the Postgres DB. But when I save a memory, it goes into the `.langgraph_api` folder instead of the Postgres DB.
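Independently of where `langgraph dev` persists data, note that the example compiles the graph inside the `with PostgresStore.from_conn_string(...)` block: resources opened by a context manager are torn down when the block exits, so a handle captured for later use is invalid by then. A minimal stdlib sketch of this pitfall (`FakeStore` is a hypothetical stand-in, not the langgraph PostgresStore API):

```python
# Minimal sketch of the context-manager lifetime pitfall: an object that is
# only valid inside its `with` block must not be captured for use afterwards.
# FakeStore is a hypothetical stand-in, not the langgraph PostgresStore API.

class FakeStore:
    def __init__(self):
        self.closed = True
        self.data = {}

    def __enter__(self):
        self.closed = False  # "open the connection"
        return self

    def __exit__(self, *exc):
        self.closed = True   # connection is torn down on block exit

    def put(self, key, value):
        if self.closed:
            raise RuntimeError("store used after its `with` block exited")
        self.data[key] = value

with FakeStore() as store:
    store.put("inside", 1)   # fine: the block is still open
    captured = store

error = None
try:
    captured.put("outside", 2)  # invalid: the block has already exited
except RuntimeError as e:
    error = str(e)
```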
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:02:12 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6031
> Python Version: 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.22
> langchain: 0.3.10
> langchain_community: 0.3.10
> langsmith: 0.1.147
> langchain_chroma: 0.1.4
> langchain_openai: 0.2.11
> langchain_postgres: 0.0.12
> langchain_text_splitters: 0.3.2
> langgraph_api: 0.0.15
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.48
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: Installed. No version info available.
> chromadb: 0.5.23
> click: 8.1.7
> cryptography: 43.0.3
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.56
> langgraph-checkpoint: 2.0.8
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.57.0
> orjson: 3.10.12
> packaging: 24.2
> pgvector: 0.2.5
> psycopg: 3.2.3
> psycopg-pool: 3.2.4
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sqlalchemy: 2.0.36
> SQLAlchemy: 2.0.36
> sse-starlette: 2.1.3
> starlette: 0.41.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> uvicorn: 0.32.1
> watchfiles: 1.0.0
|
{
"login": "samsiuatpurple",
"id": 140026859,
"node_id": "U_kgDOCFij6w",
"avatar_url": "https://avatars.githubusercontent.com/u/140026859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsiuatpurple",
"html_url": "https://github.com/samsiuatpurple",
"followers_url": "https://api.github.com/users/samsiuatpurple/followers",
"following_url": "https://api.github.com/users/samsiuatpurple/following{/other_user}",
"gists_url": "https://api.github.com/users/samsiuatpurple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsiuatpurple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsiuatpurple/subscriptions",
"organizations_url": "https://api.github.com/users/samsiuatpurple/orgs",
"repos_url": "https://api.github.com/users/samsiuatpurple/repos",
"events_url": "https://api.github.com/users/samsiuatpurple/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsiuatpurple/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2983/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2980
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2980/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2980/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2980/events
|
https://github.com/langchain-ai/langgraph/issues/2980
| 2,779,017,090
|
I_kwDOKFU0lc6lpG-C
| 2,980
|
param mismatch during langgraph-cli dev mode
|
{
"login": "kingsotn-twelve",
"id": 182565857,
"node_id": "U_kgDOCuG74Q",
"avatar_url": "https://avatars.githubusercontent.com/u/182565857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingsotn-twelve",
"html_url": "https://github.com/kingsotn-twelve",
"followers_url": "https://api.github.com/users/kingsotn-twelve/followers",
"following_url": "https://api.github.com/users/kingsotn-twelve/following{/other_user}",
"gists_url": "https://api.github.com/users/kingsotn-twelve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingsotn-twelve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingsotn-twelve/subscriptions",
"organizations_url": "https://api.github.com/users/kingsotn-twelve/orgs",
"repos_url": "https://api.github.com/users/kingsotn-twelve/repos",
"events_url": "https://api.github.com/users/kingsotn-twelve/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingsotn-twelve/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-10T00:19:56
| 2025-01-10T00:24:31
| 2025-01-10T00:24:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```shell
langgraph dev --config langgraph.json --host 127.0.0.1 --port 3000 --debug-port 5678 --no-browser
```
### Error Message and Stack Trace (if applicable)
```shell
File "python3.11/site-packages/langgraph_cli/cli.py", line 608, in dev
run_server(
File "python3.11/site-packages/langgraph_api/cli.py", line 221, in run_server
uvicorn.run(
TypeError: run() got an unexpected keyword argument 'auth'
```
### Description
langgraph-cli==0.1.65 has mismatched parameters for the `run_server()` function called inside `dev()`.
Commenting out the `auth` param fixes this error.
But this is the second time I'm running into param mismatches; this shouldn't happen every time.
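This class of error, where a caller passes a keyword that the installed version of the callee does not accept, can be guarded against generically. A stdlib sketch of such a guard (illustrative only, not langgraph-cli's actual code; `run_server` here is a hypothetical stand-in):

```python
import inspect

# Generic sketch (not langgraph-cli's code): drop keyword arguments the
# installed callee does not accept, so version skew between caller and
# callee degrades gracefully instead of raising TypeError.

def call_with_supported_kwargs(fn, /, **kwargs):
    params = inspect.signature(fn).parameters
    accepts_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if not accepts_var_kw:
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return fn(**kwargs)

def run_server(host="127.0.0.1", port=3000):  # older callee: no `auth` kwarg
    return (host, port)

# A newer caller passing `auth` would raise TypeError on a direct call;
# the guard silently drops the unsupported keyword instead.
result = call_with_supported_kwargs(run_server, host="0.0.0.0", port=8123, auth=object())
```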
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:35:10 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6030
> Python Version: 3.11.11 (main, Dec 6 2024, 21:09:50) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.24
> langchain: 0.3.7
> langsmith: 0.1.147
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
> langgraph_api: 0.0.7
> langgraph_cli: 0.1.65
> langgraph_license: Installed. No version info available.
> langgraph_sdk: 0.1.44
> langgraph_storage: Installed. No version info available.
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> async-timeout: Installed. No version info available.
> click: 8.1.8
> cryptography: 43.0.3
> httpx: 0.28.1
> jsonpatch: 1.33
> jsonschema-rs: 0.25.1
> langgraph: 0.2.59
> langgraph-checkpoint: 2.0.9
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> pyjwt: 2.10.1
> python-dotenv: 1.0.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> sse-starlette: 2.1.3
> starlette: 0.41.3
> structlog: 24.4.0
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
> uvicorn: 0.32.1
> watchfiles: 1.0.3
|
{
"login": "kingsotn-twelve",
"id": 182565857,
"node_id": "U_kgDOCuG74Q",
"avatar_url": "https://avatars.githubusercontent.com/u/182565857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingsotn-twelve",
"html_url": "https://github.com/kingsotn-twelve",
"followers_url": "https://api.github.com/users/kingsotn-twelve/followers",
"following_url": "https://api.github.com/users/kingsotn-twelve/following{/other_user}",
"gists_url": "https://api.github.com/users/kingsotn-twelve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingsotn-twelve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingsotn-twelve/subscriptions",
"organizations_url": "https://api.github.com/users/kingsotn-twelve/orgs",
"repos_url": "https://api.github.com/users/kingsotn-twelve/repos",
"events_url": "https://api.github.com/users/kingsotn-twelve/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingsotn-twelve/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2980/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2979
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2979/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2979/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2979/events
|
https://github.com/langchain-ai/langgraph/issues/2979
| 2,778,998,037
|
I_kwDOKFU0lc6lpCUV
| 2,979
|
Interrupt(). How to interrupt for the second time
|
{
"login": "Saisiva123",
"id": 36258631,
"node_id": "MDQ6VXNlcjM2MjU4NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/36258631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saisiva123",
"html_url": "https://github.com/Saisiva123",
"followers_url": "https://api.github.com/users/Saisiva123/followers",
"following_url": "https://api.github.com/users/Saisiva123/following{/other_user}",
"gists_url": "https://api.github.com/users/Saisiva123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saisiva123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saisiva123/subscriptions",
"organizations_url": "https://api.github.com/users/Saisiva123/orgs",
"repos_url": "https://api.github.com/users/Saisiva123/repos",
"events_url": "https://api.github.com/users/Saisiva123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saisiva123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-10T00:00:45
| 2025-01-10T00:17:41
| 2025-01-10T00:17:41
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
def ask_user_node(state: LookupState) -> Command[Literal['lookup_node']]:
agent_request = state['userAgentInteractionInfo']['agentRequest']
if 'userResponse' in state['userAgentInteractionInfo']:
del state['userAgentInteractionInfo']['userResponse']
user_response = interrupt({
'id': str(uuid.uuid4()),
'request': agent_request
})
if user_response:
return Command(goto='lookup_node',
resume={'userAgentInteractionInfo': {'agentRequest': agent_request}},
update={'messages': [HumanMessage(content=user_response, name="User_Response")], 'userAgentInteractionInfo': {'agentRequest': '', 'userResponse': ''}})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I know that by default `interrupt` raises a GraphInterrupt the first time it is called. But when execution reaches the interrupt in the same node again, it does not halt; it just returns the same old cached resume value. Is there an extra argument I can pass so that the interrupt halts execution a second time and waits for user input?
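For context on why the cached value comes back: on resume the interrupted node re-runs from the top, and each `interrupt()` call is matched to a saved resume value by call order, so only a new, not-yet-answered call pauses again. A toy Python model of that replay behavior (an illustration, not LangGraph internals; `InterruptRuntime` is a made-up class):

```python
class GraphInterrupt(Exception):
    """Stand-in for the exception that pauses graph execution."""

class InterruptRuntime:
    """Toy model of interrupt replay: the node re-runs from the top on every
    resume, and each interrupt() call consumes the next saved resume value;
    only a call with no saved value actually pauses."""

    def __init__(self):
        self.resume_values = []
        self._idx = 0

    def interrupt(self, payload):
        i, self._idx = self._idx, self._idx + 1
        if i < len(self.resume_values):
            return self.resume_values[i]  # replayed: no pause the second time
        raise GraphInterrupt(payload)     # unanswered call: pause for input

    def run(self, node):
        self._idx = 0                     # every replay starts from the top
        return node(self)

def node(rt):
    first = rt.interrupt("ask once")
    second = rt.interrupt("ask again")    # a second, distinct call pauses again
    return first, second

rt = InterruptRuntime()
for answer in ("answer 1", "answer 2"):
    try:
        rt.run(node)                      # pauses at the next unanswered interrupt
    except GraphInterrupt:
        rt.resume_values.append(answer)

assert rt.run(node) == ("answer 1", "answer 2")
```

The practical takeaway in this model: to pause a second time, make a second `interrupt()` call rather than expecting the first call site to pause again.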
### System Info
python -m langchain_core.sys_info
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2979/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2979/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2968
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2968/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2968/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2968/events
|
https://github.com/langchain-ai/langgraph/issues/2968
| 2,777,785,345
|
I_kwDOKFU0lc6lkaQB
| 2,968
|
Execution goes into infinite loop
|
{
"login": "asrays",
"id": 71009078,
"node_id": "MDQ6VXNlcjcxMDA5MDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/71009078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asrays",
"html_url": "https://github.com/asrays",
"followers_url": "https://api.github.com/users/asrays/followers",
"following_url": "https://api.github.com/users/asrays/following{/other_user}",
"gists_url": "https://api.github.com/users/asrays/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asrays/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asrays/subscriptions",
"organizations_url": "https://api.github.com/users/asrays/orgs",
"repos_url": "https://api.github.com/users/asrays/repos",
"events_url": "https://api.github.com/users/asrays/events{/privacy}",
"received_events_url": "https://api.github.com/users/asrays/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| true
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-09T13:03:24
| 2025-01-09T14:39:31
| 2025-01-09T14:39:30
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import os
os.environ['ANTHROPIC_API_KEY'] = 'key'
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_core.messages import SystemMessage
from typing import List, Callable
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain import hub
from typing import Annotated
from langgraph.graph.message import add_messages
import requests, json
from langgraph.types import Command
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage, AIMessage
from typing import Literal, TypedDict, Union
llm = ChatAnthropic(model="claude-3-5-haiku-20241022")
# Define graph state
class AgentState(MessagesState):
messages: Annotated[list, add_messages]
def push_into_db(brand_payload):
headers = {'Content-Type': 'application/json'}
response = requests.request('POST', 'my_api', data=json.dumps(brand_payload), headers=headers, timeout=180)
response_content = json.loads(response.content)
return response_content
@tool
def insert_brand_tool(brand_name: str):
'''This will insert the brand data into database and return object id'''
try:
brand_payload = {"name": brand_name, "journey_type": "social"}
response = push_into_db(brand_payload)
if response["success"] == True:
brandId = response["data"]["brandId"]
return f"Brand name inserted with object id {brandId}"
else:
return "Could not insert brand into database. Please try again."
except Exception as e:
return "Could not insert brand into database. Please try again."
@tool
def get_image(img_path: str):
'''This will read the given web url image path and download it'''
try:
img_response = requests.get(img_path)
img_name = os.path.basename(img_path)
with open(img_name, "wb") as f:
f.write(img_response.content)
return f"Image has been downloaded, status is successfull. No worker needed."
except Exception as e:
return "Cannot read image."
members = ["INSERT_DATABASE_AGENT", "UPLOAD_IMAGE_AGENT", "FINISH"]
class Router(TypedDict):
"""Worker to route to next. If no workers needed, route to FINISH."""
next: Union[Literal["INSERT_DATABASE_AGENT"], Literal["UPLOAD_IMAGE_AGENT"], Literal["FINISH"]]
system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
f" following workers: {members}. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. WHEN SUCCESSFULL, RESPOND MESSAGE WITH FINISH"
)
def greet_node(state: AgentState):
messages = [
{"role": "system", "content": system_prompt},
] + [state["messages"][-1]]
response = llm.with_structured_output(Router).invoke(messages)
goto = response["next"]
if goto == "FINISH":
goto = END
return Command(goto="UPLOAD_IMAGE_AGENT")
insert_database_agent = create_react_agent(llm, tools=[insert_brand_tool])
def router_function_to_db(state: MessagesState):
result = insert_database_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="INSERT_DATABASE_AGENT")
]
},
goto="GREETING_AGENT",
)
upload_image_agent = create_react_agent(llm, tools=[get_image], state_modifier="Respond with FINISH if the image is downloaded successfully.")
def router_function_to_image_upload(state: MessagesState):
result = upload_image_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="UPLOAD_IMAGE_AGENT")
]
},
goto="GREETING_AGENT",
)
builder = StateGraph(MessagesState)
builder.add_edge(START, "GREETING_AGENT")
builder.add_node("GREETING_AGENT", greet_node)
builder.add_node("INSERT_DATABASE_AGENT", router_function_to_db)
builder.add_node("UPLOAD_IMAGE_AGENT", router_function_to_image_upload)
graph = builder.compile()
img_bytes = graph.get_graph().draw_mermaid_png()
with open("graph_mermaid.png", "wb") as f:
f.write(img_bytes)
input={"messages": ["Hi how are you? Can you read the image url(A valid s3 url)"]}
for output in graph.stream(input):
print(output)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This flow is adapted from the LangGraph multi-agent supervisor docs.
The workflow almost never reaches the FINISH state (only about 1 run in 20). Everything else works fine; the problem is that when I ask it to download the image, UPLOAD_IMAGE_AGENT downloads it, but GREETING_AGENT never produces a FINISH response, so the graph cannot exit. What would be the best prompt to reliably reach the FINISH state?
Can anyone help?
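Independent of the prompt, a hard step cap keeps a router that never emits FINISH from spinning forever (LangGraph itself exposes a similar `recursion_limit` in the run config). A minimal pure-Python sketch of that guard; `drive` is a hypothetical helper, not a LangGraph API:

```python
def drive(route_fn, max_steps=10):
    """Run a supervisor-style routing loop, raising instead of looping forever
    when the router never returns 'FINISH' within max_steps hops."""
    steps = 0
    node = route_fn()
    while node != "FINISH":
        steps += 1
        if steps >= max_steps:
            raise RuntimeError(f"router never reached FINISH in {max_steps} steps")
        node = route_fn()
    return steps

# A router that finishes on its third call takes two hops:
calls = iter(["UPLOAD_IMAGE_AGENT", "INSERT_DATABASE_AGENT", "FINISH"])
assert drive(lambda: next(calls)) == 2
```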
### System Info
versions-
langgraph- 0.2.61
langgraph-checkpoint- 2.0.9
langgraph-sdk- 0.1.48
| null |
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2968/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2965
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2965/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2965/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2965/events
|
https://github.com/langchain-ai/langgraph/issues/2965
| 2,777,185,322
|
I_kwDOKFU0lc6liHwq
| 2,965
|
Command edges are not rendered in the graph if nodes are defined within a class
|
{
"login": "SergioG-M",
"id": 61322372,
"node_id": "MDQ6VXNlcjYxMzIyMzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61322372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SergioG-M",
"html_url": "https://github.com/SergioG-M",
"followers_url": "https://api.github.com/users/SergioG-M/followers",
"following_url": "https://api.github.com/users/SergioG-M/following{/other_user}",
"gists_url": "https://api.github.com/users/SergioG-M/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SergioG-M/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SergioG-M/subscriptions",
"organizations_url": "https://api.github.com/users/SergioG-M/orgs",
"repos_url": "https://api.github.com/users/SergioG-M/repos",
"events_url": "https://api.github.com/users/SergioG-M/events{/privacy}",
"received_events_url": "https://api.github.com/users/SergioG-M/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-09T08:43:06
| 2025-01-23T07:26:42
| 2025-01-14T22:18:08
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [x] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import random
from langgraph.graph import START, StateGraph
from langgraph.types import Command
from typing_extensions import Literal, TypedDict
# Define graph state
class State(TypedDict):
foo: str
# Define the nodes
class MyGraph:
def node_a(self, state: State) -> Command[Literal["node_b", "node_c"]]:
print("Called A")
value = random.choice(["a", "b"])
# this is a replacement for a conditional edge function
if value == "a":
goto = "node_b"
else:
goto = "node_c"
# note how Command allows you to BOTH update the graph state AND route to the next node
return Command(
# this is the state update
update={"foo": value},
# this is a replacement for an edge
goto=goto,
)
# Nodes B and C are unchanged
def node_b(self, state: State):
print("Called B")
return {"foo": state["foo"] + "b"}
def node_c(self, state: State):
print("Called C")
return {"foo": state["foo"] + "c"}
def build_graph(self):
builder = StateGraph(State)
builder.add_edge(START, "node_a")
builder.add_node("node_a", self.node_a)
builder.add_node("node_b", self.node_b)
builder.add_node("node_c", self.node_c)
# NOTE: there are no edges between nodes A, B and C!
graph = builder.compile()
return graph
from IPython.display import display, Image
graph = MyGraph().build_graph()
display(Image(graph.get_graph().draw_mermaid_png()))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using `Command` inside a class, the edges are not rendered. It works if the nodes are defined outside the class (and presumably also as static methods?).
I adapted this code: https://langchain-ai.github.io/langgraph/how-tos/command/#define-graph
And the generated graph is this:

### System Info
langgraph version 0.2.61
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2965/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2965/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2964
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2964/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2964/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2964/events
|
https://github.com/langchain-ai/langgraph/issues/2964
| 2,776,987,681
|
I_kwDOKFU0lc6lhXgh
| 2,964
|
Cannot use Enums for node names
|
{
"login": "laurencejennings",
"id": 20730998,
"node_id": "MDQ6VXNlcjIwNzMwOTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/20730998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laurencejennings",
"html_url": "https://github.com/laurencejennings",
"followers_url": "https://api.github.com/users/laurencejennings/followers",
"following_url": "https://api.github.com/users/laurencejennings/following{/other_user}",
"gists_url": "https://api.github.com/users/laurencejennings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laurencejennings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laurencejennings/subscriptions",
"organizations_url": "https://api.github.com/users/laurencejennings/orgs",
"repos_url": "https://api.github.com/users/laurencejennings/repos",
"events_url": "https://api.github.com/users/laurencejennings/events{/privacy}",
"received_events_url": "https://api.github.com/users/laurencejennings/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5828292648,
"node_id": "LA_kwDOKFU0lc8AAAABW2SwKA",
"url": "https://api.github.com/repos/langchain-ai/langgraph/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-09T07:03:04
| 2025-01-22T19:35:08
| 2025-01-22T19:35:07
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
from enum import Enum
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import create_react_agent
from typing import Literal
from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import MessagesState, END
from langgraph.types import Command
class Members(Enum):
researcher="researcher"
coder="coder"
members = [Members.researcher, Members.coder]
# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = members + ["FINISH"]
system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
f" following workers: {members}. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. When finished,"
" respond with FINISH."
)
class Router(TypedDict):
"""Worker to route to next. If no workers needed, route to FINISH."""
next: Literal[*options]
llm = ChatOpenAI(model="gpt-4o")
def supervisor_node(state: MessagesState) -> Command[Literal[*members, "__end__"]]:
messages = [
{"role": "system", "content": system_prompt},
] + state["messages"]
response = llm.with_structured_output(Router).invoke(messages)
goto = response["next"]
if goto == "FINISH":
goto = END
return Command(goto=goto)
research_agent = create_react_agent(
llm, tools=[], state_modifier="You are a researcher. DO NOT do any math."
)
def research_node(state: MessagesState) -> Command[Literal["supervisor"]]:
result = research_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="researcher")
]
},
goto="supervisor",
)
# NOTE: THIS PERFORMS ARBITRARY CODE EXECUTION, WHICH CAN BE UNSAFE WHEN NOT SANDBOXED
code_agent = create_react_agent(llm, tools=[])
def code_node(state: MessagesState) -> Command[Literal["supervisor"]]:
result = code_agent.invoke(state)
return Command(
update={
"messages": [
HumanMessage(content=result["messages"][-1].content, name="coder")
]
},
goto="supervisor",
)
builder = StateGraph(MessagesState)
builder.add_edge(START, "supervisor")
builder.add_node("supervisor", supervisor_node)
builder.add_node("researcher", research_node)
builder.add_node("coder", code_node)
graph = builder.compile()
```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 90
88 builder.add_node("researcher", research_node)
89 builder.add_node("coder", code_node)
---> 90 graph = builder.compile()
File ~/projects/ditto/poc/assignments/venv/lib/python3.11/site-packages/langgraph/graph/state.py:510, in StateGraph.compile(self, checkpointer, store, interrupt_before, interrupt_after, debug)
507 interrupt_after = interrupt_after or []
509 # validate the graph
--> 510 self.validate(
511 interrupt=(
512 (interrupt_before if interrupt_before != "*" else []) + interrupt_after
513 if interrupt_after != "*"
514 else []
515 )
516 )
518 # prepare output channels
519 output_channels = (
520 "__root__"
521 if len(self.schemas[self.output]) == 1
(...)
527 ]
528 )
File ~/projects/ditto/poc/assignments/venv/lib/python3.11/site-packages/langgraph/graph/graph.py:405, in Graph.validate(self, interrupt)
403 for target in all_targets:
404 if target not in self.nodes and target != END:
--> 405 raise ValueError(f"Found edge ending at unknown node `{target}`")
406 # validate interrupts
407 if interrupt:
ValueError: Found edge ending at unknown node `Members.coder`
```
### Description
I'm trying to use an Enum for my node/agent names to make the code more maintainable, but the graph seems to look nodes up by the enum member itself (e.g. `Members.coder`) rather than by its string value.
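One workaround that keeps the maintainability benefit: mix `str` into the Enum so each member compares and hashes equal to its string value, which should let members match plain string node names wherever the graph looks them up. A self-contained sketch (plain Python, no langgraph imports):

```python
from enum import Enum

class Members(str, Enum):
    """str mixin: members are real strings, so they match plain node names."""
    researcher = "researcher"
    coder = "coder"

class PlainMembers(Enum):
    coder = "coder"

# str-mixin members compare and hash equal to their values...
assert Members.coder == "coder"
assert Members.coder in {"researcher", "coder"}
# ...while plain Enum members do not, matching the ValueError above
assert PlainMembers.coder != "coder"
```

On Python 3.11+ the stdlib also provides `enum.StrEnum` with the same behavior.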
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.5.0: Mon Apr 24 20:51:50 PDT 2023; root:xnu-8796.121.2~5/RELEASE_X86_64
> Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.9
> langchain_community: 0.3.8
> langsmith: 0.1.147
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.3
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
|
{
"login": "vbarda",
"id": 19161700,
"node_id": "MDQ6VXNlcjE5MTYxNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19161700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbarda",
"html_url": "https://github.com/vbarda",
"followers_url": "https://api.github.com/users/vbarda/followers",
"following_url": "https://api.github.com/users/vbarda/following{/other_user}",
"gists_url": "https://api.github.com/users/vbarda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbarda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbarda/subscriptions",
"organizations_url": "https://api.github.com/users/vbarda/orgs",
"repos_url": "https://api.github.com/users/vbarda/repos",
"events_url": "https://api.github.com/users/vbarda/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbarda/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/langchain-ai/langgraph/issues/2964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2964/timeline
| null |
completed
| false
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2962
|
https://api.github.com/repos/langchain-ai/langgraph
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2962/labels{/name}
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2962/comments
|
https://api.github.com/repos/langchain-ai/langgraph/issues/2962/events
|
https://github.com/langchain-ai/langgraph/issues/2962
| 2,776,804,688
|
I_kwDOKFU0lc6lgq1Q
| 2,962
|
When a tool defined with `return Command` is called multiple times at once and stream_mode="updates" is used, the value type of the result changes from `dict` to `list`.
|
{
"login": "rayshen92",
"id": 38471614,
"node_id": "MDQ6VXNlcjM4NDcxNjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/38471614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rayshen92",
"html_url": "https://github.com/rayshen92",
"followers_url": "https://api.github.com/users/rayshen92/followers",
"following_url": "https://api.github.com/users/rayshen92/following{/other_user}",
"gists_url": "https://api.github.com/users/rayshen92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rayshen92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rayshen92/subscriptions",
"organizations_url": "https://api.github.com/users/rayshen92/orgs",
"repos_url": "https://api.github.com/users/rayshen92/repos",
"events_url": "https://api.github.com/users/rayshen92/events{/privacy}",
"received_events_url": "https://api.github.com/users/rayshen92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
{
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting",
"message": "API rate limit exceeded for 34.106.111.61. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"status": null
}
| 2025-01-09T04:23:21
| 2025-01-10T00:26:31
| 2025-01-10T00:26:31
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import os
from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_core.tools.base import InjectedToolCallId
from langgraph.types import Command
from typing_extensions import Annotated
import dotenv
dotenv.load_dotenv()
@tool
def add(
a: int,
b: int,
tool_call_id: Annotated[str, InjectedToolCallId],
config: RunnableConfig,
):
"""add two numbers"""
result = a + b
return Command(
update={
"messages": [ToolMessage(f"add result: {result}", tool_call_id=tool_call_id)],
}
)
@tool
def sub(
a: int,
b: int,
tool_call_id: Annotated[str, InjectedToolCallId],
config: RunnableConfig,
):
"""sub two numbers"""
result = a - b
return f"sub result: {result}"
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
model = ChatOpenAI(
model="gpt-4o",
api_key=os.environ["AZURE_OPENAI_API_KEY"],
base_url=f'{os.environ["AZURE_OPENAI_ENDPOINT"]}/v1',
)
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
tools = [add, sub]
agent = create_react_agent(model, tools=tools, checkpointer=memory)
config = {
"configurable": {"thread_id": "1"},
}
# use add tool
for chunk in agent.stream(
input={
"messages": [
(
"user",
"add(1,1), add(1,2), add(1,3) at once",
),
]
},
config=config,
stream_mode="updates",
):
for node, values in chunk.items():
print(f"Receiving update from node: '{node}'")
print(f"type of values: {type(values)}")
print(values)
print("\n\n")
print("===========================================================\n\n")
# use sub tool
for chunk in agent.stream(
input={
"messages": [
(
"user",
"sub(1,1), sub(1,2), sub(1,3) at once",
),
]
},
config=config,
stream_mode="updates",
):
for node, values in chunk.items():
print(f"Receiving update from node: '{node}'")
print(f"type of values: {type(values)}")
print(values)
print("\n\n")
print("======================message history=================\n\n")
cur_state = agent.get_state(config)
messages = cur_state.values.get("messages", [])
for message in messages:
message.pretty_print()
```
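Meanwhile, a consumer can defensively normalize both shapes before iterating; `iter_updates` below is a hypothetical helper sketch, not part of the LangGraph API:

```python
def iter_updates(values):
    """Yield update dicts whether the node emitted a single dict or, as with
    several parallel Command-returning tool calls, a list of dicts."""
    if isinstance(values, list):
        yield from values
    else:
        yield values

assert list(iter_updates({"messages": ["one"]})) == [{"messages": ["one"]}]
assert list(iter_updates([{"a": 1}, {"b": 2}])) == [{"a": 1}, {"b": 2}]
```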
### Error Message and Stack Trace (if applicable)
```shell
Receiving update from node: 'agent'
type of values: <class 'dict'>
{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ZZg7TJazTcfHGcq3gOiedOqt', 'function': {'arguments': '{"a": 1, "b": 1}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_fz3zfIwaZV6KK9zEBCRG4jdf', 'function': {'arguments': '{"a": 1, "b": 2}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_HMbvpnTtRNkxwXsu9sq7SFbo', 'function': {'arguments': '{"a": 1, "b": 3}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 85, 'total_tokens': 152, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4dc7bc54-bf3e-42ee-8a53-b9444137a08b-0', tool_calls=[{'name': 'add', 'args': {'a': 1, 'b': 1}, 'id': 'call_ZZg7TJazTcfHGcq3gOiedOqt', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 1, 'b': 2}, 'id': 'call_fz3zfIwaZV6KK9zEBCRG4jdf', 'type': 'tool_call'}, {'name': 'add', 'args': {'a': 1, 'b': 3}, 'id': 'call_HMbvpnTtRNkxwXsu9sq7SFbo', 'type': 'tool_call'}], usage_metadata={'input_tokens': 85, 'output_tokens': 67, 'total_tokens': 152, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
Receiving update from node: 'tools'
type of values: <class 'list'>
[{'messages': [ToolMessage(content='add result: 2', name='add', id='38f447fe-0939-4dec-af5b-b772d94bbbe8', tool_call_id='call_ZZg7TJazTcfHGcq3gOiedOqt')]}, {'messages': [ToolMessage(content='add result: 3', name='add', id='17b95c26-20bd-49f4-817b-254b8fd469b4', tool_call_id='call_fz3zfIwaZV6KK9zEBCRG4jdf')]}, {'messages': [ToolMessage(content='add result: 4', name='add', id='672d6d95-4248-420e-95e0-f1d4a315f7ce', tool_call_id='call_HMbvpnTtRNkxwXsu9sq7SFbo')]}]
Receiving update from node: 'agent'
type of values: <class 'dict'>
{'messages': [AIMessage(content='The results of the additions are as follows:\n- \\(1 + 1 = 2\\)\n- \\(1 + 2 = 3\\)\n- \\(1 + 3 = 4\\)', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 43, 'prompt_tokens': 183, 'total_tokens': 226, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'stop', 'logprobs': None}, id='run-19450b04-0e81-4f43-ba78-f55d667c5eb7-0', usage_metadata={'input_tokens': 183, 'output_tokens': 43, 'total_tokens': 226, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
===========================================================
Receiving update from node: 'agent'
type of values: <class 'dict'>
{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_YFPOUghRCkW1Fo1tZ9uSfOu4', 'function': {'arguments': '{"a": 1, "b": 1}', 'name': 'sub'}, 'type': 'function'}, {'id': 'call_ghESCK5aIuqEaqD6ofiZR14v', 'function': {'arguments': '{"a": 1, "b": 2}', 'name': 'sub'}, 'type': 'function'}, {'id': 'call_dALc18L81Jptz2cz3w8YDTf1', 'function': {'arguments': '{"a": 1, "b": 3}', 'name': 'sub'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 67, 'prompt_tokens': 253, 'total_tokens': 320, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-325f3183-5377-44f6-9deb-445b10b534c7-0', tool_calls=[{'name': 'sub', 'args': {'a': 1, 'b': 1}, 'id': 'call_YFPOUghRCkW1Fo1tZ9uSfOu4', 'type': 'tool_call'}, {'name': 'sub', 'args': {'a': 1, 'b': 2}, 'id': 'call_ghESCK5aIuqEaqD6ofiZR14v', 'type': 'tool_call'}, {'name': 'sub', 'args': {'a': 1, 'b': 3}, 'id': 'call_dALc18L81Jptz2cz3w8YDTf1', 'type': 'tool_call'}], usage_metadata={'input_tokens': 253, 'output_tokens': 67, 'total_tokens': 320, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
Receiving update from node: 'tools'
type of values: <class 'dict'>
{'messages': [ToolMessage(content='sub result: 2', name='sub', id='2ad51609-06a2-4126-861e-e9682e316c19', tool_call_id='call_YFPOUghRCkW1Fo1tZ9uSfOu4'), ToolMessage(content='sub result: 3', name='sub', id='edefea05-aa33-4a0e-bf24-7fbf367a86be', tool_call_id='call_ghESCK5aIuqEaqD6ofiZR14v'), ToolMessage(content='sub result: 4', name='sub', id='f8295690-e247-44a9-bd50-1cfbb84543f7', tool_call_id='call_dALc18L81Jptz2cz3w8YDTf1')]}
Receiving update from node: 'agent'
type of values: <class 'dict'>
{'messages': [AIMessage(content='It seems there was an error in the response. Let me correct that for you.\n\nThe results of the subtractions are as follows:\n- \\(1 - 1 = 0\\)\n- \\(1 - 2 = -1\\)\n- \\(1 - 3 = -2\\)', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 61, 'prompt_tokens': 351, 'total_tokens': 412, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_f3927aa00d', 'finish_reason': 'stop', 'logprobs': None}, id='run-e0aa5255-b08f-4895-87d2-6d22a76ea555-0', usage_metadata={'input_tokens': 351, 'output_tokens': 61, 'total_tokens': 412, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}
======================message history=================
================================ Human Message =================================
add(1,1), add(1,2), add(1,3) at once
================================== Ai Message ==================================
Tool Calls:
add (call_ZZg7TJazTcfHGcq3gOiedOqt)
Call ID: call_ZZg7TJazTcfHGcq3gOiedOqt
Args:
a: 1
b: 1
add (call_fz3zfIwaZV6KK9zEBCRG4jdf)
Call ID: call_fz3zfIwaZV6KK9zEBCRG4jdf
Args:
a: 1
b: 2
add (call_HMbvpnTtRNkxwXsu9sq7SFbo)
Call ID: call_HMbvpnTtRNkxwXsu9sq7SFbo
Args:
a: 1
b: 3
================================= Tool Message =================================
Name: add
add result: 2
================================= Tool Message =================================
Name: add
add result: 3
================================= Tool Message =================================
Name: add
add result: 4
================================== Ai Message ==================================
The results of the additions are as follows:
- \(1 + 1 = 2\)
- \(1 + 2 = 3\)
- \(1 + 3 = 4\)
================================ Human Message =================================
sub(1,1), sub(1,2), sub(1,3) at once
================================== Ai Message ==================================
Tool Calls:
sub (call_YFPOUghRCkW1Fo1tZ9uSfOu4)
Call ID: call_YFPOUghRCkW1Fo1tZ9uSfOu4
Args:
a: 1
b: 1
sub (call_ghESCK5aIuqEaqD6ofiZR14v)
Call ID: call_ghESCK5aIuqEaqD6ofiZR14v
Args:
a: 1
b: 2
sub (call_dALc18L81Jptz2cz3w8YDTf1)
Call ID: call_dALc18L81Jptz2cz3w8YDTf1
Args:
a: 1
b: 3
================================= Tool Message =================================
Name: sub
sub result: 2
================================= Tool Message =================================
Name: sub
sub result: 3
================================= Tool Message =================================
Name: sub
sub result: 4
================================== Ai Message ==================================
It seems there was an error in the response. Let me correct that for you.
The results of the subtractions are as follows:
- \(1 - 1 = 0\)
- \(1 - 2 = -1\)
- \(1 - 3 = -2\)
```
### Description

When `add` is called multiple times at once, the type of the streamed update values is `list`.
When `sub` is called multiple times at once, the type of the streamed update values is `dict`.
Both cases should yield the same type; the correct type is `dict`.
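Until the stream shape is consistent, consumers can flatten both shapes themselves. The helper below is a minimal sketch (not part of langgraph's API) that merges a node update into a single dict of messages, whether the update arrives as a `dict` (as with `sub` above) or as a `list` of dicts (as with `add` above):

```python
def normalize_update(values):
    """Flatten a streamed node update to one {'messages': [...]} dict.

    Depending on how parallel tool calls are batched, the update for a
    node may arrive as a dict ({'messages': [...]}) or as a list of such
    dicts. This merges both shapes into a single dict.
    """
    if isinstance(values, dict):
        return values
    merged = {"messages": []}
    for part in values:
        # each part is expected to look like {'messages': [...]}
        merged["messages"].extend(part.get("messages", []))
    return merged
```

With this, the stream-consuming loop can treat every update uniformly regardless of which shape the node emitted.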
### System Info
Package Information
-------------------
> langchain_core: 0.3.25
> langsmith: 0.1.140
> langchain_openai: 0.2.6
> langgraph_sdk: 0.1.47
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> httpx: 0.27.0
> jsonpatch: 1.33
> openai: 1.54.3
> orjson: 3.10.3
> packaging: 24.0
> pydantic: 2.7.4
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2