---
description: best practices
globs: 
alwaysApply: false
---
The most robust pattern is to treat every agent node as a pure function AgentState → Command, where AgentState is an explicit, typed snapshot of everything the rest of the graph must know.
My overall confidence that the practices below will remain valid for 12 months is 85% (expert opinion).

1 Design a single source of truth for state

| Guideline | Why it matters | Key LangGraph API |
| --- | --- | --- |
| Define a typed schema (`TypedDict` or `pydantic.BaseModel`) for the whole graph. (langchain-ai.github.io) | Static typing catches missing keys early, and the schema doubles as a living design spec. | `StateGraph(YourState)` |
| Use channel annotations such as `Annotated[list[BaseMessage], operator.add]` on mutable fields. (langchain-ai.github.io) | Makes accumulation (`+`) vs. overwrite explicit and prevents accidental loss of history. | `messages: Annotated[list[BaseMessage], operator.add]` |
| Keep routing out of business data; store the next hop in a dedicated field. (langchain-ai.github.io) | Separates control flow from payload; easier to debug and replay. | `next: Literal["planner", "researcher", "__end__"]` |
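A minimal sketch of such a schema (plain `str` messages stand in for `BaseMessage` to keep the example dependency-free; field names are illustrative):

```python
import operator
from typing import Annotated, Literal, TypedDict

class AgentState(TypedDict):
    # Channel annotation: updates to `messages` are appended, never overwritten.
    messages: Annotated[list[str], operator.add]
    # Routing lives in its own field, separate from business data.
    next: Literal["planner", "researcher", "__end__"]

# The reducer LangGraph applies when a node's update touches `messages`:
merged = operator.add(["user: plan a trip"], ["planner: step 1 drafted"])
print(merged)  # ['user: plan a trip', 'planner: step 1 drafted']
```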

2 Pass information with Command objects
Pattern

```python
from typing import Literal
from langgraph.types import Command

def planner(state: AgentState) -> Command[Literal["researcher", "executor", "__end__"]]:
    decision = model.invoke(state["messages"])  # the LLM chooses the next hop
    return Command(
        goto=decision["next"],
        update={
            "messages": [decision["content"]],
            "plan": decision["plan"],
        },
    )
```
Best-practice notes

- Always pass changes via `update=…` rather than mutating the state in place. This guarantees immutability between nodes and makes time-travel debugging deterministic. (langchain-ai.github.io)
- When handing off between sub-graphs, set `graph=Command.PARENT` (or the target sub-graph's name) so orchestration stays explicit. (langchain-ai.github.io)
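To see why `update=` plus reducers beats in-place mutation, here is a toy merge loop. It is an analogy, not LangGraph's actual implementation; `REDUCERS` and `apply_update` are hypothetical names:

```python
import operator

# Annotated channels accumulate via their reducer; everything else overwrites.
REDUCERS = {"messages": operator.add}  # hypothetical channel -> reducer map

def apply_update(state: dict, update: dict) -> dict:
    new_state = dict(state)  # copy: never mutate the previous snapshot
    for key, value in update.items():
        if key in REDUCERS:
            new_state[key] = REDUCERS[key](state.get(key, []), value)
        else:
            new_state[key] = value
    return new_state

s0 = {"messages": ["user: hi"], "plan": None}
s1 = apply_update(s0, {"messages": ["assistant: hello"], "plan": "greet"})
print(s1["messages"])  # ['user: hi', 'assistant: hello']
print(s0["messages"])  # ['user: hi'] -- old snapshot intact, so replay works
```

Because `s0` survives unchanged, a checkpointer can rewind to it at any time.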

3 Choose a message-sharing strategy early

| Strategy | Pros | Cons | When to use |
| --- | --- | --- | --- |
| Shared scratch-pad: every intermediate LLM thought is stored in `messages`. (langchain-ai.github.io) | Maximum transparency; great for debugging & reflection. | Context-window bloat; higher cost and latency. | ≤ 3 specialist agents or short tasks. |
| Final result only: each agent keeps a private scratch-pad and shares only its final answer. (langchain-ai.github.io) | Scales to 10+ agents; small token footprint. | Harder to post-mortem; agents need local memory. | Large graphs; production workloads. |

Tip: even if you hide scratch-pads, store them in a per-agent key (e.g. `researcher_messages`) so they remain available for replay or fine-tuning. (langchain-ai.github.io)
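A sketch of the final-result-only pattern with a per-agent key (node and key names are illustrative; the scratch-pad list stands in for real LLM turns):

```python
def researcher_node(state: dict) -> dict:
    """Return a state update: full scratch-pad privately, final answer publicly."""
    scratch = ["thought: query the docs", "thought: cross-check sources"]
    final_answer = "The capital of France is Paris."
    return {
        "researcher_messages": scratch,  # private: kept for replay / fine-tuning
        "messages": [final_answer],      # public: the only thing peers ever see
    }

update = researcher_node({"messages": []})
print(update["messages"])  # ['The capital of France is Paris.']
```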

4 Inject only what a tool needs
When exposing sub-agents as tools under a supervisor:

```python
from typing import Annotated
from langgraph.prebuilt import InjectedState

def researcher(state: Annotated[AgentState, InjectedState]):
    ...
```

Why: keeps tool signatures clean and prevents leaking confidential state.
Extra: if the tool must update global state, have it return a `Command` so the supervisor doesn't have to guess what changed. (langchain-ai.github.io)
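The effect of `InjectedState` can be illustrated with a toy schema filter. The `visible_params` helper is hypothetical; the real filtering happens inside LangGraph's tool-binding machinery:

```python
import inspect

def visible_params(fn, hidden=("state",)):
    """Toy stand-in: the tool schema shown to the LLM omits injected parameters."""
    return [name for name in inspect.signature(fn).parameters if name not in hidden]

def search_tool(query: str, state: dict) -> str:
    # `query` comes from the LLM; `state` is injected by the runtime.
    return f"searched {query!r} over {len(state.get('docs', []))} docs"

print(visible_params(search_tool))  # ['query']
```

The LLM only ever sees `query`, so the full state never leaks into the prompt.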

5 Structure the graph for clarity & safety
- Network ➜ every agent connects to every other (exploration, research prototypes).
- Supervisor ➜ one LLM decides routing (a good default for 3-7 agents).
- Hierarchical ➜ teams of agents with team-level supervisors (scales past ~7 agents). (langchain-ai.github.io)

Pick the simplest architecture that meets today's needs; refactor into sub-graphs as complexity grows.

6 Operational best practices

| Concern | Best practice |
| --- | --- |
| Tracing & observability | Attach a LangFuse run ID to every `AgentState` at graph entry; emit state snapshots on node enter/exit so traces line up with LangFuse v3 spans. |
| Memory & persistence | Use a checkpointer for cheap disk-based snapshots, or a Redis backend for high-QPS workloads; time-travel from a checkpoint when an LLM stalls. |
| Parallel branches | Fan out calls with map edges (built in), but cap parallelism with an `asyncio` semaphore to avoid API rate limits. |
| Vector lookup | Put retrieval results in a dedicated key (`docs`) so they don't clutter `messages`; store only document IDs if you need to replay cheaply. |
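The parallel-branches advice can be sketched as follows (`call_agent` stands in for a real LLM/API call; names and the limit are illustrative):

```python
import asyncio

async def call_agent(name: str, sem: asyncio.Semaphore) -> str:
    # The semaphore caps how many branches hit the API at once.
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for an LLM/API call
        return f"{name}: done"

async def fan_out(names: list[str], limit: int = 2) -> list[str]:
    sem = asyncio.Semaphore(limit)
    # gather preserves input order even though execution interleaves.
    return list(await asyncio.gather(*(call_agent(n, sem) for n in names)))

results = asyncio.run(fan_out(["planner", "researcher", "critic"]))
print(results)  # ['planner: done', 'researcher: done', 'critic: done']
```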

7 Evidence from the literature (why graphs work)

| Peer-reviewed source | Key takeaway | Credibility (0-10) |
| --- | --- | --- |
| AAAI 2024 *Graph of Thoughts* shows arbitrary-graph reasoning beats tree/chain structures by up to 62% on sorting tasks. (arxiv.org) | Graph topology yields better exploration and feedback loops, mirroring LangGraph's `StateGraph`. | 9 |
| EMNLP 2024 *EPO: Hierarchical LLM Agents* demonstrates that hierarchical agents outperform flat agents on ALFRED by >12% and scale with preference-based training. (aclanthology.org) | Validates splitting planning vs. execution agents (supervisor + workers). | 9 |

| Non-peer-reviewed source | Why included | Credibility |
| --- | --- | --- |
| Official LangGraph docs (June 2025). (langchain-ai.github.io) | Primary specification of the library's APIs and guarantees. | 8 |

8 Minimal starter template (v0.6.*)

```python
import operator
from typing import Annotated, Literal, Sequence, TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START
from langgraph.types import Command

class AgentState(TypedDict):
    messages: Annotated[Sequence[str], operator.add]
    next: Literal["planner", "researcher", "__end__"]
    plan: str | None

llm = ChatOpenAI()

def planner(state: AgentState) -> Command[Literal["researcher", "__end__"]]:
    # Assumes the model is configured (e.g. via structured output) to return a
    # dict with "next", "content", and "plan" keys.
    resp = llm.invoke(state["messages"])
    return Command(
        goto=resp["next"],
        update={"messages": [resp["content"]],
                "plan": resp["plan"]},
    )

def researcher(state: AgentState) -> Command[Literal["planner"]]:
    resp = llm.invoke(state["messages"])
    return Command(goto="planner",
                   update={"messages": [resp["content"]]})

g = StateGraph(AgentState)
g.add_node(planner)
g.add_node(researcher)
g.add_edge(START, "planner")
# No static or conditional edges between the agents: each node's
# Command(goto=...) carries the routing decision.
graph = g.compile()
```
Bottom line
Use typed immutable state, route with Command, and keep private scratch-pads separate from shared context. These patterns align with both the latest LangGraph APIs and empirical results from hierarchical, graph-based agent research.