LangGraph Engine
Overview
LangGraph is a Python framework from LangChain for building stateful, multi-actor AI agent applications as directed graphs. Unlike simple chain-based approaches, LangGraph makes the flow of your agent explicit and debuggable by representing it as a graph where nodes are computation steps and edges define transitions between them. State is managed through typed dictionaries with reducers that control how updates are merged, enabling complex patterns like cyclic agent loops, conditional routing, human-in-the-loop approval, and persistent multi-turn conversations. LangGraph compiles these graphs into runnables that support streaming, async execution, and checkpointing out of the box. It is the recommended framework for production agent systems that need reliability, observability, and fine-grained control over execution flow.
When to Use
- ReAct-style tool-calling agents: Build agents that reason, select tools, execute them, and iterate until the task is complete.
- Multi-agent orchestration: Coordinate multiple specialized agents (researcher, writer, reviewer) with explicit handoff logic.
- Human-in-the-loop workflows: Pause execution for human approval, modification, or input before proceeding.
- Stateful conversational agents: Maintain conversation history and accumulated context across multiple interactions using checkpointers.
- Complex conditional routing: Route requests to different processing paths based on classification, state conditions, or model output.
- Production agent deployments: When you need persistence, retry logic, streaming, and observability beyond what simple chains provide.
Quick Start
Installation
```
pip install langgraph langchain-openai
```
Minimal Agent
```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

tools = [search]
llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)

def agent(state: AgentState) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    if state["messages"][-1].tool_calls:
        return "tools"
    return END

graph = StateGraph(AgentState)
graph.add_node("agent", agent)
graph.add_node("tools", ToolNode(tools))
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, ["tools", END])
graph.add_edge("tools", "agent")

app = graph.compile()
result = app.invoke({"messages": [("user", "What is 25 * 4?")]})
```
Core Concepts
State Management with Reducers
State is the heart of LangGraph. Each node receives the current state and returns partial updates. Reducers define how updates are merged:
```python
from typing import Annotated, TypedDict
from operator import add

from langgraph.graph.message import add_messages

def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

class ResearchState(TypedDict):
    messages: Annotated[list, add_messages]      # Appends new messages
    findings: Annotated[dict, merge_dicts]       # Merges dictionaries
    sources: Annotated[list[str], add]           # Concatenates lists
    current_step: str                            # Overwrites (no reducer)
    errors: Annotated[int, lambda a, b: a + b]   # Sums integers

def researcher(state: ResearchState) -> dict:
    return {
        "findings": {"topic_a": "New finding"},
        "sources": ["source1.com"],
        "current_step": "researching",
    }
```
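To make the merge semantics concrete, here is a sketch of what each reducer does in isolation. LangGraph conceptually calls `reducer(current_value, node_update)` for each annotated field when folding a node's partial return into state; this runs as plain Python with no graph required (the values are illustrative, and `merge_dicts` mirrors the helper above):

```python
from operator import add

def merge_dicts(left: dict, right: dict) -> dict:
    # Same helper used in ResearchState above
    return {**left, **right}

# Each reducer is applied as reducer(current_value, node_update):
findings = merge_dicts({"topic_a": "old finding"}, {"topic_b": "new finding"})
# → {"topic_a": "old finding", "topic_b": "new finding"}

sources = add(["source1.com"], ["source2.com"])
# → ["source1.com", "source2.com"]

errors = (lambda a, b: a + b)(2, 1)
# → 3
```

A field without a reducer annotation skips this merge step entirely: the node's value simply replaces the old one.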
Conditional Routing
Route execution to different nodes based on state:
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class RouterState(TypedDict):
    query: str
    query_type: str
    result: str

def classifier(state: RouterState) -> dict:
    query = state["query"].lower()
    if "code" in query:
        return {"query_type": "coding"}
    elif "search" in query:
        return {"query_type": "search"}
    return {"query_type": "chat"}

def route_query(state: RouterState) -> str:
    return state["query_type"]

# Minimal stand-in handlers; replace with real agent nodes.
def coding_agent(state: RouterState) -> dict:
    return {"result": "handled by coding agent"}

def search_agent(state: RouterState) -> dict:
    return {"result": "handled by search agent"}

def chat_agent(state: RouterState) -> dict:
    return {"result": "handled by chat agent"}

graph = StateGraph(RouterState)
graph.add_node("classifier", classifier)
graph.add_node("coding", coding_agent)
graph.add_node("search", search_agent)
graph.add_node("chat", chat_agent)
graph.add_edge(START, "classifier")
graph.add_conditional_edges(
    "classifier",
    route_query,
    {"coding": "coding", "search": "search", "chat": "chat"},
)
graph.add_edge("coding", END)
graph.add_edge("search", END)
graph.add_edge("chat", END)

app = graph.compile()
```
Human-in-the-Loop with Interrupts
Pause the graph for human input using interrupts and checkpointers:
```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.types import interrupt, Command
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

class ApprovalState(TypedDict):
    messages: Annotated[list, add_messages]
    draft: str
    approved: bool

def generate_draft(state: ApprovalState) -> dict:
    draft = llm.invoke(state["messages"])
    return {"draft": draft.content}

def human_review(state: ApprovalState) -> dict:
    # interrupt() pauses execution here; its argument is surfaced to the
    # caller, and the value passed to Command(resume=...) is what it
    # returns when execution resumes.
    decision = interrupt({"draft": state["draft"]})
    return {"approved": decision == "approve"}

def publish(state: ApprovalState) -> dict:
    if state["approved"]:
        return {"messages": [("assistant", "Published!")]}
    return {"messages": [("assistant", "Rejected.")]}

checkpointer = PostgresSaver.from_conn_string("postgresql://...")

graph = StateGraph(ApprovalState)
graph.add_node("generate", generate_draft)
graph.add_node("review", human_review)
graph.add_node("publish", publish)
graph.add_edge(START, "generate")
graph.add_edge("generate", "review")
graph.add_edge("review", "publish")
graph.add_edge("publish", END)

app = graph.compile(checkpointer=checkpointer)

# Start execution (pauses at review)
result = app.invoke(
    {"messages": [("user", "Write a blog post about AI")]},
    config={"configurable": {"thread_id": "thread-1"}},
)

# Resume with the human decision
result = app.invoke(
    Command(resume="approve"),
    config={"configurable": {"thread_id": "thread-1"}},
)
```
Persistence with Checkpointers
```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.checkpoint.postgres import PostgresSaver

# Development: in-memory checkpointer
memory = MemorySaver()
app = graph.compile(checkpointer=memory)

# Production: PostgreSQL checkpointer
pg_checkpointer = PostgresSaver.from_conn_string(
    "postgresql://user:pass@localhost/langgraph"
)
app = graph.compile(checkpointer=pg_checkpointer)

# Every invocation with a thread_id persists state
config = {"configurable": {"thread_id": "user-session-42"}}
result = app.invoke({"messages": [("user", "Hello")]}, config=config)

# Later: resume the same conversation
result = app.invoke({"messages": [("user", "Follow up")]}, config=config)
```
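Checkpointers also make past state inspectable. As a sketch (not runnable standalone; it assumes the `app` and `config` from the snippet above), a compiled graph exposes `get_state` and `get_state_history` for this:

```python
# Latest checkpoint for this thread
snapshot = app.get_state(config)
print(snapshot.values)  # current state dict
print(snapshot.next)    # nodes scheduled to run next, if any

# Full checkpoint history for the thread, newest first
for snap in app.get_state_history(config):
    print(snap.config["configurable"]["checkpoint_id"], snap.values.keys())
```

This is useful for debugging a stuck thread: an interrupted graph shows the paused node in `snapshot.next`.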
Configuration Reference
| Parameter | Description | Default |
|---|---|---|
| checkpointer | Persistence backend (MemorySaver, PostgresSaver) | None |
| interrupt_before | List of node names to pause before | [] |
| interrupt_after | List of node names to pause after | [] |
| debug | Enable debug mode with execution traces | False |
| thread_id | Unique identifier for a persistent conversation | Required for persistence |
| recursion_limit | Maximum number of graph steps per invocation | 25 |

Note that checkpointer, interrupt_before, interrupt_after, and debug are passed to compile(), while thread_id and recursion_limit are runtime settings passed in the config dict at invocation time.
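As a sketch of where each option goes (compile-time vs. invocation-time; the node name "review" and the graph objects reuse the earlier examples and are illustrative):

```python
app = graph.compile(
    checkpointer=pg_checkpointer,   # persistence backend
    interrupt_before=["review"],    # pause before these nodes
    debug=True,                     # verbose execution traces
)

config = {
    "configurable": {"thread_id": "user-session-42"},  # runtime, per conversation
    "recursion_limit": 50,                             # runtime, per invocation
}
result = app.invoke({"messages": [("user", "Hello")]}, config=config)
```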
State Configuration
| Pattern | Reducer | Behavior |
|---|---|---|
| Annotated[list, add_messages] | add_messages | Appends messages, handles dedup |
| Annotated[list, add] | operator.add | Concatenates lists |
| Annotated[dict, merge_dicts] | Custom | Merges dictionaries |
| No annotation | None | Overwrites value |
| Annotated[int, lambda a,b: a+b] | Custom | Sums values |
Best Practices
- Always define explicit exit conditions: Every cyclic graph must have a clear path to END. Use iteration counters in state and check them in routing functions to prevent infinite loops that burn tokens and money.
- Use typed state with reducers: Define your state as a TypedDict with Annotated reducers for every field that accumulates data. This prevents accidental overwrites and makes state evolution predictable.
- Use PostgreSQL checkpointers in production: MemorySaver is fine for development, but production systems need PostgreSQL persistence to survive restarts, enable human-in-the-loop, and support multi-process deployments.
- Keep nodes small and focused: Each node should do one thing. Split complex logic into multiple nodes connected by edges. This makes the graph debuggable and individual nodes testable.
- Use meaningful thread IDs: Attach thread IDs to user sessions or task identifiers so checkpoints map to real-world conversations. This enables resumption and debugging of specific user flows.
- Add iteration limits to state: Include a counter field that increments each cycle. Check it in your routing function and route to END when exceeded. This is your safety net against runaway agents.
- Prefer conditional edges over complex node logic: Move routing decisions into dedicated classifier nodes with conditional edges rather than embedding complex if/else logic inside nodes.
- Stream intermediate results: Use app.stream() instead of app.invoke() for user-facing applications. This provides real-time feedback as the agent works through its graph.
- Test graphs with deterministic inputs: Write unit tests for individual nodes and integration tests for complete graph paths. Mock LLM responses to test routing logic without API calls.
- Use LangGraph Studio for visualization: The LangGraph Studio tool lets you visualize your graph structure, step through execution, and inspect state at each node during development.
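The iteration-limit practice above can be sketched as a plain routing function, testable without running a graph. This is a minimal sketch under assumptions: MAX_ITERATIONS, the field names, and the node name "agent" are illustrative, and "__end__" is the string value behind LangGraph's END constant:

```python
from typing import Annotated, TypedDict

MAX_ITERATIONS = 5  # illustrative budget; tune per workload

class LoopState(TypedDict):
    # The lambda reducer sums updates, so a node returning
    # {"iterations": 1} increments the counter by one each cycle.
    iterations: Annotated[int, lambda a, b: a + b]
    done: bool

def route(state: LoopState) -> str:
    # Hard stop once the budget is exhausted, even if the agent isn't done
    if state["done"] or state["iterations"] >= MAX_ITERATIONS:
        return "__end__"  # END resolves to this sentinel string
    return "agent"
```

In the graph this would be wired with add_conditional_edges("agent", route, {"agent": "agent", "__end__": END}), giving the cycle a guaranteed exit.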
Troubleshooting
Agent loops forever without reaching END
Add an iteration counter to your state and check it in the routing function. Set recursion_limit when compiling the graph as a hard safety limit. Log the routing decision at each step to identify where the loop occurs.
State updates overwrite instead of accumulate
Ensure you are using Annotated[type, reducer] for fields that should accumulate. Without a reducer annotation, LangGraph overwrites the field with the new value. Check that your reducer function signature matches (old, new) -> merged.
Checkpointer not persisting across restarts
Verify that you are passing the same thread_id in the config. MemorySaver does not persist across process restarts; switch to PostgresSaver for production. Ensure the database connection string is correct and the database is accessible.
Human-in-the-loop interrupt not pausing
Interrupts require a checkpointer to be configured. Verify that interrupt() is called inside a node function (not outside the graph). Check that you are using Command(resume=...) to continue execution after the interrupt.
Conditional edges routing incorrectly
Print the state before the routing function to verify the values being used for routing decisions. Ensure the routing function returns exact string matches for the node names defined in the conditional edges mapping.