What this does
This PR adds backend support for LangGraph checkpoint persistence — the mechanism LangGraph uses to save and restore agent state between messages (conversation history, channel values, pending writes, etc.).
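For readers who have not used LangGraph checkpointing before, the sketch below shows where a checkpointer plugs into a graph. It uses LangGraph's stock in-memory saver purely to stay self-contained; the point of this PR is to let an HTTP-backed saver fill that slot instead of a direct Postgres connection. The tiny agent node and the thread id are illustrative, not taken from this repo.

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph


def agent(state: MessagesState) -> dict:
    # Stand-in node; a real agent would call an LLM here.
    return {"messages": [("assistant", "hello")]}


builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_edge(START, "agent")

# Any BaseCheckpointSaver can fill this slot; the in-memory saver is used here
# only to keep the example self-contained.
graph = builder.compile(checkpointer=MemorySaver())

# The thread_id scopes the checkpoint: invoking again with the same id resumes
# from the saved state instead of starting a fresh conversation.
config = {"configurable": {"thread_id": "conversation-123"}}
graph.invoke({"messages": [("user", "hi")]}, config)
```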
Why we need this
LangGraph agents need to persist their state (checkpoints) to a database. The built-in approach (AsyncPostgresSaver) has each agent connect directly to Postgres with its own connection pool. This doesn't scale — as we spin up more LangGraph agent pods, we'd hit connection limits quickly. This is the same problem we already solved for Temporal: instead of agents talking to the DB directly, they go through the backend API, which uses a shared connection pool.
Why Postgres (not MongoDB)
Even though agent state currently lives in MongoDB, we chose Postgres for checkpoint storage. There have been some reliability concerns around MongoDB recently, and there's a potential future migration to Postgres, so keeping new storage in Postgres is the more future-proof choice. The checkpoint tables are independent and don't conflict with existing MongoDB state storage.
How it works
The pattern mirrors what we do with Temporal. The agent doesn't know about the database — it talks to the backend API, and the backend handles the DB operations.
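For illustration, the agent-side half of that call path (delivered by the companion SDK PR mentioned below) might look roughly like this sketch. The endpoint paths and the base64 encoding of blob bytes come from this PR's description; the class body, the request field names, and the error handling are assumptions, not the actual implementation.

```python
import base64
from typing import Any, Sequence

import httpx
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.base import BaseCheckpointSaver


class HttpCheckpointSaver(BaseCheckpointSaver):
    """Illustrative saver: each BaseCheckpointSaver method becomes one backend call."""

    def __init__(self, base_url: str) -> None:
        super().__init__()
        self._client = httpx.AsyncClient(base_url=base_url)

    async def aput_writes(
        self,
        config: RunnableConfig,
        writes: Sequence[tuple[str, Any]],
        task_id: str,
        task_path: str = "",
    ) -> None:
        # Pending writes carry arbitrary Python values; serialize each one with the
        # SDK's serde and base64-encode the bytes so they survive JSON transport.
        serialized = []
        for idx, (channel, value) in enumerate(writes):
            type_, blob = self.serde.dumps_typed(value)
            serialized.append(
                {
                    "idx": idx,
                    "channel": channel,
                    "type": type_,
                    "blob_b64": base64.b64encode(blob).decode("ascii"),
                }
            )
        resp = await self._client.post(
            "/checkpoints/put-writes",
            json={
                "thread_id": config["configurable"]["thread_id"],
                "checkpoint_id": config["configurable"]["checkpoint_id"],
                "task_id": task_id,
                "writes": serialized,
            },
        )
        resp.raise_for_status()

    async def adelete_thread(self, thread_id: str) -> None:
        # Maps 1:1 onto the delete-thread endpoint.
        resp = await self._client.post(
            "/checkpoints/delete-thread", json={"thread_id": thread_id}
        )
        resp.raise_for_status()
```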
On the backend side, we:
- Add ORM models and an Alembic migration for the four checkpoint tables (checkpoints, checkpoint_blobs, checkpoint_writes, checkpoint_migrations) — these mirror the schema that LangGraph's own AsyncPostgresSaver uses
- Add a repository that re-implements the storage logic of AsyncPostgresSaver using our SQLAlchemy patterns (composite primary keys, JSONB metadata, upserts via ON CONFLICT); see the sketch after this list
- Add 5 endpoints under /checkpoints (get-tuple, put, put-writes, list, delete-thread) — one for each method on LangGraph's BaseCheckpointSaver, with a use case layer and Pydantic schemas in between

Binary blob data (serialized Python objects) is base64-encoded for JSON transport. The actual serialization/deserialization stays in the SDK — the backend just stores and retrieves raw JSONB + bytes.
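To make those pieces concrete, here is a heavily trimmed sketch of one table and its upsert path, assuming SQLAlchemy 2.0 with asyncio and Pydantic. The table and column names follow LangGraph's checkpoint schema; the class names, the reduced column set, and the request model are illustrative rather than the actual implementation.

```python
from pydantic import BaseModel, Field
from sqlalchemy import String
from sqlalchemy.dialects.postgresql import JSONB, insert
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    # Stand-in for the project's existing declarative Base.
    pass


class CheckpointRow(Base):
    __tablename__ = "checkpoints"

    # Composite primary key, mirroring the schema AsyncPostgresSaver creates.
    thread_id: Mapped[str] = mapped_column(String, primary_key=True)
    checkpoint_ns: Mapped[str] = mapped_column(String, primary_key=True, default="")
    checkpoint_id: Mapped[str] = mapped_column(String, primary_key=True)

    parent_checkpoint_id: Mapped[str | None] = mapped_column(String, nullable=True)
    checkpoint: Mapped[dict] = mapped_column(JSONB)  # checkpoint body stays raw JSONB
    # "metadata" is reserved on declarative classes, hence the trailing underscore.
    metadata_: Mapped[dict] = mapped_column("metadata", JSONB, default=dict)


class PutCheckpointRequest(BaseModel):
    """Pydantic schema for the put route; field names are illustrative."""

    thread_id: str
    checkpoint_ns: str = ""
    checkpoint_id: str
    checkpoint: dict
    metadata: dict = Field(default_factory=dict)


async def put_checkpoint(session: AsyncSession, body: PutCheckpointRequest) -> None:
    """Repository-style upsert on the composite PK via ON CONFLICT DO UPDATE."""
    stmt = (
        insert(CheckpointRow)
        .values(
            {
                CheckpointRow.thread_id: body.thread_id,
                CheckpointRow.checkpoint_ns: body.checkpoint_ns,
                CheckpointRow.checkpoint_id: body.checkpoint_id,
                CheckpointRow.checkpoint: body.checkpoint,
                CheckpointRow.metadata_: body.metadata,
            }
        )
        .on_conflict_do_update(
            index_elements=["thread_id", "checkpoint_ns", "checkpoint_id"],
            # Keys here are database column names.
            set_={"checkpoint": body.checkpoint, "metadata": body.metadata},
        )
    )
    await session.execute(stmt)
    await session.commit()
```

The FastAPI route for /checkpoints/put would just validate the request body with the Pydantic schema, hand it to the use case layer, and call a repository function like the one above.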
Companion PRs
- A companion SDK PR replaces AsyncPostgresSaver with HttpCheckpointSaver, which calls these endpoints
- The create_checkpointer() API is unchanged

Test plan
🤖 Generated with Claude Code