Building Production-Ready, Full Stack AI Agents With LangGraph, FastAPI, and Next.js 15
Live Demo Here
GitHub Repo Here
Shipping an AI agent from notebook to production-ready app usually means wiring up observability, auth, streaming UI, and persistence across two very different stacks. This template cuts straight through that friction by pairing a LangGraph and FastAPI microservice with a Next.js 15 frontend that already speaks the Vercel AI SDK stream protocol. You start with a pipeline that handles retrieval-augmented generation, tool calls, and real-time streaming UX out of the box.

The backend is a modernized fork of Joshua Carroll’s agent-service-toolkit: Python 3.13+, LangGraph graphs, Pydantic models, Uvicorn, and an AI SDK-compatible SSE layer. It already ships with history endpoints, multi-tenant settings through environment files, and optional Mongo or Postgres checkpointing. Because it is built around LangGraph, you can drop in single agents or orchestrate multi-agent workflows without fighting the HTTP layer.
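To make that concrete, here is a minimal sketch of the kind of LangGraph agent you could serve behind FastAPI. The model name and the weather tool are placeholders for illustration, not anything shipped in the template.

```python
# Minimal LangGraph agent sketch: one model node, one tool, a tool loop.
# Assumes langgraph and langchain-openai are installed; the model name and
# get_weather tool are hypothetical placeholders.
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    # add_messages appends new messages to history instead of overwriting it
    messages: Annotated[list[AnyMessage], add_messages]


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder tool)."""
    return f"It is sunny in {city}."


model = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_weather])


def call_model(state: State) -> dict:
    # One LLM step; any tool calls in the reply are routed by tools_condition
    return {"messages": [model.invoke(state["messages"])]}


builder = StateGraph(State)
builder.add_node("model", call_model)
builder.add_node("tools", ToolNode([get_weather]))
builder.add_edge(START, "model")
builder.add_conditional_edges("model", tools_condition)
builder.add_edge("tools", "model")
graph = builder.compile()
```

A compiled graph like this is what the service layer streams from; swapping in your own nodes or a multi-agent topology does not change how the HTTP layer sees it.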
On the frontend, we remix Vercel’s AI Chatbot template into a production-grade Next.js 15 canary app running React 19 RC. It keeps the polished session and auth patterns, SWR data hooks, Drizzle ORM, Tailwind 4 styles, and ai-sdk-ui components, then swaps the direct OpenAI calls for a proxy that routes every request through FastAPI. That keeps secrets on the server, preserves server-sent events, and lets you layer your own rate limits or traces.
Because both tiers share the AI SDK stream protocol, tool calls, tokens, and RAG artifacts flow as a single stream. The chat UI already handles streaming tool panes and history hydration, so you can focus on orchestrating better agents instead of debugging SSE payloads.
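For a feel of what that single stream looks like on the wire, here is a stripped-down FastAPI endpoint that emits AI SDK data-stream parts. The agent is stubbed out, and the part prefixes and header follow the Vercel AI SDK data stream protocol as Vercel documents it, so treat this as an illustrative sketch rather than the template’s actual implementation.

```python
# Illustrative FastAPI endpoint streaming AI SDK data-stream parts.
# "0:" marks a text delta and "d:" the finish message in the AI SDK
# data stream protocol; fake_agent_tokens stands in for a real graph run.
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def fake_agent_tokens():
    # Stand-in for tokens streamed out of a compiled LangGraph graph
    for token in ["Hello", ", ", "world", "!"]:
        yield token


@app.post("/stream")
async def stream():
    async def event_stream():
        async for token in fake_agent_tokens():
            # Text part: prefix 0, JSON-encoded string payload
            yield f"0:{json.dumps(token)}\n"
        # Finish part: tells the AI SDK client the run is complete
        yield 'd:{"finishReason":"stop"}\n'

    return StreamingResponse(
        event_stream(),
        media_type="text/plain",
        headers={"x-vercel-ai-data-stream": "v1"},
    )
```

Tool-call and tool-result parts ride the same stream with their own prefixes, which is why the chat UI can render tool panes without any extra plumbing.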
Key enhancements include:

- LangGraph agents wired for text and tool streaming, with Python 3.13 support baked into the dependency lock.
- A proxy-first Next.js server that routes every browser request through FastAPI.
- Drizzle-backed persistence, so you inherit a proven auth story.
- Streamlined chat extras such as automatic title generation, artifact viewers, and resilient history hydration when users reload.
To get started, clone the repo, copy the environment templates, and fill in your database URL along with any LLM keys. Run `docker compose up --build`, or sync the backend dependencies and start `python src/run_service.py`, then install the frontend dependencies with pnpm and run the dev server. Visit http://localhost:3000 and watch the UI stream via /api/fastapi/stream while FastAPI serves on http://localhost:8080. Drop your own agent graph into src/agents/ and emit AI SDK protocol events to surface new tool outputs instantly. When you head to production, plan on a single Docker Compose or orchestrated stack that brings up Postgres, FastAPI, and Next.js inside a private VPC with one ingress and one egress.
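Once the backend is up, a few lines of Python make a handy smoke test for the stream. The endpoint path and payload shape below are assumptions for illustration; check the repo’s API docs for the real contract.

```python
# Hypothetical smoke test: stream raw AI SDK parts from the local backend.
# The /stream path and message payload are illustrative assumptions.
import httpx

payload = {"messages": [{"role": "user", "content": "Hello, agent!"}]}

with httpx.stream(
    "POST", "http://localhost:8080/stream", json=payload, timeout=30
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        print(line)  # one AI SDK stream part per line
```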
The roadmap still includes integrating legacy Next.js hooks such as the old chat handlers and time-travel editing. A one-command docker compose bundle that spins up Postgres, FastAPI, and Next.js with seed data is on deck. Playwright suites, FastAPI tests, and fresh LangGraph agent examples are always welcome; open an issue, fork the project, and show the community what your agent can do.