
Lessons learnt from upgrading to LangChain 1.0 in production


What worked, what broke, and why I did it

introduction

LangChain shipped its first stable v1.0 release in late October 2025. After spending the past two months working with their new APIs, I genuinely feel this is the most coherent and thoughtfully designed version of LangChain to date.

I wasn't always a LangChain fan. The early versions were fragile and poorly documented, the abstractions shifted frequently, and it felt too premature to use in production. But v1.0 feels more intentional, with a more consistent mental model for how data should flow through agents and tools.

This article isn't here to regurgitate the docs. I'm assuming you've already dabbled with LangChain (or are a heavy user). Rather than dumping a laundry list of points, I'm going to cherry-pick just four key points.

a quick recap: LangChain, LangGraph & LangSmith

At a high level, LangChain is a framework for building LLM apps and agents, allowing devs to ship AI features fast with common abstractions.

LangGraph is the graph-based execution engine that runs durable, stateful agent workflows in a controllable way. Finally, LangSmith is an observability platform for tracing and monitoring.

Put simply: LangChain helps you build agents fast, LangGraph runs them reliably, and LangSmith lets you monitor and improve them in production.

my stack

For context, most of my recent work focuses on building multi-agent features for a customer-facing AI platform at work. My backend stack is FastAPI, with Pydantic powering schema validation and data contracts.

lesson 1: dropping support for Pydantic models

A major shift in the migration to v1.0 was the introduction of the new create_agent method. It streamlines how agents are defined and invoked, but it also drops support for Pydantic models and dataclasses in agent state. Everything must now be expressed as TypedDicts extending AgentState.

If you're using FastAPI, Pydantic is often the recommended and default schema validator. I valued schema consistency across the codebase and felt that mixing TypedDicts and Pydantic models would inevitably create confusion — especially for new engineers who might not know which schema format to follow.

To solve this, I introduced a small helper that converts a Pydantic model into a TypedDict extending AgentState right before it's passed to create_agent. It is critical to note that LangChain attaches custom metadata to type annotations, and that metadata must be preserved. Python utilities like get_type_hints() strip these annotations by default, so a naïve conversion won't work.
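A minimal sketch of such a helper, using only the standard library. The names here are stand-ins: `AgentState` below imitates LangChain's real class (which lives in the langchain package and tags `messages` with reducer metadata via `Annotated`), `TicketModel` stands in for a Pydantic model, and `as_agent_state` is a hypothetical helper name. The key detail is `include_extras=True`, which keeps the `Annotated[...]` metadata that `get_type_hints()` would otherwise drop:

```python
from typing import Annotated, TypedDict, get_type_hints

# Stand-in for LangChain's AgentState: `messages` carries Annotated
# metadata (here just a string tag) that must survive the conversion.
class AgentState(TypedDict):
    messages: Annotated[list, "add_messages"]

# Stand-in for a Pydantic model; get_type_hints() reads annotations
# the same way off a real BaseModel subclass.
class TicketModel:
    ticket_id: str
    summary: str

def as_agent_state(model_cls, name=""):
    """Build a TypedDict merging AgentState's fields with the model's.

    include_extras=True is the crucial part: without it, get_type_hints()
    strips the Annotated[...] metadata LangChain relies on.
    """
    fields = {
        **get_type_hints(AgentState, include_extras=True),
        **get_type_hints(model_cls, include_extras=True),
    }
    return TypedDict(name or f"{model_cls.__name__}AgentState", fields)

TicketAgentState = as_agent_state(TicketModel)
```

In my real codebase the Pydantic model stays the single source of truth for FastAPI, and the TypedDict is derived from it at the create_agent boundary only.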

lesson 2: deep agents, default middleware & the lack of control

Alongside the new create_agent API in LangChain 1.0 came something that caught my attention: the deepagents library. Inspired by tools like Claude Code and Manus, deep agents can plan, break tasks into steps, and even spawn subagents.

When I first saw this, I wanted to use it everywhere. Why wouldn't you want "smarter" agents, right? But after trying it across several workflows, I realised that this extra autonomy was sometimes unnecessary — and in certain cases, counterproductive — for my use cases.

Each deep agent comes with some built-in middleware — things like ToDoListMiddleware, FilesystemMiddleware, SummarizationMiddleware, etc. These shape how the agent thinks, plans, and manages context. The issue is that, at the time of writing, you can't control when these middleware run, nor can you disable the ones you don't need.

Digging into the deepagents source code shows why: the middleware parameter is documented as additional middleware to apply after the standard middleware. Anything you pass in middleware=[...] gets appended after the defaults.
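In plain Python, the composition behaves roughly like this (illustrative only, not deepagents' actual code; the list entries mirror the built-ins named above):

```python
# The built-in stack every deep agent gets, in order.
DEFAULT_MIDDLEWARE = [
    "ToDoListMiddleware",
    "FilesystemMiddleware",
    "SummarizationMiddleware",
]

def build_middleware_stack(extra=()):
    # User-supplied middleware can only be appended after the defaults;
    # there is no hook to drop or reorder the built-ins.
    return [*DEFAULT_MIDDLEWARE, *extra]
```

So even an empty middleware=[] still runs the full default stack.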

All this extra orchestration also introduced noticeable latency, without providing meaningful benefits for my specific workloads.

I'm not saying deep agents are bad — they're powerful in the right scenarios. However, this is a good reminder of a classic engineering principle: don't chase the "shiny" thing. Use the tech that solves your actual problem, even if it's the "less glamorous" option.

my favourite feature: structured output

Having deployed agents in production, especially ones that integrate with deterministic enterprise systems, I found that getting agents to consistently produce output matching a specific schema was crucial.

LangChain 1.0 makes this pretty easy. You can define a schema (e.g., a Pydantic model) and pass it to create_agent via the response_format parameter. The agent then produces output that conforms to that schema within a single agent loop with no additional steps.

This has been incredibly useful whenever I need the agent to strictly adhere to a JSON structure with certain fields guaranteed. So far, structured output has been very reliable too.
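Here's roughly what that looks like. The `RefundDecision` schema is a made-up example, and the commented-out agent call is a sketch of the v1.0 API assuming a configured model provider (the model id and tool list are placeholders):

```python
from pydantic import BaseModel

class RefundDecision(BaseModel):
    approved: bool
    amount: float
    reason: str

# Sketch of wiring the schema into create_agent (needs a real provider):
#
# from langchain.agents import create_agent
#
# agent = create_agent(
#     model="openai:gpt-4o",          # placeholder model id
#     tools=[],                       # your tools here
#     response_format=RefundDecision,
# )
# result = agent.invoke({"messages": [("user", "Refund order #123?")]})
# result["structured_response"]       # a validated RefundDecision instance
```

The downstream system then consumes a validated Pydantic object rather than parsing free-form model text.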

what I want to explore more of: middleware

One of the trickiest parts of building reliable agents is context engineering — making sure the agent always has the right information at the right time. Middleware was introduced to give developers precise control over each step of the agent loop, and I think it is worth diving deeper into.

Middleware can mean different things depending on context (pun intended). In LangGraph, this can mean controlling the exact sequence of node execution. In long-running conversations, it might involve summarising accumulated context before the next LLM call. In human-in-the-loop scenarios, middleware can pause execution and wait for a user to approve or reject a tool call.
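To make the hook idea concrete, here is a toy, framework-free sketch of the pattern (this is not LangChain's actual middleware API; class and function names are made up). `TrimHistory` plays the role of a context-management middleware, like summarisation, by shrinking state before each model call:

```python
# Toy sketch of the agent-loop hook pattern -- not LangChain's real API.
class Middleware:
    def before_model(self, state):
        return state

    def after_model(self, state):
        return state

class TrimHistory(Middleware):
    """Stand-in for summarisation: keep only the last `keep` messages."""
    def __init__(self, keep=4):
        self.keep = keep

    def before_model(self, state):
        return {**state, "messages": state["messages"][-self.keep:]}

def model_step(state, middleware, call_model):
    # Each middleware sees the state before the model call, and again
    # (in reverse order) after it.
    for m in middleware:
        state = m.before_model(state)
    state = call_model(state)
    for m in reversed(middleware):
        state = m.after_model(state)
    return state
```

The same before/after shape covers the other use cases: a human-in-the-loop middleware would pause inside its hook, and a sequencing middleware would rewrite state to steer the next node.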

More recently, in the latest v1.1 minor release, LangChain also added a model retry middleware with configurable exponential backoff, allowing graceful recovery from transient endpoint errors.
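The pattern that middleware automates is classic retry-with-backoff. A standalone sketch (illustrative; the function and parameter names here are mine, not LangChain's):

```python
import random
import time

def call_with_retry(fn, max_attempts=3, base_delay=0.5):
    """Retry fn with exponential backoff plus jitter.

    Delays grow as base_delay * 2**attempt (0.5s, 1s, 2s, ...), with a
    little random jitter so many clients don't retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the transient error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Having this live in middleware means every model call in the loop gets the behaviour without each call site wrapping itself.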

I personally think middleware is a game changer as agentic workflows get more complex, long-running, and stateful, especially when you need fine-grained control or robust error handling.

to end off

That's it for now — four key reflections from what I've learnt so far about LangChain. And if anyone from the LangChain team happens to be reading this, I'm always happy to share user feedback anytime or simply chat :)

Have fun building!