How to build a Frontend for LangChain Deep Agents with CopilotKit!


TL;DR

This article demonstrates how to integrate LangChain Deep Agents with a Next.js frontend using CopilotKit for real-time synchronization. It covers building a job search assistant that plans, delegates tasks, and streams updates to the UI, with step-by-step guidance on architecture and implementation.

Key Takeaways

  • LangChain Deep Agents provide a structured, multi-agent system with built-in planning, filesystem context, and subagent spawning for complex workflows.
  • CopilotKit enables real-time frontend synchronization with agent execution by streaming state updates and events, using middleware like CopilotKitMiddleware.
  • The implementation involves setting up a Next.js frontend with CopilotKit components, proxying requests to a FastAPI backend, and integrating tools for tasks like resume parsing and job searching.

Tags

react, programming, python, tutorial

LangChain recently introduced Deep Agents: a new way to build structured, multi-agent systems that can plan, delegate, and reason across multiple steps.

It comes with built-in planning, a filesystem for context, and subagent spawning. But connecting that agent to a real frontend is still surprisingly hard.

Today, we will build a Deep Agents-powered job search assistant and connect it to a live Next.js UI with CopilotKit, so the frontend stays in sync with the agent in real time.

You will find the architecture, the key patterns, how state flows between the UI and the agent, and a step-by-step guide to building this from scratch.

Let's build it.

Check out CopilotKit's GitHub ⭐️


1. What are Deep Agents?

Most agents today are just “LLM in a loop + tools”. That works, but it tends to be shallow: no explicit plan, weak long-horizon execution, and messy state as runs get longer.

Popular agents like Claude Code, Deep Research, and Manus get around this by following a common pattern: they plan first, externalize working context (often via files or a shell), and delegate isolated pieces of work to sub-agents.

Deep Agents package those primitives into a reusable agent runtime.

Instead of designing your own agent loop from scratch, you call create_deep_agent(...) and get a pre-wired execution graph that already knows how to plan, delegate and manage state across many steps.

(Image: Deep Agents overview. Credit: LangChain)

 

At a practical level, a Deep Agent created via create_deep_agent is just a LangGraph graph. There’s no separate runtime or hidden orchestration layer.

That means standard LangGraph features work as-is:

  • streaming
  • checkpoints and interrupts
  • human-in-the-loop controls
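
Because a Deep Agent is just a compiled LangGraph graph, you can, for example, stream its state with the standard .stream(...) API and watch the plan evolve. Here is a rough sketch (the model string and prompt are placeholders; it assumes the deepagents package and an OpenAI API key are available):

from deepagents import create_deep_agent

agent = create_deep_agent(
    model="openai:gpt-4o",
    system_prompt="You are a helpful research assistant.",
)

# stream_mode="values" yields the full agent state after every step,
# so you can watch the todo list and files evolve as the agent works.
for state in agent.stream(
    {"messages": [{"role": "user", "content": "Research three remote backend roles"}]},
    stream_mode="values",
):
    print(state.get("todos"), list(state.get("files", {}).keys()))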

The mental model (how it runs)

Conceptually, the execution flow looks like this:

User goal
  ↓
Deep Agent (LangGraph StateGraph)
  ├─ Plan: write_todos → updates "todos" in state
  ├─ Delegate: task(...) → runs a subagent with its own tool loop
  ├─ Context: ls/read_file/write_file/edit_file → persists working notes/artifacts
  ↓
Final answer

That gives you a usable structure for “plan → do work → store intermediate artifacts → continue” without inventing your own plan format, memory layer or delegation protocol.

You can read more at blog.langchain.com/deep-agents and in the official docs.

 

Where CopilotKit Fits

Deep Agents push key parts into explicit state (e.g. todos + files + messages), which makes runs easier to inspect. That explicit state is also what makes CopilotKit integration possible.

CopilotKit is a frontend runtime that keeps UI state in sync with agent execution by streaming agent events and state updates in real time (using AG-UI under the hood).

The CopilotKitMiddleware shown below is what allows the frontend to stay in lock-step with the agent as it runs. You can read the docs at docs.copilotkit.ai/langgraph/deep-agents.

from deepagents import create_deep_agent
# CopilotKitMiddleware comes from CopilotKit's Python SDK (see the docs linked above)

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[get_weather],                  # any @tool-decorated function
    middleware=[CopilotKitMiddleware()],  # for frontend tools and context
    system_prompt="You are a helpful research assistant."
)

The diagram below shows how a user action in the UI is sent via AG-UI to any agent backend and responses flow back as standardized events.

(Image: the AG-UI protocol flow between the UI and the agent backend)


2. Core Components

Here are the core components that we will be using later on:

1) Planning Tools (built in via Deep Agents) - planning/to-do behavior so the agent can break the workflow into steps without you writing a separate planning tool.

# Conceptual example (not required in codebase)
from typing import List
from langchain_core.tools import tool

@tool
def todo_write(tasks: List[str]) -> str:
    formatted = "\n".join([f"- {task}" for task in tasks])
    return f"Todo list created:\n{formatted}"

2) Subagents - let the main agent delegate focused tasks into isolated execution loops. Each sub-agent has its own prompt, tools and context.

subagents = [
    {
        "name": "job-search-agent",
        "description": "Finds relevant jobs and outputs structured job candidates.",
        "system_prompt": JOB_SEARCH_PROMPT,
        "tools": [internet_search],
    }
]

3) Tools - this is how the agent actually does things. Here, finalize() signals completion.

@tool
def finalize() -> dict:
    """Signal that the agent is done."""
    return {"status": "done"}
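
The subagent snippet above references an internet_search tool. Here is a rough sketch of what a Tavily-backed version could look like (the actual tool in the repo may differ; this assumes the tavily-python package and a TAVILY_API_KEY environment variable):

import os

from langchain_core.tools import tool
from tavily import TavilyClient

tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

@tool
def internet_search(query: str, max_results: int = 5) -> list[dict]:
    """Search the web and return raw result snippets for the given query."""
    response = tavily_client.search(query, max_results=max_results)
    return response.get("results", [])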

 

How Deep Agents are implemented (Middleware)

If you are wondering how create_deep_agent() actually injects planning, files and subagents into a normal LangGraph agent, the answer is middleware.

Each feature is implemented as a separate middleware. By default, three are attached:

  • To-do list middleware - adds the write_todos tool and instructions that push the agent to explicitly plan and update a live todo list during multi-step tasks.

  • Filesystem middleware - adds file tools (ls, read_file, write_file, edit_file) so the agent can externalize notes and artifacts instead of stuffing everything into chat history.

  • Subagent middleware - adds the task tool, allowing the main agent to delegate work to subagents with isolated context and their own prompts/tools.

This is what makes Deep Agents feel “pre-wired” without introducing a new runtime. If you want to go deeper, the Deep Agents middleware docs show the exact implementation details.

(Image: components involved)

 

What are we building?

Let's create an agent that:

  • Accepts a resume (PDF) and extracts skills + context
  • Uses Deep Agents to plan and orchestrate sub-agents
  • Searches the web for relevant jobs using tools (Tavily)
  • Streams tool results back to the UI via CopilotKit (AG-UI)

We will see some of these concepts in action as we build the agent.
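
Putting these pieces together, the main agent ends up looking roughly like this. It is only a sketch: JOB_FINDER_PROMPT, internet_search and the exact CopilotKitMiddleware import are assumptions, and the real wiring lives in the repo's backend.

from deepagents import create_deep_agent
# CopilotKitMiddleware comes from CopilotKit's Python SDK
# (see docs.copilotkit.ai/langgraph/deep-agents for the exact import)

job_finder = create_deep_agent(
    model="openai:gpt-4o",
    tools=[internet_search, finalize],    # web search + completion signal
    subagents=subagents,                  # the job-search-agent defined earlier
    middleware=[CopilotKitMiddleware()],  # streams state and events to the frontend
    system_prompt=JOB_FINDER_PROMPT,      # assumed prompt constant
)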


3. Frontend: wiring the agent to the UI

Let's first build the frontend part. This is how our directory will look.

The src directory hosts the Next.js frontend, including the UI, shared components and the CopilotKit API route (/api/copilotkit) used for agent communication.

.
├── src/                               ← Next.js frontend
│   ├── app/
│   │   ├── page.tsx                      
│   │   ├── layout.tsx                 ← CopilotKit provider
│   │   └── api/
│   │       ├── upload-resume/route.ts ← upload endpoint
│   │       └── copilotkit/route.ts    ← CopilotKit AG-UI runtime
│   ├── components/
│   │   ├── ChatPanel.tsx              ← Chat + tool capture
│   │   ├── ResumeUpload.tsx           ← PDF upload UI
│   │   ├── JobsResults.tsx            ← Jobs table renderer
│   │   └── LivePreviewPanel.tsx          
│   └── lib/
│       └── types.ts   
├── package.json                     
├── next.config.ts                   
└── README.md  

(Image: installing the Next.js frontend)

Step 1: CopilotKit Provider & Layout

Install the necessary CopilotKit packages.

npm install @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime
  • @copilotkit/react-core provides the core React hooks and context that connect your UI to an AG-UI compatible agent backend.

  • @copilotkit/react-ui offers ready-made UI components like <CopilotChat /> to build AI chat or assistant interfaces quickly.

  • @copilotkit/runtime is the server-side runtime that exposes an API endpoint and bridges the frontend with an external AG-UI compatible agent backend using HTTP and SSE.

(Image: CopilotKit packages)

The <CopilotKit> component must wrap the Copilot-aware parts of your application. In most cases, it's best to place it around the entire app, like in layout.tsx.

import type { Metadata } from "next";

import { CopilotKit } from "@copilotkit/react-core";
import "./globals.css";
import "@copilotkit/react-ui/styles.css";

export const metadata: Metadata = {
  title: "Job Finder | Deep Agents with CopilotKit",
  description: "A job search assistant powered by Deep Agents and CopilotKit",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={"antialiased"}>
        <CopilotKit runtimeUrl="/api/copilotkit" agent="job_application_assistant">
          {children}
        </CopilotKit>
      </body>
    </html>
  );
}

Here, runtimeUrl="/api/copilotkit" points to the Next.js API route CopilotKit uses to talk to the agent backend.

Each page is wrapped in this context so UI components know which agent to invoke and where to send requests.
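
To make that concrete, a minimal page under this layout could simply render CopilotKit's prebuilt chat component (the repo's ChatPanel.tsx does more, e.g. capturing tool results for the jobs table; the labels here are placeholders):

"use client";

import { CopilotChat } from "@copilotkit/react-ui";

export default function HomePage() {
  // Runs inside the <CopilotKit> provider from layout.tsx,
  // so the chat automatically talks to the job_application_assistant agent.
  return (
    <main className="h-screen">
      <CopilotChat
        labels={{
          title: "Job Finder",
          initial: "Upload your resume and ask me to find matching jobs.",
        }}
      />
    </main>
  );
}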

 

Step 2: Next.js API Route (Proxy to FastAPI)

This Next.js API route acts as a thin proxy between the browser and the Deep Agents backend. It:

  • Accepts CopilotKit requests from the UI
  • Forwards them to the agent over AG-UI
  • Streams agent state and events back to the frontend

Instead of letting the frontend talk to the FastAPI agent directly, all requests go through a single endpoint /api/copilotkit.

import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { LangGraphHttpAgent } from "@copilotkit/runtime/langgraph";
import { NextRequest } from "next/server";

const serviceAdapter = new ExperimentalEmptyAdapter();

const runtime = new CopilotRuntime({
  agents: {
    job_application_assistant: new LangGraphHttpAgent({
      url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
    }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });

  return handleRequest(req);
};

Here's a simple explanation of the above code:

  • The code above registers the job_application_assistant agent.

  • LangGraphHttpAgent : defines a remote LangGraph agent endpoint. It points to the Deep Agents backend running on FastAPI.

  • ExperimentalEmptyAdapter : simple no-op adapter used when the agent backend handles its own LLM calls and orchestration

  • copilotRuntimeNextJSAppRouterEndpoint : small helper that adapts the Copilot runtime to a Next.js App Router API route and returns a handleRequest function
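
Both this route and the resume-upload route in the next step read the backend location from environment variables, so for local development a .env.local like the following works (values assumed; adjust them to wherever your FastAPI agent runs):

# .env.local
LANGGRAPH_DEPLOYMENT_URL=http://localhost:8123
BACKEND_URL=http://localhost:8123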

 

Step 3: Resume upload API endpoint

This API route (src/app/api/upload-resume/route.ts) handles resume uploads from the frontend and forwards them to the FastAPI backend. It:

  • Accepts multipart file uploads from the browser
  • Proxies the file to the backend resume parser
  • Returns extracted text and skills to the UI

Keeping resume parsing in the backend lets the agent reuse the same logic and keeps the frontend lightweight.

import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  try {
    const formData = await req.formData();
    const file = formData.get("file") as File;

    if (!file) {
      return NextResponse.json({ error: "No file provided" }, { status: 400 });
    }

    const backendFormData = new FormData();
    backendFormData.append("file", file);

    const backendUrl = process.env.BACKEND_URL || "http://localhost:8123";
    const response = await fetch(`${backendUrl}/api/upload-resume`, {
      method: "POST",
      body: backendFormData,
    });

    if (!response.ok) {
      throw new Error("Backend upload failed");
    }

    const data = await response.json();
    return NextResponse.json(data);
  } catch (error) {
    return NextResponse.json(
      { error: error instanceof Error ? error.message : "Upload failed" },
      { status: 500 }
    );
  }
}

 

Step 4: Building Key Components

I'm only covering the core logic behind each component, since the full code is long. You can find all the components in the repository under src/components.

These components use CopilotKit hooks (like useCopilotReadable) to tie everything together.
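
For example, once the resume has been parsed, a small component can expose it to the agent as readable context. This is a sketch only; the prop names are assumptions based on the upload response shape:

"use client";

import { useCopilotReadable } from "@copilotkit/react-core";

export function ResumeContext({ resumeText, skills }: { resumeText: string; skills: string[] }) {
  // Makes the parsed resume available to the agent on every request
  useCopilotReadable({
    description: "The candidate's parsed resume text and extracted skills",
    value: { resumeText, skills },
  });
  return null;
}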

✅ Resume Upload Component

This client component handles resume selection and forwards the file to the backend for parsing.

It accepts a PDF/TXT file, POSTs it to /api/upload-resume and lifts the extracted text and skills back up to the parent component.

"use client";
import { useRef, useState } from "react";

type ResumeUploadResponse = { success: boolean; text: string; skills: string[]; filename: string };

export function ResumeUpload({ onUploadSuccess }: { onUploadSuccess(d: ResumeUploadResponse): void }) {
  const [selectedFile, setSelectedFile] = useState<File | null>(null);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const inputRef = useRef<HTMLInputElement>(null);

  const onSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
    setError(null);
    const f = e.target.files?.[0] ?? null;
    if (f && !["application/pdf", "text/plain"].includes(f.type)) {
      setSelectedFile(null);
      setError("Please upload a PDF or TXT file");
      e.target.value = ""; // allow re-selecting same file
      return;
    }
    setSelectedFile(f);
  };

  const onSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!selectedFile) return;

    setIsLoading(true);
    setError(null);

    try {
      const fd = new FormData();
      fd.append("file", selectedFile);

      const res = await fetch("/api/upload-resume", { method: "POST", body: fd });
      if (!res.ok) throw new Error("Upload failed");

      onUploadSuccess((await res.json()) as ResumeUploadResponse);

      setSelectedFile(null);
      if (inputRef.current) inputRef.current.value = "";
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to upload resume");
    } finally {
      setIsLoading(false);
    }
  };

  // Minimal render; the component in the repo adds more styling and states
  return (
    <form onSubmit={onSubmit}>
      <input ref={inputRef} type="file" accept=".pdf,.txt" onChange={onSelect} />
      <button type="submit" disabled={!selectedFile || isLoading}>
        {isLoading ? "Uploading..." : "Upload resume"}
      </button>
      {error && <p role="alert">{error}</p>}
    </form>
  );
}
