How to Build a Research Assistant using Deep Agents


TL;DR

This guide demonstrates building a research assistant using LangChain's Deep Agents, integrating Tavily for web search and CopilotKit for real-time UI streaming. It covers architecture, state flow, and step-by-step implementation with a Next.js frontend and FastAPI backend.

Key Takeaways

  • Deep Agents provide a structured, multi-agent system with built-in planning, filesystem context, and subagent delegation, simplifying complex workflows.
  • Integrating CopilotKit enables real-time streaming of agent actions to a frontend UI, enhancing transparency and user interaction.
  • The architecture uses a Next.js frontend with CopilotKit middleware and a FastAPI backend with Deep Agents, connected via AG-UI for seamless data flow.
  • Key components include planning tools, subagents for task delegation, and filesystem middleware for managing intermediate artifacts during research.
  • The project showcases practical patterns for building AI assistants with live updates, including state management and tool visualization in the UI.

Tags

react, programming, javascript, tutorial

LangChain's Deep Agents provide a new way to build structured, multi-agent systems that can plan, delegate and reason across multiple steps.

They come with planning, a filesystem for context and subagent spawning built in. But connecting such an agent to a live frontend and actually showing what’s happening behind the scenes in real time is still surprisingly hard.

Today, we will build a Deep Agents powered research assistant using Tavily and connect it to a live Next.js UI with CopilotKit, so every step the agent takes streams to the frontend in real time.

You will find the architecture, the key patterns, how state flows between the UI ↔ agent and a step-by-step guide to building this from scratch.

Let's build it.

[Image: deep agents research assistant]


What is covered?

Here are the topics we will cover in detail.

  1. What are Deep Agents?
  2. Core Components
  3. What are we building?
  4. Building the Frontend
  5. Building the Backend (FastAPI + Deep Agents + AG-UI)
  6. Running the Application
  7. Data Flow (Frontend ↔ Agent)

Here are the GitHub repository, the deployed link and the official docs if you want to explore on your own.


1. What are Deep Agents?

Most agents today are just “LLM in a loop + tools”. That works, but it tends to be shallow: no explicit plan, weak long-horizon execution and messy state as runs get longer.

Popular agents like Claude Code, Deep Research and Manus get around this by following a common pattern: they plan first, externalize working context (often via files or a shell) and delegate isolated pieces of work to sub-agents.

Deep Agents package those primitives into a reusable agent runtime.

Instead of designing your own agent loop from scratch, you call create_deep_agent(...) and get a pre-wired execution graph that already knows how to plan, delegate and manage state across many steps.

[Image: deep agents]

Credit: LangChain

 

At a practical level, a Deep Agent created via create_deep_agent is just a LangGraph graph. There’s no separate runtime or hidden orchestration layer.

The "context management" in deep agents is also very practical -- they offload large tool payloads to the filesystem and only fall back to summarization when token usage approaches the model’s context window. You can read more on Context Management for Deep Agents blog by LangChain.

[Image: Offloading large tool results]

The mental model (how it runs)

Conceptually, the execution flow looks like this:

User goal
  ↓
Deep Agent (LangGraph StateGraph)
  ├─ Plan: write_todos → updates "todos" in state
  ├─ Delegate: task(...) → runs a subagent with its own tool loop
  ├─ Context: ls/read_file/write_file/edit_file → persists working notes/artifacts
  ↓
Final answer

That gives you a usable structure for “plan → do work → store intermediate artifacts → continue” without inventing your own plan format, memory layer or delegation protocol.

You can check the official docs.

 

Where CopilotKit Fits

Deep Agents push key parts into explicit state (e.g. todos + files + messages), which makes runs easier to inspect. That explicit state is also what makes CopilotKit integration possible.

CopilotKit is a frontend runtime that keeps UI state in sync with agent execution by streaming agent events and state updates in real time (using AG-UI under the hood).

This middleware (CopilotKitMiddleware) is what allows the frontend to stay in lock-step with the agent as it runs. You can read the docs at docs.copilotkit.ai/langgraph/deep-agents.

from deepagents import create_deep_agent
from copilotkit import CopilotKitMiddleware  # import path per docs.copilotkit.ai/langgraph/deep-agents

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    middleware=[CopilotKitMiddleware()],  # for frontend tools and context
    system_prompt="You are a helpful research assistant."
)

2. Core Components

Here are the core components that we will be using later on:

1) Planning Tools (built-in via Deep Agents) - planning/to-do behavior so the agent can break the workflow into steps without you writing a separate planning tool.

# Conceptual example (not required in codebase)
from typing import List

from langchain_core.tools import tool

@tool
def todo_write(tasks: List[str]) -> str:
    """Create a formatted todo list from a list of tasks."""
    formatted = "\n".join([f"- {task}" for task in tasks])
    return f"Todo list created:\n{formatted}"

2) Subagents - let the main agent delegate focused tasks into isolated execution loops. Each sub-agent has its own prompt, tools and context.

subagents = [
    {
        "name": "job-search-agent",
        "description": "Finds relevant jobs and outputs structured job candidates.",
        "system_prompt": JOB_SEARCH_PROMPT,
        "tools": [internet_search],
    }
]

3) Tools - this is how the agent actually does things. Here, finalize() signals completion.

from langchain_core.tools import tool

@tool
def finalize() -> dict:
    """Signal that the agent is done."""
    return {"status": "done"}

 

How Deep Agents are implemented (Middleware)

If you are wondering how create_deep_agent() actually injects planning, files and subagents into a normal LangGraph agent, the answer is middleware.

Each feature is implemented as a separate middleware. By default, three are attached:

  • To-do list middleware - adds the write_todos tool and instructions that push the agent to explicitly plan and update a live todo list during multi-step tasks.

  • Filesystem middleware - adds file tools (ls, read_file, write_file, edit_file) so the agent can externalize notes and artifacts instead of stuffing everything into chat history.

  • Subagent middleware - adds the task tool, allowing the main agent to delegate work to subagents with isolated context and their own prompts/tools.

This is what makes Deep Agents feel “pre-wired” without introducing a new runtime. If you want to go deeper, the middleware docs show the exact implementation details.
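The composition idea can be sketched language-agnostically. This is not the real Python create_deep_agent API; the names below are illustrative, and the sketch only models the one property that matters here: each middleware contributes its own tools to the agent.

```typescript
// Conceptual sketch of middleware composition: each middleware adds its
// tools to the agent spec, mirroring how the three defaults attach
// write_todos, the file tools and the task tool.
type Middleware = (tools: string[]) => string[];

const todoListMiddleware: Middleware = (t) => [...t, "write_todos"];
const filesystemMiddleware: Middleware = (t) =>
  [...t, "ls", "read_file", "write_file", "edit_file"];
const subagentMiddleware: Middleware = (t) => [...t, "task"];

function buildAgentTools(userTools: string[], middleware: Middleware[]): string[] {
  // Fold each middleware over the tool list, in order
  return middleware.reduce((tools, mw) => mw(tools), userTools);
}
```

Because each feature is just another middleware in the list, dropping or swapping one (say, removing subagents) is a one-line change rather than a rewrite of the agent loop.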

[Image: components involved]


3. What are we building?

Let's create an agent that:

  • Accepts a research question from the user
  • Uses Deep Agents to plan multi-step work and orchestrate sub-agents
  • Searches the web using Tavily
  • Writes intermediate research artifacts using the filesystem middleware
  • Streams tool results back to the UI via CopilotKit (AG-UI)

The interface is a two-panel app where the left side is a CopilotKit chat UI and the right side is a live workspace that shows the agent’s plan, generated files and sources as the agent works.

Here's a simplified call request → response flow of what will happen:

[User asks research question]
        ↓
Next.js Frontend (CopilotChat + Workspace)
        ↓
CopilotKit Runtime → LangGraphHttpAgent
        ↓
Python Backend (FastAPI + AG-UI)
        ↓
Deep Agent (research_assistant)
    ├── write_todos        (planning, built-in)
    ├── write_file         (filesystem, built-in)
    ├── read_file          (filesystem, built-in)
    └── research(query)
            └── internal Deep Agent [thread-isolated]
                    └── internet_search (Tavily)

We will see the concepts in action as we build the agent.


4. Frontend: wiring the agent to the UI

Let's first build the frontend part. This is how our directory will look.

The src directory hosts the Next.js frontend, including the UI, shared components and the CopilotKit API route (/api/copilotkit) used for agent communication.

.
├── src/                             ← Next.js frontend
│   ├── app/
│   │   ├── page.tsx          
│   │   ├── layout.tsx               ← CopilotKit provider
│   │   └── api/
│   │       └── copilotkit/route.ts  ← CopilotKit AG-UI runtime
│   ├── components/
│   │   ├── FileViewerModal.tsx      ← Markdown file viewer
│   │   ├── WorkSpace.tsx            ← Research progress display
│   │   └── ToolCard.tsx             ← Tool call visualizer
├── lib/
│   └── types.ts
├── package.json                     
├── next.config.ts                   
└── README.md  

If you don’t have a frontend, you can create a new Next.js app with TypeScript.

# creates a Next.js app
npx create-next-app@latest .

[Image: installing the Next.js frontend]

Step 1: CopilotKit Provider & Layout

Install the necessary CopilotKit packages.

npm install @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime
  • @copilotkit/react-core provides the core React hooks and context that connect your UI to an AG-UI compatible agent backend.

  • @copilotkit/react-ui offers ready-made UI components like <CopilotChat /> to build AI chat or assistant interfaces quickly.

  • @copilotkit/runtime is the server-side runtime that exposes an API endpoint and bridges the frontend with an AG-UI compatible backend (e.g., a LangGraph HTTP agent).

[Image: CopilotKit packages]

The <CopilotKit> component must wrap the Copilot-aware parts of your application. In most cases, it's best to place it around the entire app, like in layout.tsx.

import type { Metadata } from "next";

import { CopilotKit } from "@copilotkit/react-core";
import "./globals.css";
import "@copilotkit/react-ui/styles.css";

export const metadata: Metadata = {
  title: "Deep Research Assistant | CopilotKit Deep Agents Demo",
  description: "A research assistant powered by Deep Agents and CopilotKit - demonstrating planning, memory, subagents, and generative UI",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className="antialiased">
        <CopilotKit runtimeUrl="/api/copilotkit" agent="research_assistant">
          {children}
        </CopilotKit>
      </body>
    </html>
  );
}

Here, runtimeUrl="/api/copilotkit" points to the Next.js API route CopilotKit uses to talk to the agent backend.

Each page is wrapped in this context so UI components know which agent to invoke and where to send requests.

 

Step 2: Next.js API Route (Proxy to FastAPI)

This Next.js API route acts as a thin proxy between the browser and the Deep Agents backend. It:

  • Accepts CopilotKit requests from the UI
  • Forwards them to the agent over AG-UI
  • Streams agent state and events back to the frontend

Instead of letting the frontend talk to the FastAPI agent directly, all requests go through a single endpoint /api/copilotkit.

import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { LangGraphHttpAgent } from "@copilotkit/runtime/langgraph";
import { NextRequest } from "next/server";

// Empty adapter since the LLM is handled by the remote agent
const serviceAdapter = new ExperimentalEmptyAdapter();

// Configure CopilotKit runtime with the Deep Agents backend
const runtime = new CopilotRuntime({
  agents: {
    research_assistant: new LangGraphHttpAgent({
      url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
    }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });

  return handleRequest(req);
};

Here's a simple explanation of the above code:

  • The code above registers the research_assistant agent.

  • LangGraphHttpAgent: defines a remote LangGraph agent endpoint. It points to the Deep Agents backend running on FastAPI.

  • ExperimentalEmptyAdapter: a simple no-op adapter used when the agent backend handles its own LLM calls and orchestration.

  • copilotRuntimeNextJSAppRouterEndpoint: a small helper that adapts the Copilot runtime to a Next.js App Router API route and returns a handleRequest function.
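For the route above to reach a non-local backend, the deployment URL comes from an environment variable. A minimal `.env.local` fragment might look like this; the variable name matches the route code, and the port assumes the local default used there:

```shell
# .env.local - point the CopilotKit runtime at the Deep Agents backend
LANGGRAPH_DEPLOYMENT_URL=http://localhost:8123
```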

 

Step 3: Types (Research State)

Before building components, let's define shared state for todos, files and sources in lib/types.ts. These are the contracts between the tool results from the agent and the local React state.

// Uses local state + useDefaultTool instead of CoAgent (avoids type issues with Python FilesystemMiddleware)

export interface Todo {
  id: string;
  content: string;
  status: "pending" | "in_progress" | "completed";
}

export interface ResearchFile {
  path: string;
  content: string;
  createdAt: string;
}

// Sources found via internet_search (includes content)
export interface Source {
  url: string;
  title: string;
  content?: string;
  status: "found" | "scraped" | "failed";
}

export interface ResearchState {
  todos: Todo[];
  files: ResearchFile[];
  sources: Source[];
}

export const INITIAL_STATE: ResearchState = {
  todos: [],
  files: [],
  sources: [],
};

Instead of dumping raw tool JSON into chat, each tool result routes into a dedicated state slot - write_todos updates todos, write_file appends to files and research appends to sources. This becomes the foundation of the Workspace panel.
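That routing can be sketched as a pure function over the state types defined earlier. The exact hook wiring (useDefaultTool) lives in the repo; this is a simplified, self-contained version, and the argument shapes (todos, file_path, sources) are assumptions based on the tools described above.

```typescript
// Simplified sketch of routing tool results into dedicated state slots.
// Types mirror lib/types; argument shapes are illustrative assumptions.
interface Todo { id: string; content: string; status: "pending" | "in_progress" | "completed"; }
interface Source { url: string; title: string; status: "found" | "scraped" | "failed"; }
interface ResearchState {
  todos: Todo[];
  files: { path: string; content: string }[];
  sources: Source[];
}

function routeToolResult(
  state: ResearchState,
  toolName: string,
  args: Record<string, unknown>
): ResearchState {
  switch (toolName) {
    case "write_todos":
      // Replace the plan wholesale with the agent's latest todo list
      return { ...state, todos: (args.todos as Todo[]) ?? state.todos };
    case "write_file":
      // Append the new artifact to the files panel
      return {
        ...state,
        files: [...state.files, { path: args.file_path as string, content: args.content as string }],
      };
    case "research":
      // Append any sources the research subagent found
      return { ...state, sources: [...state.sources, ...((args.sources as Source[]) ?? [])] };
    default:
      return state; // unknown tools don't touch workspace state
  }
}
```

Keeping the function pure (new state in, new state out) is what lets React re-render the Workspace panel on every streamed tool event without manual subscriptions.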

 

Step 4: Building Key Components

I'm only covering the core logic behind each component since the overall code is huge. You can find all the components in the repository at src/components.

✅ ToolCard Component

This is the client component that renders every tool call inline inside chat. It has two modes:

  • SpecializedToolCard for known tools (write_todos, research, write_file, read_file) with icons, status indicators and result previews

  • DefaultToolCard for unknown tools that fall back to expandable JSON.

"use client";

import { useState } from "react";
import { Pencil, ClipboardList, Search, Save, BookOpen, Check, ChevronDown } from "lucide-react";

const TOOL_CONFIG = {
  write_todos: {
    icon: Pencil,
    getDisplayText: () => "Updating research plan...",
    getResultSummary: (result, args) => {
      const todos = (args as { todos?: unknown[] })?.todos;
      if (Array.isArray(todos)) {
        return `${todos.length} todo${todos.length !== 1 ? "s" : ""} updated`;
      }
      return null;
    },
  },
  research: {
    icon: Search,
    getDisplayText: (args) =>
      `Researching: ${((args.query as string) || "...").slice(0, 50)}${(args.query as string)?.length > 50 ? "..." : ""}`,
    getResultSummary: (result) => {
      if (result && typeof result === "object" && "sources" in result) {
        const { sources } = result as { summary: string; sources: unknown[] };
        return `Found ${sources.length} source${sources.length !== 1 ? "s" : ""}`;
      }
      return "Research complete";
    },
  },
  write_file: {
    icon: Save,
    // args.file_path is an assumed argument name; the excerpt is truncated
    // here - the full component is in src/components/ToolCard.tsx
    getDisplayText: (args) => `Saving ${(args?.file_path as string) ?? "file"}...`,
  },
  // ...remaining entries (read_file, etc.) follow the same pattern
};
