How to Build a LangGraph Research Agent that Embeds Dynamic Charts via MCP Apps (CopilotKit & Tako)


TL;DR

This article details building an AI research assistant using LangGraph for orchestration, CopilotKit for UI, and MCP Apps like Tako for dynamic chart embedding. It covers the tech stack, architecture, and step-by-step implementation to create a chat + canvas system that streams real-time updates and integrates interactive visualizations.

Key Takeaways

  • The project combines LangGraph for agent orchestration, CopilotKit for frontend UI, and MCP Apps (e.g., Tako) to embed interactive charts directly into research reports.
  • Key components include a stateful workflow with parallel retrieval (web search via Tavily and chart queries via Tako), real-time streaming using AG-UI protocol, and generative UI for dynamic content rendering.
  • Implementation involves setting up a Next.js frontend with CopilotKit hooks, a Python backend with FastAPI and LangGraph, and environment configuration for API keys to enable seamless agent-to-UI communication.

Tags

#programming #tutorial #javascript #mcp

AI research assistants are getting smarter, but turning that reasoning into something usable is still hard.

Real research involves more than text: searching the web, exploring structured data, comparing sources, and gradually assembling a report. Doing that well requires state, tools, and UI -- not just a chat box.

Today, we will build a chat + canvas research assistant that searches the web, queries structured data via MCP, and embeds interactive charts directly into the report, while streaming progress to the frontend in real time.

We will use CopilotKit (Next.js) for the frontend, LangGraph to orchestrate the agent workflow, and MCP (Model Context Protocol) to connect external tools like Tako, which returns embeddable chart UI.

Check out CopilotKit's GitHub ⭐️

We'll walk through the architecture, the key concepts, how state flows between the UI and the agent, how MCP Apps fit into the system, and a step-by-step guide to building this from scratch.

Let’s build it.



What is covered?

Here's what we will cover:

  1. What are we building?
  2. Tech Stack and Architecture
  3. Frontend: Wiring the Agent to the UI (CopilotKit)
  4. Agent Orchestration (LangGraph)
  5. Backend: MCP Integration & Tools
  6. Running the Application
  7. Complete Data Flow between UI ↔ agent

Here’s the GitHub Repo if you are interested in exploring it yourself.


1. What are we building?

We are building an AI research assistant with a live chat + canvas UI that searches, collects sources, and assembles a report while streaming progress to the frontend in real time.

It will:

  • Turn a user question into a research plan (research question + data sub-questions)
  • Run parallel retrieval: Tavily for web search + Tako via MCP Apps for charts
  • Deduplicate results into a clean resources list
  • Generate a report with [CHART:title] placeholders and then embed the real charts inline
  • Stream intermediate state (logs/resources/report/charts) so the UI updates as the agent works.

We will see all of these concepts in action as we build the agent.

Core Components

A few days ago, the Anthropic team shipped MCP Apps as an official MCP extension, meaning tools can return interactive UI that the host can render, not just JSON or text.

That’s the pattern we will use here: our agent will call Tako (over MCP) to fetch charts as embeddable UI, so interactive visualizations can drop straight into the report/canvas instead of us building chart components.


At a high level, this project is built by combining:

  • CopilotKit for the agent chat + canvas UI
  • AG‑UI for streaming agent ↔ UI events (protocol)
  • LangGraph for stateful agent orchestration
  • MCP Apps for external tools
  • Tako for charts (a data search engine that returns embeddable chart UI)
  • Tavily for web search

For context: AG-UI (Agent–User Interaction Protocol) is an event-based protocol that standardizes this “agent ↔ frontend” real-time communication.

Because it’s transport-agnostic (SSE, WebSockets, etc.), the same frontend can work across different agent backends without custom wiring. CopilotKit uses AG-UI as its synchronization layer.
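To make this concrete, here's roughly what a run looks like as AG-UI events (an illustrative sketch; the event type names follow the AG-UI spec, but the payloads are abridged):

// Illustrative AG-UI event stream for a single agent run (simplified)
const events = [
  { type: "RUN_STARTED", threadId: "t1", runId: "r1" },
  // Assistant text streams in as message deltas
  { type: "TEXT_MESSAGE_START", messageId: "m1", role: "assistant" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "m1", delta: "Searching sources..." },
  { type: "TEXT_MESSAGE_END", messageId: "m1" },
  // Shared state sync: a full snapshot, then incremental JSON Patch deltas
  { type: "STATE_SNAPSHOT", snapshot: { logs: [], resources: [], report: "" } },
  { type: "STATE_DELTA", delta: [{ op: "add", path: "/logs/0", value: { message: "Searching the web...", done: false } }] },
  { type: "RUN_FINISHED", threadId: "t1", runId: "r1" },
];

This is what the CopilotKit hooks consume under the hood: snapshots seed the shared state, and deltas patch it as the agent works.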


Here's a simplified request → response flow of what will happen:

User question
   ↓
CopilotKit UI (chat + canvas)
   ↓
LangGraph agent workflow
   ├─ chat: interpret intent + plan
   ├─ search: parallel retrieval (Tavily + Tako via MCP Apps)
   └─ report: narrative + [CHART:*] markers
   ↓
Streaming state updates (AG‑UI events)
   ↓
Canvas renders report + embedded charts

2. Tech Stack and Architecture

At the core, we are going to use this stack for building the agent:

  • Next.js + TypeScript for the frontend app
  • CopilotKit (@copilotkit/react-core, @copilotkit/react-ui, @copilotkit/runtime) for the chat + canvas UI
  • LangGraph (Python) for stateful agent orchestration
  • FastAPI + Uvicorn to serve the agent over HTTP
  • Tavily for web search
  • Tako (via MCP) for interactive, embeddable charts

There are also other libraries used, like react-markdown & remark-gfm for report rendering, langgraph-checkpoint-sqlite for state persistence, and Shadcn UI for components.

See package.json and agents/python/pyproject.toml in the repo for the complete dependency list.


Project structure

This is how our directory will look.

The agents/python/ directory hosts the Python LangGraph agent, exposed via FastAPI, which orchestrates the workflow (chat, search, report), streams state updates, and calls external tools (Tavily, Tako via MCP).

The src/ directory hosts the Next.js frontend, including the UI components, shared types, and the CopilotKit API route (/api/copilotkit/route.ts) that bridges the frontend to the agent backend.

.
├── src/                           ← Next.js frontend (TypeScript)
│   ├── app/
│   │   ├── page.tsx               ← CopilotKit provider & model selector
│   │   ├── Main.tsx               ← Chat + ResearchCanvas split layout
│   │   └── api/
│   │       └── copilotkit/
│   │           └── route.ts      ← CopilotRuntime bridge to agent
│   ├── components/
│   │   ├── ResearchCanvas.tsx    ← Main canvas (orchestrates report + resources)
│   │   ├── Resources.tsx         ← Displays Tavily + Tako resources list
│   │   ├── MarkdownRenderer.tsx  ← Renders report + embeds charts
│   │   └── ui/                   ← Reusable Shadcn UI components
│   └── lib/
│       ├── types.ts                     ← AgentState type
│       ├── utils.ts                     ← Utility functions
│       └── model-selector-provider.tsx  ← Model selection context
│
├── agents/
│   ├── python/                        ← Python LangGraph agent (primary)
│   │   ├── src/
│   │   │   ├── agent.py               ← StateGraph definition & compile
│   │   │   └── lib/
│   │   │       ├── state.py           ← AgentState (Pydantic)
│   │   │       ├── model.py       ← LLM factory (OpenAI / Anthropic / Gemini)
│   │   │       ├── chat.py            ← Chat node & tool definitions
│   │   │       ├── search.py          ← Parallel Tavily + Tako search
│   │   │       ├── mcp_integration.py ← MCP client & iframe helpers
│   │   │       ├── download.py        ← Download node
│   │   │       └── delete.py          ← Delete node
│   │   ├── main.py                    ← FastAPI/Uvicorn entrypoint
│   │   ├── requirements.txt
│   │   └── pyproject.toml
│   │
│   └── src/                       ← TypeScript agent (optional, local dev)
│       └── server.ts              ← Express + CopilotRuntime
│
├── package.json                   ← Frontend deps & scripts
├── .env.example
├── .env.local
└── README.md

Here's the GitHub repository, and the project is deployed live at tako-copilotkit.vercel.app if you want to explore it yourself. I will cover the implementation and all key concepts in the following sections.

The easiest way to follow along is to clone the repo.

git clone https://github.com/TakoData/tako-copilotkit.git
cd tako-copilotkit

Add necessary API Keys

Copy the environment template (.env.example) to create a .env.local in the root directory:

cp .env.example .env.local

Add your OpenAI API Key, Tavily API Key, and Tako API Key to the file.

OPENAI_API_KEY=sk-proj-...
TAVILY_API_KEY=tvly-dev-...
TAKO_API_TOKEN=your_api_token_here
TAKO_MCP_URL=https://mcp.tako.com  # URL of Tako's MCP server
TAKO_URL=https://tako.com  # URL of Tako's main API

The Tako/MCP integration is an optional data source for the research agent. If you want to query charts or structured datasets via a Tako MCP server, provide TAKO_API_TOKEN and the related URLs.

Otherwise, you can leave these unset and the agent will continue to work normally. For this tutorial, we will be using it.

You can generate each key from the respective dashboard: OpenAI, Tavily, and Tako.

3. Frontend: wiring the agent to the UI

Let's first build the frontend part.

As outlined in the project structure, everything frontend lives in src/, with the CopilotKit API route (/api/copilotkit/route.ts) bridging the UI to the Python agent backend.

At a high level, the frontend is responsible for:

  • Sending user queries to the agent backend via CopilotChat
  • Receiving and rendering real-time state updates from the agent
  • Embedding Tako chart iframes in the rendered report
  • Managing the CopilotKit runtime bridge between UI and Python agent

If you are building this from scratch, the easiest approach is to copy the existing package.json from the repo.

It already includes all the required dependencies for CopilotKit, LangGraph integration, UI components, and local dev tooling, so you don’t have to assemble them manually.

I'm only covering the core frontend dependencies you actually need to understand.

Step 1: CopilotKit Provider & Layout

Install the necessary packages:

npm install @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime
  • @copilotkit/react-core: Core React hooks and context that connect your UI to the agent backend via AG-UI protocol

  • @copilotkit/react-ui: Ready-made UI components like <CopilotChat /> for building AI chat interfaces

  • @copilotkit/runtime: Server-side runtime that exposes an API endpoint and bridges the frontend with the LangGraph agent using HTTP and SSE

Everything else in the repo (radix, tailwind, react-split, etc.) is there to support layout, styling, and developer experience -- not the core agent wiring.
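For reference, the runtime bridge at src/app/api/copilotkit/route.ts looks roughly like this. This is a minimal sketch: the repo version also handles the lgcDeploymentUrl and coAgentsModel query params you'll see in page.tsx below, and the agent URL/port here is an assumption (match whatever main.py binds to):

// src/app/api/copilotkit/route.ts (simplified sketch)
import {
  CopilotRuntime,
  OpenAIAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { NextRequest } from "next/server";

// Point the runtime at the Python agent server (URL/port assumed)
const runtime = new CopilotRuntime({
  remoteEndpoints: [{ url: "http://localhost:8000/copilotkit" }],
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter: new OpenAIAdapter(),
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};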

The <CopilotKit> component must wrap the agent-aware parts of your application and point to the runtime endpoint so it can communicate with the agent backend.

This is the main entry point (page.tsx):

// page.tsx
"use client";

import { CopilotKit } from "@copilotkit/react-core";
import Main from "./Main";
import {
  ModelSelectorProvider,
  useModelSelectorContext,
} from "@/lib/model-selector-provider";

export default function ModelSelectorWrapper() {
  return (
    <ModelSelectorProvider>
      <Home />
    </ModelSelectorProvider>
  );
}

function Home() {
  const { agent, lgcDeploymentUrl } = useModelSelectorContext();

  // This logic is implemented to demonstrate multi-agent frameworks in this demo project.
  // There are cleaner ways to handle this in a production environment.
  const runtimeUrl = lgcDeploymentUrl
    ? `/api/copilotkit?lgcDeploymentUrl=${lgcDeploymentUrl}`
    : `/api/copilotkit${
        agent.includes("crewai") ? "?coAgentsModel=crewai" : ""
      }`;

  return (
    <div style={{ height: "100vh", overflow: "hidden" }}>
      <CopilotKit runtimeUrl={runtimeUrl} showDevConsole={false} agent={agent}>
        <Main />
      </CopilotKit>
    </div>
  );
}

Here, ModelSelectorProvider is a convenience for switching agents/models in development.


Step 2: Chat + Canvas Layout

The Main.tsx component sets up the core UI. It consists of:

  • the chat interface (CopilotChat)
  • the research canvas (where results & charts appear)
  • state binding to the agent

// app/Main.tsx
import { useCoAgent } from "@copilotkit/react-core";
import { CopilotChat } from "@copilotkit/react-ui";
import Split from "react-split";
import { ResearchCanvas } from "@/components/ResearchCanvas";
import { ChatInputWithModelSelector } from "@/components/ChatInputWithModelSelector";
import { AgentState } from "@/lib/types";
import { useModelSelectorContext } from "@/lib/model-selector-provider";

export default function Main() {
  const { model, agent } = useModelSelectorContext();

  const { state, setState } = useCoAgent<AgentState>({
    name: agent,
    initialState: {
      model,
      research_question: "",
      resources: [],
      report: "",
      logs: [],
    },
  });

  return (
    <div style={{ height: "100%", display: "flex", flexDirection: "column" }}>
      <h1>Research Helper</h1>

      <Split sizes={[30, 70]} style={{ flex: 1, display: "flex" }}>
        {/* Chat Panel */}
        <CopilotChat
          Input={ChatInputWithModelSelector}
          onSubmitMessage={async () => {
            // Clear stale logs from the previous run, then give the
            // state update a tick to flush before the new run starts
            setState({ ...state, logs: [] });
            await new Promise((r) => setTimeout(r, 30));
          }}
        />

        {/* Canvas Panel */}
        <ResearchCanvas />
      </Split>
    </div>
  );
}


Here's what's happening:

  • useCoAgent<AgentState>: bi-directional state synchronization between the UI and the agent. When the agent emits state updates via copilotkit_emit_state(), this hook automatically picks them up (the AgentState shape is sketched after this list).

  • CopilotChat: drop-in chat UI that sends messages to the agent and renders tool calls inline

  • ResearchCanvas: custom component that renders the streaming report, resources list, and embedded charts

  • Split: provides a resizable split-pane layout
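
For reference, here's a sketch of the AgentState type from src/lib/types.ts. The top-level field names follow the initialState above; the exact Resource and log shapes are assumptions, so check types.ts in the repo:

// src/lib/types.ts (sketch -- field names follow the initialState above)
export type Resource = {
  url: string;
  title: string;
  description: string;
};

export type AgentState = {
  model: string;
  research_question: string;
  report: string;
  resources: Resource[];
  logs: { message: string; done: boolean }[];
};

Because the Python agent emits state with the same field names, useCoAgent can patch this object directly as updates stream in.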


Step 3: Building Key Components

I'm only covering the core logic behind the main components, since the full codebase is large. You can find all the components in the repository under src/components.

These components use CopilotKit hooks (like useCoAgentStateRender) to tie everything together.

✅ Research Canvas Component

ResearchCanvas.tsx is where the agent’s current state is rendered:

  • the accumulating report text
  • linked resources
  • embedded charts (iframes from MCP Apps like Tako)

This component listens to the agent state and displays elements as they arrive. It translates [CHART:title] markers into real embedded charts. This pattern is part of CopilotKit's support for generative UI.

// core logic
import { useCoAgent, useCoAgentStateRender, useCopilotAction } from "@copilotkit/react-core";
import { MarkdownRenderer } from "./MarkdownRenderer";
import { Resources } from "./Resources";
import { Progress } from "./Progress"; // progress-log component from the repo
import { AgentState, Resource } from "@/lib/types";
import { useRef } from "react";

export function ResearchCanvas() {
  const { state, setState } = useCoAgent<AgentState>({
    name: "research_agent",
  });

  // Use refs to prevent flicker during streaming updates
  const lastReportRef = useRef<string>("");
  const lastResourcesRef = useRef<Resource[]>([]);

  if (state.report) lastReportRef.current = state.report;
  if (state.resources?.length) lastResourcesRef.current = state.resources;

  const report = state.report || lastReportRef.current;
  const resources = state.resources || lastResourcesRef.current;

  // Render progress logs during execution
  useCoAgentStateRender({
    name: "research_agent",
    render: ({ state, status }) => {
      if (state.logs?.length) return <Progress logs={state.logs} />;
      return null;
    },
  });

  // Generative UI: Agent requests deletion confirmation from user
  useCopilotAction({
    name: "DeleteResources",
    description: "Prompt user for resource delete confirmation",
    available: "remote",
    parameters: [{ name: "urls", type: "string[]" }],
    renderAndWait: ({ args, handler }) => (
      <div>
        <h3>Delete these resources?</h3>
        <Resources resources={resources.filter(r => args.urls.includes(r.url))} />
        <button onClick={() => handler("YES")}>Delete</button>
        <button onClick={() => handler("NO")}>Cancel</button>
      </div>
    ),
  });

  return (
    <div>
      <h2>Research Question</h2>
      <div>{state.research_question || "Agent will identify your question..."}</div>

      {/* Resources Panel */}
      <Resources resources={resources} />

      {/* Report Panel with Embedded Charts */}
      <MarkdownRenderer content={report} />
    </div>
  );
}
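
The [CHART:title] substitution itself lives in MarkdownRenderer.tsx. Here's the core idea as a minimal sketch. It assumes the charts arrive as a list of { title, embedUrl } pairs (that prop shape is an assumption), and the repo version renders the text segments through react-markdown with remark-gfm rather than as plain paragraphs:

// Sketch of the chart-marker substitution (simplified)
import React from "react";

type Chart = { title: string; embedUrl: string }; // assumed shape

export function MarkdownRenderer({
  content,
  charts = [],
}: {
  content: string;
  charts?: Chart[];
}) {
  // Split on [CHART:title] markers; with a capture group in the regex,
  // odd-indexed segments are the chart titles themselves
  const segments = content.split(/\[CHART:([^\]]+)\]/g);

  return (
    <div>
      {segments.map((segment, i) => {
        if (i % 2 === 0) {
          // Narrative text (the repo renders this with react-markdown)
          return <p key={i}>{segment}</p>;
        }
        // Chart title -- look up the matching Tako embed and drop it inline
        const chart = charts.find((c) => c.title === segment.trim());
        return chart ? (
          <iframe
            key={i}
            src={chart.embedUrl}
            title={chart.title}
            width="100%"
            height={400}
          />
        ) : null;
      })}
    </div>
  );
}

Because Tako's MCP App returns charts as embeddable UI, the canvas just drops the returned iframe inline -- no custom chart components required.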