Build a Frontend for your Microsoft Agent Framework (Python) Agents with AG-UI
TL;DR
Learn to build a frontend for Microsoft Agent Framework (Python) agents using AG-UI and CopilotKit. Set up the backend with AG-UI protocol and integrate it with a CopilotKit-powered frontend for seamless communication.
Key Takeaways
- Use the CLI command 'npx copilotkit@latest init -m microsoft-agent-framework-py' to quickly set up a full-stack agent with backend and frontend.
- Define agent state and tools using JSON schemas and @ai_function decorators to enable AI-driven interactions and UI updates.
- Integrate the agent with the AG-UI protocol by wrapping it in AgentFrameworkAgent middleware for state synchronization and frontend coordination.
In this guide, you will learn how to build a frontend for your Microsoft Agent Framework (Python) agents using the AG-UI Protocol and CopilotKit. Microsoft Agent Framework (Python) powers the AI agent backend, CopilotKit powers the frontend, and AG-UI provides the bridge that lets the two communicate.
Before we jump in, here is what we will cover:
What is the Microsoft Agent Framework?
Setting up a Microsoft Agent Framework + AG-UI + CopilotKit agent using CLI
Integrating your Microsoft Agent Framework agent with AG-UI protocol in the backend
Building a frontend for your Microsoft Agent Framework + AG-UI agent using CopilotKit
Here is a preview of what you can build using Microsoft Agent Framework (Python) + AG-UI + CopilotKit.
What is the Microsoft Agent Framework?
The Microsoft Agent Framework (MAF) is an open-source software development kit (SDK) and runtime for building and deploying AI agents and multi-agent workflows in both Python and .NET.
MAF is considered the next generation of Microsoft's previous agent-related projects, unifying the strengths of AutoGen (for innovative multi-agent orchestration) and Semantic Kernel (for enterprise-grade features and production readiness).
The core purpose of the framework is to allow developers to create sophisticated AI applications that involve:
Individual AI agents that use Large Language Models (LLMs) to reason, make decisions, and use tools.
Complex multi-agent systems where multiple agents collaborate to solve complex, multi-step tasks.
The Python implementation offers a developer-friendly experience that is highly suitable for building and testing agentic systems quickly.
If you want to dive deeper into how Microsoft Agent Framework works and its setup, check out the docs here: Microsoft Agent Framework docs.
Prerequisites
Before you begin, you'll need the following:
OpenAI or Azure OpenAI credentials (for the Microsoft Agent Framework agent)
Python 3.12+
Node.js 20+
Any of the following package managers:
- pnpm (recommended)
- npm
- yarn
- bun
Setting up a Microsoft Agent Framework + AG-UI + CopilotKit agent using CLI
In this section, you will learn how to set up a full-stack Microsoft Agent Framework (Python) agent using a CLI command that sets up the backend using AG-UI protocol and the frontend using CopilotKit.
Let’s get started.
Step 1: Run CLI command
If you don’t already have a Microsoft Agent Framework agent, you can set up one quickly by running the CLI command below in your terminal.
npx copilotkit@latest init -m microsoft-agent-framework-py
Then give your project a name when prompted.
Step 2: Install dependencies
Once your project has been created successfully, install dependencies using your preferred package manager:
# Using pnpm (recommended)
pnpm install
# Using npm
npm install
# Using yarn
yarn install
# Using bun
bun install
Step 3: Set up your agent credentials
After installing the dependencies, set up your agent credentials. The backend automatically uses Azure when the Azure env vars below are present; otherwise, it falls back to OpenAI.
To set up your agent credentials, create a .env file inside the agent folder with one of the following configurations:
OpenAI:
OPENAI_API_KEY=sk-...your-openai-key-here...
OPENAI_CHAT_MODEL_ID=gpt-4o-mini
Azure OpenAI:
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=gpt-4o-mini
# If you are not relying on az login:
# AZURE_OPENAI_API_KEY=...
Step 4: Run development server
Then start the development server using your preferred package manager:
# Using pnpm
pnpm dev
# Using npm
npm run dev
# Using yarn
yarn dev
# Using bun
bun run dev
Once the development server is running, navigate to http://localhost:3000/ and you should see your Microsoft Agent Framework (Python) + AG-UI + CopilotKit agent up and running.
Congrats! You've successfully set up a full-stack Microsoft Agent Framework agent. Your AI agent is now ready to use! Try playing around with the suggestions in the chat to test the integration.
Integrating your Microsoft Agent Framework agent with AG-UI protocol in the backend
In this section, you will learn how to integrate your Microsoft Agent Framework agent with AG-UI protocol to expose it to the frontend.
Let’s jump in.
Step 1: Install Microsoft Agent Framework + AG-UI packages
To get started, install the Microsoft Agent Framework + AG-UI packages together with other necessary dependencies using the commands below.
uv pip install agent-framework agent-framework-ag-ui azure-identity
Step 2: Define your agent state
Once you have installed the required packages, use a JSON schema to define the structure of the agent state that will be synchronised between the backend and the frontend UI, as shown below.
from __future__ import annotations
from textwrap import dedent
from typing import Annotated
# Import core agent framework components for building conversational AI agents
from agent_framework import ChatAgent, ChatClientProtocol, ai_function
# Import CopilotKit integration layer that bridges Agent Framework with the frontend
from agent_framework_ag_ui import AgentFrameworkAgent
from pydantic import Field
# STATE_SCHEMA: Defines the structure of agent state
# STATE_SCHEMA: Defines the structure of agent state
STATE_SCHEMA: dict[str, object] = {
    "proverbs": {
        "type": "array",  # proverbs is a list/array of items
        "items": {"type": "string"},  # each item in the array is a string
        "description": "Ordered list of the user's saved proverbs.",  # human-readable description
    }
}

# ...
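To make the schema's intent concrete, the check it describes can be sketched as a small pure-Python validator. This is purely illustrative (the AG-UI middleware performs the real state handling); `matches_schema` is a hypothetical helper, not part of the agent_framework API.

```python
# Illustrative only: verify that a proposed state update matches the shape
# declared in STATE_SCHEMA (an array of strings under the "proverbs" key).
STATE_SCHEMA: dict[str, object] = {
    "proverbs": {
        "type": "array",
        "items": {"type": "string"},
        "description": "Ordered list of the user's saved proverbs.",
    }
}

def matches_schema(state: dict) -> bool:
    """Return True if every field in `state` satisfies its schema entry."""
    for field, value in state.items():
        spec = STATE_SCHEMA.get(field)
        if spec is None:
            return False  # unknown field not declared in the schema
        if spec["type"] == "array":
            if not isinstance(value, list):
                return False
            if spec.get("items", {}).get("type") == "string" and not all(
                isinstance(item, str) for item in value
            ):
                return False
    return True

print(matches_schema({"proverbs": ["Look before you leap."]}))  # True
print(matches_schema({"proverbs": "not a list"}))               # False
```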
After that, use a state config JSON schema to tell the agent which tool function to call when it needs to update state, as shown below.
from __future__ import annotations
from textwrap import dedent
from typing import Annotated
# Import core agent framework components for building conversational AI agents
from agent_framework import ChatAgent, ChatClientProtocol, ai_function
# Import CopilotKit integration layer that bridges Agent Framework with the frontend
from agent_framework_ag_ui import AgentFrameworkAgent
from pydantic import Field
# ...
# PREDICT_STATE_CONFIG: Maps state fields to the tools that modify them.
# This tells your agent which tool function to call when the agent needs to update
# a particular piece of state, and which argument of that tool contains the new state value.
PREDICT_STATE_CONFIG: dict[str, dict[str, str]] = {
    "proverbs": {
        "tool": "update_proverbs",  # the name of the tool function that updates proverbs
        "tool_argument": "proverbs",  # the parameter in that tool that receives the new proverb list
    }
}

# ...
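The mapping above is easiest to understand as a lookup: given a streamed tool call, find which state field it updates and which argument carries the new value. The sketch below illustrates that lookup only; `resolve_state_update` is a hypothetical helper and not part of the agent_framework API.

```python
# Illustrative only: how a predict-state mapping like PREDICT_STATE_CONFIG
# can be consulted when a tool call arrives.
PREDICT_STATE_CONFIG: dict[str, dict[str, str]] = {
    "proverbs": {
        "tool": "update_proverbs",
        "tool_argument": "proverbs",
    }
}

def resolve_state_update(tool_name: str, tool_args: dict):
    """Return (state_field, new_value) if this tool call updates known state."""
    for field, spec in PREDICT_STATE_CONFIG.items():
        if spec["tool"] == tool_name and spec["tool_argument"] in tool_args:
            return field, tool_args[spec["tool_argument"]]
    return None  # this tool call does not touch any declared state field

print(resolve_state_update("update_proverbs", {"proverbs": ["New proverb"]}))
# ('proverbs', ['New proverb'])
```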
Step 3: Define your agent tools
After defining your agent state, configure your agent tools using the @ai_function decorator to mark them as tools that your AI agent can call, as shown below. The name and description help the agent to understand when and how to use these tools.
from __future__ import annotations
from textwrap import dedent
from typing import Annotated
# Import core agent framework components for building conversational AI agents
from agent_framework import ChatAgent, ChatClientProtocol, ai_function
# Import CopilotKit integration layer that bridges Agent Framework with the frontend
from agent_framework_ag_ui import AgentFrameworkAgent
from pydantic import Field
# ...
# @ai_function decorator: Marks this as a tool that the AI agent can call.
# The name and description help the LLM understand when and how to use this tool.
# @ai_function decorator: Marks this as a tool that the AI agent can call.
# The name and description help the LLM understand when and how to use this tool.
@ai_function(
    name="update_proverbs",
    description=(
        "Replace the entire list of proverbs with the provided values. "
        "Always include every proverb you want to keep."
    ),
)
def update_proverbs(
    proverbs: Annotated[
        list[str],  # the function accepts a list of strings
        Field(
            description=(
                # This description guides the LLM on how to use this parameter correctly,
                # emphasising that this should be the COMPLETE list, not a partial update.
                "The complete source of truth for the user's proverbs. "
                "Maintain ordering and include the full list on each call."
            )
        ),
    ],
) -> str:
    """
    Persist the provided set of proverbs.

    Returns:
        A confirmation message indicating how many proverbs are now tracked.
    """
    return f"Proverbs updated. Tracking {len(proverbs)} item(s)."
# Weather tool: Demonstrates frontend UI integration through tool calls.
# When the agent calls this function, CopilotKit can render a weather card in the UI.
@ai_function(
    name="get_weather",
    description="Share a quick weather update for a location. Use this to render the frontend weather card.",
)
def get_weather(
    location: Annotated[str, Field(description="The city or region to describe. Use fully spelled out names.")],
) -> str:
    """
    Return a short natural-language weather summary.

    Args:
        location: The city or region name to get the weather for.

    Returns:
        A friendly weather description string that can be displayed to the user.
    """
    # Normalise the location string: trim whitespace, capitalise words
    normalized = location.strip().title() or "the requested location"
    # Return a mock weather response (in production, this would query a real API)
    return (
        f"The weather in {normalized} is mild with a light breeze. "
        "Skies are mostly clear—perfect for planning something fun."
    )
# Human-in-the-loop tool: Demonstrates requiring explicit user approval before executing.
# The approval_mode="always_require" parameter ensures this function will pause execution
# and wait for the user to explicitly approve or reject the action via the UI.
@ai_function(
    name="go_to_moon",
    description="Request a playful human-in-the-loop confirmation before launching a mission to the moon.",
    approval_mode="always_require",  # CRITICAL: forces user approval before this tool executes
)
def go_to_moon() -> str:
    """
    Request human approval before continuing.

    Returns:
        A message confirming the approval request has been initiated.
    """
    return "Mission control requested. Awaiting human approval for the lunar launch."
Step 4: Create, configure and integrate your agent with AG-UI protocol
Once you have defined your agent tools, create your agent with Microsoft Agent Framework. Then configure the agent with instructions, chat client, and tools, as shown below.
from __future__ import annotations
from textwrap import dedent
from typing import Annotated
# Import core agent framework components for building conversational AI agents
from agent_framework import ChatAgent, ChatClientProtocol, ai_function
# Import CopilotKit integration layer that bridges Agent Framework with the frontend
from agent_framework_ag_ui import AgentFrameworkAgent
from pydantic import Field
# ...

def create_agent(chat_client: ChatClientProtocol) -> AgentFrameworkAgent:
    """
    Instantiate the Microsoft Agent Framework agent with AG-UI.

    This is the main entry point for creating the agent. It configures the base ChatAgent
    with instructions and tools, then wraps it in an AgentFrameworkAgent for AG-UI integration.

    Args:
        chat_client: The chat client protocol implementation that handles LLM communication.

    Returns:
        A fully configured AgentFrameworkAgent ready to handle user interactions.
    """
    # Step 1: Create the base ChatAgent with Microsoft Agent Framework.
    # This agent handles the core conversational logic and tool execution.
    base_agent = ChatAgent(
        name="proverbs_agent",  # internal identifier for this agent
        # System instructions: these guide the agent's behaviour throughout the conversation.
        # They are carefully crafted to ensure proper state management, tool usage,
        # and user experience.
        instructions=dedent(
            """
            You help users brainstorm, organize, and refine proverbs while coordinating UI updates.
            ...
            """.strip()
        ),
        # The chat client handles the actual LLM API calls (e.g., to OpenAI, Azure OpenAI, etc.)
        chat_client=chat_client,
        # Tools: the functions the agent can call to perform actions
        tools=[update_proverbs, get_weather, go_to_moon],
    )
    # ...
After that, wrap your agent with AgentFrameworkAgent middleware for frontend communication using AG-UI, as shown below.
from __future__ import annotations
from textwrap import dedent
from typing import Annotated
# Import core agent framework components for building conversational AI agents
from agent_framework import ChatAgent, ChatClientProtocol, ai_function
# Import CopilotKit integration layer that bridges Agent Framework with the frontend
from agent_framework_ag_ui import AgentFrameworkAgent
from pydantic import Field
# ...

def create_agent(chat_client: ChatClientProtocol) -> AgentFrameworkAgent:
    """
    Instantiate the Microsoft Agent Framework agent with AG-UI.

    This is the main entry point for creating the agent. It configures the base ChatAgent
    with instructions and tools, then wraps it in an AgentFrameworkAgent for CopilotKit integration.

    Args:
        chat_client: The chat client protocol implementation that handles LLM communication.

    Returns:
        A fully configured AgentFrameworkAgent ready to handle user interactions.
    """
    # Step 1: Create the base ChatAgent with Microsoft Agent Framework.
    # This agent handles the core conversational logic and tool execution.
    # ...

    # Step 2: Wrap the base agent in an AgentFrameworkAgent for AG-UI integration.
    # This wrapper adds CopilotKit-specific features like state sync and frontend coordination.
    return AgentFrameworkAgent(
        agent=base_agent,  # the underlying ChatAgent we just configured
        name="CopilotKitMicrosoftAgentFrameworkAgent",  # display name for the agent
        description="Manages proverbs, weather snippets, and human-in-the-loop moon launches.",
        # STATE_SCHEMA tells CopilotKit what state this agent manages
        state_schema=STATE_SCHEMA,
        # PREDICT_STATE_CONFIG maps state fields to the tools that update them
        predict_state_config=PREDICT_STATE_CONFIG,
        # require_confirmation=False means the agent can update the state and continue
        # without waiting for explicit user confirmation after each state change,
        # allowing a smoother conversational flow.
        require_confirmation=False,
    )
Step 5: Build and configure chat client
Once you have wrapped your agent with AgentFrameworkAgent middleware, build and configure a chat client with Azure OpenAI or OpenAI credentials using their respective clients, as shown below in the ./main.py file.
from __future__ import annotations
import os
# Uvicorn: ASGI server for running FastAPI applications
import uvicorn
# ChatClientProtocol: Interface that all chat clients must implement
from agent_framework._clients import ChatClientProtocol
# ...
# Load environment variables from .env file (must be done before accessing os.getenv)
# This allows developers to configure API keys and endpoints without hardcoding them
load_dotenv()
def _build_chat_client() -> ChatClientProtocol:
    """
    Build and configure the appropriate chat client based on environment variables.

    Returns:
        ChatClientProtocol: A configured chat client ready to communicate with an LLM.

    Raises:
        RuntimeError: If credentials are missing or invalid.
    """
    try:
        # Option 1: Azure OpenAI Service.
        # Check whether an Azure OpenAI endpoint is configured.
        if os.getenv("AZURE_OPENAI_ENDPOINT"):
            # Azure OpenAI uses deployment names instead of model IDs.
            # Deployments are custom instances of models you create in Azure.
            deployment_name = os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME", "gpt-4o-mini")
            return AzureOpenAIChatClient(
                # DefaultAzureCredential tries multiple authentication methods:
                # 1. Environment variables (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, etc.)
                # 2. Managed Identity (if running on Azure infrastructure)
                # 3. Azure CLI credentials (for local development)
                credential=DefaultAzureCredential(),
                # The name of your deployed model in Azure OpenAI Studio
                deployment_name=deployment_name,
                # Your Azure OpenAI resource endpoint (e.g., https://my-resource.openai.azure.com/)
                endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
            )

        # Option 2: OpenAI API (public service).
        # Check whether an OpenAI API key is configured.
        if os.getenv("OPENAI_API_KEY"):
            return OpenAIChatClient(
                # Model identifier from OpenAI's catalogue (e.g., gpt-4o-mini, gpt-4, etc.)
                model_id=os.getenv("OPENAI_CHAT_MODEL_ID", "gpt-4o-mini"),
                # Your OpenAI API key from platform.openai.com
                api_key=os.getenv("OPENAI_API_KEY"),
            )

        # If neither Azure nor OpenAI credentials are found, raise an error
        raise ValueError("Either AZURE_OPENAI_ENDPOINT or OPENAI_API_KEY environment variable is required")
    except Exception as exc:  # pragma: no cover
        # Catch any initialisation errors and surface a helpful message
        raise RuntimeError(f"Failed to configure a chat client: {exc}") from exc
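The credential precedence in `_build_chat_client` (Azure wins when its endpoint is set, otherwise fall back to OpenAI) can be reduced to a pure function over an environment mapping, which makes the rule easy to reason about and test. The `select_provider` helper below is a hypothetical illustration, not part of the generated project.

```python
# Illustrative only: the provider-selection rule from _build_chat_client,
# expressed as a pure function over an environment dict.
def select_provider(env: dict[str, str]) -> str:
    """Return "azure" or "openai" depending on which credentials are present."""
    if env.get("AZURE_OPENAI_ENDPOINT"):
        return "azure"  # Azure takes precedence whenever its endpoint is set
    if env.get("OPENAI_API_KEY"):
        return "openai"  # otherwise fall back to the public OpenAI API
    raise ValueError(
        "Either AZURE_OPENAI_ENDPOINT or OPENAI_API_KEY environment variable is required"
    )

print(select_provider({"AZURE_OPENAI_ENDPOINT": "https://x.openai.azure.com/"}))  # azure
print(select_provider({"OPENAI_API_KEY": "sk-test"}))                             # openai
```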