Easily Build a Frontend for your AWS Strands Agents using AG-UI in 30 minutes
TL;DR
This guide demonstrates how to build a frontend for AWS Strands AI agents using AG-UI and CopilotKit in 30 minutes. It covers setting up the backend with AG-UI protocol and integrating a React frontend with CopilotKit for seamless communication.
Key Takeaways
- AWS Strands is an open-source, model-agnostic framework for building AI agents with features like tool integration and multi-agent systems.
- The AG-UI protocol bridges the backend (AWS Strands) and frontend (CopilotKit), enabling state synchronization and tool communication.
- The setup involves CLI commands for project creation, dependency installation, and configuring shared state between the agent and UI.
- CopilotKit lets you build a React-based frontend that interacts with the AG-UI-wrapped Strands agent for real-time AI interactions.
In this guide, you will learn how to build a frontend for your AWS Strands Agents using AG-UI Protocol and CopilotKit. AWS Strands will power the AI agents backend, while CopilotKit powers the frontend, and then AG-UI creates a bridge that enables the frontend to communicate with the backend.
Before we jump in, here is what we will cover:
What is AWS Strands?
Setting up an AWS Strands + AG-UI + CopilotKit agent using CLI
Integrating your AWS Strands agent with AG-UI protocol in the backend
Building a frontend for your AWS Strands + AG-UI agent using CopilotKit
Here is a preview of what you can build using AWS Strands + AG-UI + CopilotKit.
What is AWS Strands?
AWS Strands, also known as Strands Agents, is an open-source agent development framework developed by the AWS team for building and deploying AI agents.
The framework adopts a model-driven approach where it leverages model reasoning to plan, orchestrate tasks, and reflect on goals.
Moreover, Strands is model-agnostic and supports various LLM providers, including Amazon Bedrock, Anthropic, OpenAI, Meta Llama, and local models via Ollama or LiteLLM.
Below are the key features and capabilities of AWS Strands:
Tool Integration: It provides straightforward mechanisms for integrating custom tools and comes with native support for the Model Context Protocol (MCP), which allows agents to access a vast ecosystem of third-party tools.
Multi-Agent Systems: Strands simplifies the creation of complex systems where multiple specialized agents collaborate on complex tasks.
Observability: It includes native features for observability, such as traces, metrics, and logs, which are essential for debugging and monitoring agent performance.
You can learn more about AWS Strands by visiting the Strands Agents docs.
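To make the model-driven approach concrete, here is a stripped-down, dependency-free sketch of the agentic loop that frameworks like Strands implement for you. The `fake_model` function is a stand-in for a real LLM, not Strands code:

```python
# A toy agentic loop: the "model" decides between calling a tool and
# answering; the runtime executes tools and feeds results back.
def get_time(args):
    # Stand-in tool; a real tool would do I/O or computation
    return "12:00"

TOOLS = {"get_time": get_time}

def fake_model(messages):
    # A real LLM would reason over the conversation; this stub requests
    # one tool call, then produces a final answer from the tool result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"answer": f"The time is {messages[-1]['content']}."}

def run_agent(user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_model(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What time is it?"))  # The time is 12:00.
```

Strands runs this loop for you with a real model doing the planning and reflection, which is what "leveraging model reasoning to orchestrate tasks" means in practice.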
Prerequisites
Node.js 20+
Python 3.12+
OpenAI API Key (for the Strands agent)
Any of the following package managers:
- pnpm (recommended)
- npm
- yarn
- bun
Setting up an AWS Strands + AG-UI + CopilotKit agent using CLI
In this section, you will learn how to set up a full-stack AWS Strands agent using a CLI command that sets up the backend using AG-UI protocol and the frontend using CopilotKit.
Let’s get started.
Step 1: Run CLI command
If you don’t already have an AWS Strands agent, you can quickly set one up by running the CLI command below in your terminal.
npx copilotkit@latest create -f aws-strands-py
Then give your project a name as shown below.
Step 2: Install dependencies
Once your project has been created successfully, install dependencies using your preferred package manager:
# Using pnpm (recommended)
pnpm install
# Using npm
npm install
# Using yarn
yarn install
# Using bun
bun install
Step 3: Set up your OpenAI key
After installing the dependencies, create a .env file in the root folder and add your OpenAI API key.
OPENAI_API_KEY="your-openai-api-key-here"
Step 4: Run development server
Then start the development server using your preferred package manager:
# Using pnpm
pnpm dev
# Using npm
npm run dev
# Using yarn
yarn dev
# Using bun
bun run dev
Once the development server is running, navigate to http://localhost:3000/, and you should see your AWS Strands + AG-UI + CopilotKit agent up and running.
Congrats! You've successfully set up a full-stack AWS Strands agent. Your AI agent is now ready to use! Try playing around with the suggestions in the chat to test the integration.
Integrating your AWS Strands agent with AG-UI protocol in the backend
In this section, you will learn how to integrate your AWS Strands agent with the AG-UI protocol to expose it to the frontend.
Let’s jump in.
Step 1: Install AWS Strands + AG-UI packages
To get started, install the AWS Strands + AG-UI packages, along with other necessary dependencies, using the commands below.
uv pip install ag-ui-protocol 'strands-agents[openai]' strands-agents-tools ag_ui_strands
Step 2: Import required packages
Once you have installed the required packages, import them as shown below in a main.py file.
# AG-UI / Strands integration helpers:
# - StrandsAgent: Wrapper that makes a Strands Agent compatible with AG-UI protocol
# - StrandsAgentConfig: Configuration for shared state and tool behaviors
# - ToolBehavior: Defines how specific tools interact with the AG-UI state system
# - create_strands_app: Factory function that creates a FastAPI application
from ag_ui_strands import (
    StrandsAgent,
    StrandsAgentConfig,
    ToolBehavior,
    create_strands_app,
)

# Standard library: JSON handling, environment variables, and type hints
# (used by the tools and state functions defined in the following steps)
import json
import os
from typing import List

# dotenv: Loads environment variables from a .env file for local development
from dotenv import load_dotenv

# Pydantic: Used for data validation and schema definition
from pydantic import BaseModel, Field

# Strands SDK:
# - Agent: The core agent class that orchestrates LLM interactions
# - tool: Decorator to mark functions as tools the agent can call
from strands import Agent, tool

# OpenAI model wrapper from Strands for using GPT models
from strands.models.openai import OpenAIModel

# Load variables from .env (e.g., OPENAI_API_KEY) into the process environment
load_dotenv()

# ...
Step 3: Define your AWS Strands Agent Data Models
After importing the required packages, define Pydantic models for structured data validation, ensuring that tool inputs are validated before processing, as shown below.
# Pydantic: Used for data validation and schema definition
from pydantic import BaseModel, Field

# ...

class ProverbsList(BaseModel):
    """Pydantic model representing the entire proverbs list.

    We use a dedicated model to validate tool inputs for `update_proverbs`.
    The `proverbs` field is a list of strings describing the stored proverbs.
    """

    # The proverbs field is a list of strings with a description for the LLM
    proverbs: List[str] = Field(description="The complete list of proverbs")

# ...
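As a quick sanity check (assuming Pydantic v2), you can watch the model accept a well-formed payload and reject a malformed one before any tool logic runs:

```python
from typing import List

from pydantic import BaseModel, Field, ValidationError

class ProverbsList(BaseModel):
    proverbs: List[str] = Field(description="The complete list of proverbs")

# Well-formed input parses into a typed object
ok = ProverbsList.model_validate({"proverbs": ["Look before you leap"]})
print(ok.proverbs)  # ['Look before you leap']

# Malformed input (a string instead of a list) is rejected up front
try:
    ProverbsList.model_validate({"proverbs": "not a list"})
except ValidationError:
    print("rejected")
```

This is the point of the dedicated model: bad tool arguments fail loudly at the boundary instead of corrupting shared state later.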
Step 4: Define your AWS Strands Agent Tools
Once you have defined your agent data models, define your agent tools, decorating them with @tool to register them with the Strands framework, as shown below.
# ...

@tool
def get_weather(location: str):
    """Backend tool example: returns weather information for a location.

    This is a simple synchronous tool demonstrating a backend operation. In a
    real app this would call an external weather API. Here it returns a JSON
    string for simplicity so agents and UI can display structured data.

    Args:
        location: The location to get weather for (e.g., "Seattle, WA")

    Returns:
        A JSON string containing weather information.
    """
    # Return a JSON-encoded placeholder response
    # In production, this would call an actual weather API like OpenWeatherMap
    return json.dumps({"location": location, "weather": "70 degrees"})

@tool
def set_theme_color(theme_color: str):
    """Frontend-only tool example: request a UI theme color change.

    Frontend tools are declared on the backend so the agent can request UI
    actions, but actual execution happens in the browser via a frontend
    integration (e.g., `useFrontendTool`). Because the frontend performs the
    action, this backend implementation returns `None`.

    Args:
        theme_color: CSS color (hex, name, or rgb) that the UI should apply
    """
    # Return None because the actual implementation is in the frontend
    # The frontend will intercept this tool call and apply the theme change
    return None

@tool
def update_proverbs(proverbs_list: ProverbsList):
    """Backend tool: replace the entire proverbs list.

    IMPORTANT: This tool expects the complete list of proverbs, not just
    additions. When the UI or user wants to modify the list, always send the
    full array so the backend can store a canonical snapshot.

    Args:
        proverbs_list: A validated `ProverbsList` instance containing the
            complete array of proverbs.

    Returns:
        A success message string on completion.
    """
    # In a real app, this would persist to a DB. Here we return success.
    # The state synchronization happens via the ToolBehavior config below.
    return "Proverbs updated successfully."

# ...
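Because the tool bodies are plain Python, you can sanity-check their output shape in isolation. Here is a stdlib-only sketch of a weather tool without the @tool decorator (the payload shape is illustrative, not a fixed API):

```python
import json

def get_weather(location: str) -> str:
    # Same idea as the tool above: return a JSON-encoded placeholder
    return json.dumps({"location": location, "weather": "70 degrees"})

data = json.loads(get_weather("Seattle, WA"))
print(data["location"])  # Seattle, WA
```

Returning a JSON string (rather than a bare value) keeps the result structured, so both the agent and the UI can parse and display individual fields.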
Step 5: Define State Management Functions
After defining your agent tools, configure state management functions to handle state synchronization between the agent and the UI, as shown below.
# ...

def build_proverbs_prompt(input_data, user_message: str) -> str:
    """Build a model prompt that includes the current proverbs state.

    When the agent composes messages for the model, we inject the current
    `proverbs` array into the prompt so the model can reference or modify it.
    This function acts as a "state context builder" - it takes the current
    shared state and incorporates it into the prompt sent to the LLM.

    Args:
        input_data: The shared state container (may include a `state` key)
        user_message: The user's raw request/message

    Returns:
        The combined prompt string, with proverbs included when available.
    """
    # Step 5a: Try to extract the state dictionary from input_data
    state_dict = getattr(input_data, "state", None)

    # Step 5b: Check if we have a valid state with proverbs
    if isinstance(state_dict, dict) and "proverbs" in state_dict:
        # Step 5c: Pretty-print the proverbs array for the model to read easily
        # Using indent=2 makes the JSON human-readable in the prompt
        proverbs_json = json.dumps(state_dict["proverbs"], indent=2)

        # Step 5d: Combine the proverbs context with the user message
        return (
            f"Current proverbs list:\n{proverbs_json}\n\nUser request: {user_message}"
        )

    # Step 5e: Fall back to the plain user message if no proverbs state is available
    return user_message

async def proverbs_state_from_args(context):
    """Extract a `{'proverbs': [...]}` snapshot from tool arguments.

    This function is used by the shared state system when the `update_proverbs`
    tool runs. It inspects the tool input and returns a small dictionary
    containing only the `proverbs` array so AG-UI can update its local state.
    This is an async function because AG-UI's state handling may involve
    async operations in more complex scenarios.

    Args:
        context: ToolResultContext containing `tool_input` (string or dict)

    Returns:
        A dictionary like `{"proverbs": [...]}` or `None` on error.
    """
    try:
        # Step 5f: Get the raw tool input from the context
        tool_input = context.tool_input

        # Step 5g: If the tool input was serialized to a string, parse it back
        # This handles cases where the LLM sends the input as a JSON string
        if isinstance(tool_input, str):
            tool_input = json.loads(tool_input)

        # Step 5h: Some call sites may package the list under the `proverbs_list` key
        # This provides flexibility in how the data is structured
        proverbs_data = tool_input.get("proverbs_list", tool_input)

        # Step 5i: Ensure we extract the array safely with proper type checking
        if isinstance(proverbs_data, dict):
            proverbs_array = proverbs_data.get("proverbs", [])
        else:
            proverbs_array = []

        # Step 5j: Return the state snapshot in the expected format
        return {"proverbs": proverbs_array}
    except Exception:
        # Step 5k: Return None to indicate we couldn't build a valid state snapshot
        # AG-UI will handle this gracefully and not update the state
        return None

# ...
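To see what these two functions actually produce, you can exercise them standalone, using SimpleNamespace to stand in for AG-UI's input and tool-result context objects (a sketch under that assumption; the real objects come from ag_ui_strands):

```python
import asyncio
import json
from types import SimpleNamespace

def build_proverbs_prompt(input_data, user_message: str) -> str:
    # Same logic as above: inject the current proverbs state into the prompt
    state_dict = getattr(input_data, "state", None)
    if isinstance(state_dict, dict) and "proverbs" in state_dict:
        proverbs_json = json.dumps(state_dict["proverbs"], indent=2)
        return f"Current proverbs list:\n{proverbs_json}\n\nUser request: {user_message}"
    return user_message

async def proverbs_state_from_args(context):
    # Same logic as above: pull a {"proverbs": [...]} snapshot from tool input
    try:
        tool_input = context.tool_input
        if isinstance(tool_input, str):
            tool_input = json.loads(tool_input)
        proverbs_data = tool_input.get("proverbs_list", tool_input)
        if isinstance(proverbs_data, dict):
            proverbs_array = proverbs_data.get("proverbs", [])
        else:
            proverbs_array = []
        return {"proverbs": proverbs_array}
    except Exception:
        return None

# Prompt building with state present
fake_input = SimpleNamespace(state={"proverbs": ["Look before you leap"]})
prompt = build_proverbs_prompt(fake_input, "Add one about patience")
print(prompt.splitlines()[0])  # Current proverbs list:

# State extraction from a JSON-string tool input
ctx = SimpleNamespace(tool_input=json.dumps({"proverbs_list": {"proverbs": ["A stitch in time"]}}))
snapshot = asyncio.run(proverbs_state_from_args(ctx))
print(snapshot)  # {'proverbs': ['A stitch in time']}
```

Together these give the round trip: state flows into the model via the prompt builder, and state flows back out of tool calls via the extractor.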
Step 6: Configure Shared State between your Agent and UI
Once you have defined your state management functions, configure how your AWS Strands agent and the UI share state, and how specific tools behave, using AG-UI’s StrandsAgentConfig class together with the functions you defined earlier, as shown below.
# ...

shared_state_config = StrandsAgentConfig(
    # Step 6a: Inject proverbs into the prompt using the builder function above
    # This ensures the LLM always has context about the current proverbs list
    state_context_builder=build_proverbs_prompt,
    # Step 6b: Define tool-specific behaviors for AG-UI integration
    # This dictionary maps tool names to their ToolBehavior configurations
    tool_behaviors={
        "update_proverbs": ToolBehavior(
            # Step 6c: Skip messages snapshot to avoid redundant state updates
            # When this is True, the tool result won't trigger a full message sync
            skip_messages_snapshot=True,
            # Step 6d: Use our custom state extractor function
            # This tells AG-UI how to extract state changes from tool arguments
            state_from_args=proverbs_state_from_args,
        )
    },
)

# ...
Step 7: Configure your AWS Strands Agent
After configuring shared state, configure your AWS Strands agent by first initializing the OpenAI model using your OpenAI API key, as shown below.
# ...

# Step 7a: Retrieve the OpenAI API key from environment variables
# Default to empty string if not set (will cause authentication errors)
api_key = os.getenv("OPENAI_API_KEY", "")

# Step 7b: Create the OpenAI model instance with configuration
model = OpenAIModel(
    # Pass API key through client_args for the underlying OpenAI client
    client_args={"api_key": api_key},
    # Use GPT-4o (the optimized version); change as appropriate for your account
    model_id="gpt-4o",
)

# ...
Then define the system prompt that sets the agent's persona and behavior, as shown below.
# ...

system_prompt = (
    "You are a helpful and wise assistant who helps manage a collection of proverbs."
)

# ...
Finally, create a Strands Agent with the model and declared tools, as shown below.
# ...

strands_agent = Agent(
    model=model,  # The LLM to use
    system_prompt=system_prompt,  # Agent's persona
    tools=[update_proverbs, get_weather, set_theme_color],  # Available tools
)

# ...
Step 8: Integrate your AWS Strands Agent with AG-UI Protocol
Once you have configured your AWS Strands agent, integrate it with AG-UI protocol using AG-UI’s StrandsAgent wrapper, as shown below.
# ...

# Step 8a: Create a StrandsAgent wrapper that adds AG-UI capabilities
agui_agent = StrandsAgent(
    agent=strands_agent,  # The underlying Strands agent
    name="proverbs_agent",  # Unique identifier for this agent
    description="A proverbs assistant that collaborates with you to manage proverbs",
    config=shared_state_config,  # Shared state and tool behavior configuration
)

# ...
Step 9: Create a FastAPI Server
After integrating your AWS Strands agent with AG-UI protocol, create a FastAPI server using the AG-UI factory function to expose your Strands + AG-UI agent to the frontend, as shown below.
# ...

app = create_strands_app(agui_agent, "/")

if __name__ == "__main__":
    # Step 9a: Import uvicorn ASGI server (only needed when running directly)
    import uvicorn

    # Step 9b: Start the development server with auto-reload enabled
    # - "main:app": Module and app variable reference for uvicorn
    # - host="0.0.0.0": Listen on all network interfaces
    # - port=8000: Default HTTP port for development
    # - reload=True: Auto-restart on code changes (development mode)
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
Congrats! You've successfully integrated your Python AWS Strands agent with the AG-UI protocol, and it is now available at the http://localhost:8000 endpoint (or whichever port you specified).
Let’s now see how to add a frontend to your AG-UI wrapped AWS Strands agent.
Building a frontend for your AWS Strands + AG-UI agent using CopilotKit
In this section, you will learn how to add a frontend to your AWS Strands + AG-UI agent using CopilotKit, which runs anywhere that React runs.
Let’s get started.
Step 1: Install CopilotKit packages
To get started, install the latest CopilotKit packages in your frontend.
npm install @copilotkit/react-ui @copilotkit/react-core @copilotkit/runtime @ag-ui/client


