How to Make Agents Talk to Each Other (and Your App) Using A2A + AG-UI
TL;DR
This guide explains how to build full-stack Agent-to-Agent (A2A) communication using the A2A Protocol, AG-UI, and CopilotKit. It covers project setup with the CLI, integrating agents from frameworks like Google ADK and LangGraph, and building a frontend for real-time interactions.
Key Takeaways
- A2A Protocol enables standardized communication between AI agents from different frameworks, allowing discovery and collaboration.
- Use CLI commands and dependency setup to quickly configure A2A multi-agent systems with backend and frontend components.
- Integrate orchestrator agents with Google ADK and AG-UI to expose them as ASGI applications for frontend communication.
- Configure A2A remote agents, such as those using LangGraph, to handle specific tasks like itinerary generation in a workflow.
In this guide, you will learn how to build full-stack Agent-to-Agent (A2A) communication between AI agents from different agent frameworks using the A2A Protocol, the AG-UI Protocol, and CopilotKit.
Before we jump in, here is what we will cover:
What is A2A Protocol?
Setting up A2A multi-agent communication using CLI
Integrating AI agents from different agent frameworks with the A2A protocol
Building a frontend for the AG-UI and A2A multi-agent communication using CopilotKit
Here is a preview of what we will be building:
What is A2A Protocol?
The A2A (Agent-to-Agent) Protocol is a standardized communication framework by Google that enables AI agents to discover, communicate, and collaborate in a distributed system, regardless of the framework each agent was built with.
The A2A protocol is designed to facilitate inter-agent communication where agents can call other agents as tools or services, creating a network of specialized AI capabilities.
The key components of the A2A protocol include:
A2A Client: This is the boss agent (we call it the "client agent") that starts everything. It figures out what needs to be done, spots the right helper agents, and hands off the jobs to them. Think of it as the project manager in your code.
A2A Agent: An AI agent that sets up a simple web address (an HTTP endpoint) following A2A rules. It listens for incoming requests, crunches the task, and sends back results or updates. Super useful for making your agent "public" and ready to collaborate.
Agent Card: Imagine a digital ID card in JSON format—easy to read and share. It holds basic info about an A2A agent, like its name, what it does, and how to connect.
Agent Skills: These are like job descriptions for your agent. Each skill outlines one specific thing it's awesome at (e.g., "summarize articles" or "generate images"). Clients read these to know exactly what tasks to assign—no guessing games!
A2A Executor: The brains behind the scenes. It's a function in your code that does the heavy lifting: takes a request, runs the logic to solve the task, and spits out a response or triggers events.
A2A Server: The web server side of things. It turns your agent's skills into something shareable over the internet. You'll set it up with A2A's request handler, build a lightweight web app using Starlette (a Python web framework), and fire it up with Uvicorn (a speedy server runner). Boom—your agent is online and ready for action!
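To make the Agent Card idea concrete, here is a minimal sketch of what one might look like, expressed as a Python dict. The field names and values below are illustrative, not the exact A2A schema; check the A2A spec for the real card format.

```python
# A hypothetical Agent Card for an itinerary agent.
# Field names are illustrative; see the A2A spec for the exact schema.
agent_card = {
    "name": "Itinerary Agent",
    "description": "Creates day-by-day travel itineraries",
    "url": "http://localhost:10000/",  # where the agent's A2A endpoint lives
    "version": "1.0.0",
    "skills": [
        {
            "id": "generate_itinerary",
            "name": "Generate Itinerary",
            "description": "Builds a day-by-day plan for a destination",
            "tags": ["travel", "planning"],
        }
    ],
}

# A client agent reads the card to decide which tasks this agent can handle.
skill_ids = [skill["id"] for skill in agent_card["skills"]]
print(skill_ids)  # ['generate_itinerary']
```

Because the card is plain JSON, any client, in any language, can fetch it and decide whether this agent fits the task at hand.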
If you want to dive deeper into how the A2A protocol works and its setup, check out the docs here: A2A protocol docs.
Now that you have learned what the A2A protocol is, let us see how to use it together with AG-UI and CopilotKit to build full-stack A2A AI agents.
Prerequisites
To fully understand this tutorial, you need a basic understanding of React or Next.js.
We'll also make use of the following:
Python - a popular programming language for building AI agents with AI agent frameworks; make sure it is installed on your computer.
AG-UI Protocol - The Agent User Interaction Protocol (AG-UI), developed by CopilotKit, is an open-source, lightweight, event-based protocol that facilitates rich, real-time interactions between the frontend and your AI agent backend.
Google ADK - an open-source framework designed by Google to simplify the process of building complex and production-ready AI agents.
LangGraph - a framework for creating and deploying AI agents. It also helps to define the control flows and actions to be performed by the agent.
Gemini API Key - an API key to enable you to perform various tasks using the Gemini models for ADK agents.
CopilotKit - an open-source copilot framework for building custom AI chatbots, in-app AI agents, and text areas.
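Since AG-UI is event-based, it helps to picture what actually travels between the agent backend and the frontend. The sketch below is a simplified, hypothetical event sequence: the event type names follow AG-UI's published event list, but treat the payload shapes as illustrative rather than the protocol's exact wire format.

```python
# A simplified view of an AG-UI event stream for one agent run.
# Event type names follow AG-UI's event list; payloads are illustrative.
events = [
    {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"},
    {"type": "TEXT_MESSAGE_START", "messageId": "m1", "role": "assistant"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Here is your "},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "3-day itinerary."},
    {"type": "TEXT_MESSAGE_END", "messageId": "m1"},
    {"type": "RUN_FINISHED", "threadId": "t1", "runId": "r1"},
]

# A frontend concatenates the streamed deltas to render the message in real time.
text = "".join(e["delta"] for e in events if e["type"] == "TEXT_MESSAGE_CONTENT")
print(text)  # Here is your 3-day itinerary.
```

This streaming-by-deltas design is what lets CopilotKit render agent responses token by token instead of waiting for the full reply.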
Setting up A2A multi-agent communication using CLI
In this section, you will learn how to set up an A2A client (orchestrator agent) and A2A agents using a CLI command that scaffolds the backend with Google ADK and the AG-UI protocol, and the frontend with CopilotKit.
Let’s get started.
Step 1: Run CLI command
If you don’t already have a pre-configured AG-UI agent, you can set one up quickly by running the CLI command below in your terminal.
npx copilotkit@latest create -f a2a
Then give your project a name as shown below.
Step 2: Install frontend dependencies
Once your project has been created successfully, install dependencies using your preferred package manager:
npm install
Step 3: Install backend dependencies
After installing the frontend dependencies, install the backend dependencies:
cd agents
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
cd ..
Step 4: Set up environment variables
Once you have installed the backend dependencies, set up environment variables:
cp .env.example .env
# Edit .env and add your API keys:
# GOOGLE_API_KEY=your_google_api_key
# OPENAI_API_KEY=your_openai_api_key
Step 5: Start all services
After setting up the environment variables, start all the services, including the backend and the frontend:
npm run dev
Once the development server is running, navigate to http://localhost:3000/ and you should see your A2A multi-agent frontend up and running.
Congrats! You've successfully set up A2A multi-agent communication. Try asking your agent to research a topic, like "Please research quantum computing". You'll see that it sends messages to the research agent and the analysis agent, then presents the complete research and analysis to the user.
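Under the hood, the orchestrator's delegation step boils down to matching a request against each agent's advertised skills and forwarding the task. The toy sketch below illustrates that pattern in plain Python; the agent names and skill-matching logic are hypothetical, not the scaffolded project's actual code.

```python
# Toy illustration of client-side delegation: match a request against each
# agent's advertised skills, then forward the task. Names are hypothetical.
AGENTS = {
    "research_agent": {"skills": ["research"]},
    "analysis_agent": {"skills": ["analysis"]},
}

def pick_agent(task: str) -> str:
    """Return the first agent whose advertised skill appears in the task text."""
    for name, card in AGENTS.items():
        if any(skill in task.lower() for skill in card["skills"]):
            return name
    raise LookupError(f"No agent can handle: {task}")

print(pick_agent("Please research quantum computing"))  # research_agent
```

In a real A2A system, the "skills" come from each agent's Agent Card, and the LLM (not a substring check) decides which agent to call next.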
Integrating the orchestrator agent with Google ADK and AG-UI protocol in the backend
In this section, you will learn how to integrate your orchestrator agent with Google ADK and AG-UI protocol to expose it to the frontend as an ASGI application.
Let’s jump in.
Step 1: Setting up the backend
To get started, clone the A2A-Travel repository that consists of a Python-based backend (agents) and a Next.js frontend.
Next, navigate to the backend directory:
cd agents
Then create a new Python virtual environment:
python -m venv .venv
After that, activate the virtual environment:
source .venv/bin/activate # On Windows: .venv\Scripts\activate
Finally, install all the Python dependencies listed in the requirements.txt file:
pip install -r requirements.txt
Step 2: Configure your Orchestrator ADK agent
Once you have set up the backend, configure your orchestrator ADK agent by defining the agent name, specifying Gemini 2.5 Pro as the Large Language Model (LLM), and defining the agent’s instructions, as shown below in the agents/orchestrator.py file.
# Import Google ADK components for LLM agent creation
from google.adk.agents import LlmAgent

# === ORCHESTRATOR AGENT CONFIGURATION ===
# Create the main orchestrator agent using Google ADK's LlmAgent
# This agent coordinates all travel planning activities and manages the workflow
orchestrator_agent = LlmAgent(
    name="OrchestratorAgent",
    model="gemini-2.5-pro",  # Use the more powerful Pro model for complex orchestration
    instruction="""
    You are a travel planning orchestrator agent. Your role is to coordinate specialized agents
    to create personalized travel plans.

    AVAILABLE SPECIALIZED AGENTS:
    1. **Itinerary Agent** (LangGraph) - Creates day-by-day travel itineraries with activities
    2. **Restaurant Agent** (LangGraph) - Recommends restaurants for breakfast, lunch, and dinner by day
    3. **Weather Agent** (ADK) - Provides weather forecasts and packing advice
    4. **Budget Agent** (ADK) - Estimates travel costs and creates budget breakdowns

    CRITICAL CONSTRAINTS:
    - You MUST call agents ONE AT A TIME, never make multiple tool calls simultaneously
    - After making a tool call, WAIT for the result before making another tool call
    - Do NOT make parallel/concurrent tool calls - this is not supported

    RECOMMENDED WORKFLOW FOR TRAVEL PLANNING:
    # ...
    """,
)
Step 3: Create an ADK middleware agent instance
After configuring your orchestrator ADK agent, create an ADK middleware agent instance that wraps your orchestrator ADK agent to integrate it with the AG-UI protocol, as shown below in the agents/orchestrator.py file.
# Import AG-UI ADK components for frontend integration
from ag_ui_adk import ADKAgent

# ...

# === AG-UI PROTOCOL INTEGRATION ===
# Wrap the orchestrator agent with AG-UI Protocol capabilities
# This enables frontend communication and provides the interface for user interactions
adk_orchestrator_agent = ADKAgent(
    adk_agent=orchestrator_agent,   # The core LLM agent we created above
    app_name="orchestrator_app",    # Unique application identifier
    user_id="demo_user",            # Default user ID for demo purposes
    session_timeout_seconds=3600,   # Session timeout (1 hour)
    use_in_memory_services=True     # Use in-memory storage for simplicity
)
Step 4: Configure a FastAPI endpoint
Once you have created an ADK middleware agent instance, configure a FastAPI endpoint that exposes your AG-UI wrapped orchestrator ADK agent to the frontend, as shown below in the agents/orchestrator.py file.
# Import necessary libraries for web server and environment variables
import os
import uvicorn

# Import FastAPI for creating HTTP endpoints
from fastapi import FastAPI

# Import the helper that mounts the AG-UI agent on a FastAPI app
from ag_ui_adk import add_adk_fastapi_endpoint

# ...

# === FASTAPI WEB APPLICATION SETUP ===
# Create the FastAPI application that will serve the orchestrator agent
# This provides HTTP endpoints for the AG-UI Protocol communication
app = FastAPI(title="Travel Planning Orchestrator (ADK)")

# Add the ADK agent endpoint to the FastAPI application
# This creates the necessary routes for AG-UI Protocol communication
add_adk_fastapi_endpoint(app, adk_orchestrator_agent, path="/")

# === MAIN APPLICATION ENTRY POINT ===
if __name__ == "__main__":
    """
    Main entry point when the script is run directly.

    This block:
    1. Checks for required environment variables (API keys)
    2. Configures the server port
    3. Starts the uvicorn server with the FastAPI application
    """
    # Check for required Google API key
    if not os.getenv("GOOGLE_API_KEY"):
        print("⚠️ Warning: GOOGLE_API_KEY environment variable not set!")
        print("   Set it with: export GOOGLE_API_KEY='your-key-here'")
        print("   Get a key from: https://aistudio.google.com/app/apikey")
        print()

    # Get server port from environment variable, default to 9000
    port = int(os.getenv("ORCHESTRATOR_PORT", 9000))

    # Start the server with detailed information
    print(f"🚀 Starting Orchestrator Agent (ADK + AG-UI) on http://localhost:{port}")

    # Run the FastAPI application using uvicorn
    # host="0.0.0.0" allows external connections
    # port is configurable via the environment variable
    uvicorn.run(app, host="0.0.0.0", port=port)
Congrats! You've successfully integrated your orchestrator ADK agent with the AG-UI protocol, and it is available at the http://localhost:9000 endpoint (or whichever port you specified).
Integrating AI agents from different agent frameworks with the A2A protocol
In this section, you will learn how to integrate AI agents from different agent frameworks with the A2A protocol.
Let’s get started!
Step 1: Configure A2A remote agents
To get started, configure your A2A remote agent, such as the itinerary agent that uses the LangGraph framework, as shown in the agents/itinerary_agent.py file.
# Import LangGraph components for workflow management
from langgraph.graph import StateGraph, END

# ...

# === MAIN AGENT CLASS ===
class ItineraryAgent:
    """
    Main agent class that handles itinerary generation using a LangGraph workflow.
    """

    def __init__(self):
        self.llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
        # Build and compile the LangGraph workflow
        self.graph = self._build_graph()

    def _build_graph(self):
        workflow = StateGraph(ItineraryState)
        workflow.add_node("parse_request", self._parse_request)
        workflow.add_node("create_itinerary", self._create_itinerary)
        workflow.set_entry_point("parse_request")
        workflow.add_edge("parse_request", "create_itinerary")
        workflow.add_edge("create_itinerary", END)
        # Compile the workflow into an executable graph
        return workflow.compile()

    def _parse_request(self, state: ItineraryState) -> ItineraryState:
        message = state["message"]

        # Create a focused prompt for the extraction task
        prompt = f"""
        Extract the destination and number of days from this travel request.
        Return ONLY a JSON string with 'destination' and 'days' fields.

        Request: {message}

        Example output: {{"destination": "Tokyo", "days": 3}}
        """

        # Get LLM response for parsing
        response = self.llm.invoke(prompt)

        # Debug: Print the LLM response for troubleshooting
        print(response.content)

        try:
            # Attempt to parse the JSON response
            parsed = json.loads(response.content)
            state["destination"] = parsed.get("destination", "Unknown")
            state["days"] = int(parsed.get("days", 3))
        except (json.JSONDecodeError, ValueError, TypeError):
            # Fallback values if parsing fails
            print("⚠️ Failed to parse request, using defaults")
            state["destination"] = "Unknown"
            state["days"] = 3

        return state
    def _create_itinerary(self, state: ItineraryState) -> ItineraryState:
        destination = state["destination"]
        days = state["days"]

        # Create detailed prompt for itinerary generation
        prompt = f"""
        Create a detailed {days}-day travel itinerary for {destination}.

        # ...

        Make it realistic, interesting, and include specific place names.
        Return ONLY valid JSON, no markdown, no other text.
        """

        # Generate itinerary using LLM
        response = self.llm.invoke(prompt)
        content = response.content.strip()

        # Clean up response - remove markdown formatting if present
        if "```json" in content:
            content = content.split("```json")[1].split("```")[0].strip()
        elif "```" in content:
            content = content.split("```")[1].split("```")[0].strip()

        try:
            # Step 1: Parse JSON from LLM response
            structured_data = json.loads(content)

            # Step 2: Validate structure using Pydantic model
            validated_itinerary = StructuredItinerary(**structured_data)

            # Step 3: Store both validated data and formatted JSON string
            state["structured_itinerary"] = validated_itinerary.model_dump()
            state["itinerary"] = json.dumps(validated_itinerary.model_dump(), indent=2)

            print("✅ Successfully created structured itinerary")
        # ... (exception handling elided)

        return state
    async def invoke(self, message: Message) -> str:
        # Extract text content from A2A message format
        message_text = message.parts[0].root.text
        print("Invoking itinerary agent with message: ", message_text)

        # Execute the LangGraph workflow with initial state
        result = self.graph.invoke({
            "message"