# AutoGen Agent with Bedrock AgentCore Integration
| Information | Details |
|---|---|
| Agent type | Synchronous |
| Agentic Framework | AutoGen |
| LLM model | OpenAI GPT-4o |
| Components | AgentCore Runtime |
| Example complexity | Easy |
| SDK used | Amazon Bedrock AgentCore Python SDK |
This example demonstrates how to integrate an AutoGen agent with AWS Bedrock AgentCore, enabling you to deploy a conversational agent with tool-using capabilities as a managed service.
## Prerequisites
- Python 3.10+
- uv - Fast Python package installer and resolver
- AWS account with Bedrock access
- OpenAI API key (for the model client; see the export example below)
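
The deployment steps below pass the key with `--env`; if you also want to run the agent script directly, export it in your shell first (placeholder value shown):

```bash
# Placeholder value; substitute your actual OpenAI API key
export OPENAI_API_KEY="sk-..."
```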
## Setup Instructions

### 1. Create a Python Environment with uv

```bash
# Install uv if you don't have it already
pip install uv

# Create and activate a virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```
### 2. Install Requirements

```bash
uv pip install -r requirements.txt
```
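
The authoritative package list is the `requirements.txt` in the repository; as an assumption, a representative set for this sample would be roughly:

```
autogen-agentchat
autogen-ext[openai]
bedrock-agentcore
bedrock-agentcore-starter-toolkit
```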
### 3. Understanding the Agent Code

The `autogen_agent_hello_world.py` file contains an AutoGen agent with a weather tool capability, integrated with Bedrock AgentCore:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
import asyncio
import logging

# Set up logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger("autogen_agent")

# Initialize the model client (reads OPENAI_API_KEY from the environment)
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
)

# Define a simple function tool that the agent can use
async def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is 73 degrees and Sunny."

# Define an AssistantAgent with the model and tool
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a helpful assistant.",
    reflect_on_tool_use=True,
    model_client_stream=True,  # Enable streaming tokens
)

# Integrate with Bedrock AgentCore
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()

@app.entrypoint
async def main(payload):
    # Process the user prompt
    prompt = payload.get("prompt", "Hello! What can you help me with?")

    # Run the agent and stream output to the console
    result = await Console(agent.run_stream(task=prompt))

    # Extract the response content for JSON serialization
    if result and hasattr(result, 'messages') and result.messages:
        last_message = result.messages[-1]
        if hasattr(last_message, 'content'):
            return {"result": last_message.content}
    return {"result": "No response generated"}

app.run()
```
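
When you run the file directly (`python autogen_agent_hello_world.py`), `app.run()` starts a local HTTP server implementing the AgentCore runtime contract. As a quick smoke test, assuming the default local port of 8080 and `OPENAI_API_KEY` exported in your shell:

```bash
# POST a payload to the local AgentCore invocation endpoint (default port assumed)
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "what is the weather in NYC?"}'
```

This should print the JSON returned by the entrypoint, i.e. a `{"result": ...}` object.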
### 4. Configure and Launch with Bedrock AgentCore Toolkit

```bash
# Configure your agent for deployment (entrypoint file from step 3)
agentcore configure -e autogen_agent_hello_world.py

# Deploy your agent, passing your OpenAI API key
agentcore launch --env OPENAI_API_KEY=...
```
### 5. Testing Your Agent

Launch and invoke locally to test:

```bash
agentcore launch -l --env OPENAI_API_KEY=...
agentcore invoke -l '{"prompt": "what is the weather in NYC?"}'
```
The agent will:
- Process your query
- Use the weather tool if appropriate
- Provide a response based on the tool's output
Note: Remove the `-l` flag to launch and invoke in the cloud.
## How It Works
This agent uses AutoGen's agent framework to create an assistant that can:
- Process natural language queries
- Decide when to use tools based on the query (see the sketch below)
- Execute tools and incorporate their results into responses
- Stream responses in real-time
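
Tools are plain Python functions (sync or async); AutoGen builds the tool schema from each function's signature and docstring, which is what lets the model decide when to call one. A minimal sketch of registering a second tool, reusing `model_client` and `get_weather` from the sample above (the extra tool is purely illustrative, not part of the sample):

```python
from datetime import datetime, timezone

# Hypothetical extra tool; name and behavior are illustrative only
async def get_utc_time(city: str) -> str:
    """Get the current UTC time for a given city."""
    return f"In {city} it is {datetime.now(timezone.utc):%H:%M} UTC."

# Register both tools; the model chooses one based on the user's query
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather, get_utc_time],
    system_message="You are a helpful assistant.",
    reflect_on_tool_use=True,
    model_client_stream=True,
)
```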
The agent is wrapped with the Bedrock AgentCore framework, which handles:
- Deployment to AWS
- Scaling and management
- Request/response handling (see the boto3 sketch below)
- Environment variable management
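
Once deployed to the cloud, the agent can also be invoked programmatically instead of through the CLI. A sketch assuming boto3 exposes the AgentCore data plane as the `bedrock-agentcore` client; the ARN and region below are placeholders for your own deployment:

```python
import json
import boto3

# Assumption: the AgentCore data-plane client name is "bedrock-agentcore"
client = boto3.client("bedrock-agentcore", region_name="us-east-1")

response = client.invoke_agent_runtime(
    # Placeholder ARN; use your deployed runtime's ARN
    agentRuntimeArn="arn:aws:bedrock-agentcore:us-east-1:123456789012:runtime/placeholder",
    payload=json.dumps({"prompt": "what is the weather in NYC?"}),
)

# The body is a stream; read it to get the JSON produced by the entrypoint
print(response["response"].read().decode("utf-8"))
```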