mirror of
https://github.com/awslabs/amazon-bedrock-agentcore-samples.git
synced 2025-09-08 20:50:46 +00:00
* feat: integrate long-term memory system into SRE agent

  - Add AgentCore Memory integration with three memory strategies:
    * User preferences (escalation, notification, workflow preferences)
    * Infrastructure knowledge (dependencies, patterns, baselines)
    * Investigation summaries (timeline, actions, findings)
  - Implement memory tools for save/retrieve operations
  - Add automatic memory capture through hooks and pattern recognition
  - Extend agent state to support memory context
  - Integrate memory-aware planning in supervisor agent
  - Add comprehensive test coverage for memory functionality
  - Create detailed documentation with usage examples

  This transforms the SRE agent from a stateless tool into a learning assistant that becomes more valuable over time by remembering user preferences, infrastructure patterns, and investigation outcomes. Addresses issue #164.

* feat: environment variable config, agent routing fixes, and project organization

  - Move USER_ID/SESSION_ID from metadata parsing to environment variables
  - Add .memory_id to .gitignore for local memory state
  - Update .gitignore to use .scratchpad/ folder instead of .scratchpad.md
  - Fix agent routing issues with supervisor prompt and graph node naming
  - Add conversation memory tracking for all agents and supervisor
  - Improve agent metadata system with centralized constants
  - Add comprehensive logging and debugging for agent tool access
  - Update deployment script to pass user_id/session_id in payload
  - Create .scratchpad/ folder structure for better project organization

* feat: enhance SRE agent with automatic report archiving and error fixes

  - Add automatic archiving system for reports by date
  - Include user_id in report filenames for better organization
  - Fix Pydantic validation error with string-to-list conversion for investigation steps
  - Add content length truncation for memory storage to prevent validation errors
  - Remove status line from report output for cleaner formatting
  - Implement date-based folder organization (YYYY-MM-DD format)
  - Add memory content length limits configuration in constants

  Key improvements:
  - Reports now auto-archive old files when saving new ones
  - User-specific filenames: query_user_id_UserName_YYYYMMDD_HHMMSS.md
  - Robust error handling for memory content length limits
  - Backward compatibility with existing filename formats

* feat: fix memory retrieval system for cross-session searches and user personalization

  Key fixes and improvements:
  - Fix case preservation in actor_id sanitization (Carol remains Carol, not carol)
  - Enable cross-session memory searches for infrastructure and investigation memories
  - Add XML parsing support for investigation summaries stored in XML format
  - Enhance user preference integration throughout the system
  - Add comprehensive debug logging for memory retrieval processes
  - Update prompts to support user-specific communication styles and preferences

  The memory system now properly:
  - Preserves user case in memory namespaces (/sre/users/Carol vs /sre/users/carol)
  - Searches across all sessions for planning context vs session-specific for current state
  - Parses both JSON and XML formatted investigation memories
  - Adapts investigation approach based on user preferences and historical patterns
  - Provides context-aware planning using infrastructure knowledge and past investigations

* feat: enhance SRE agent with user-specific memory isolation and anti-hallucination measures

  Memory System Improvements:
  - Fix memory isolation to retrieve only user-specific memories (Alice doesn't see Carol's data)
  - Implement proper namespace handling for cross-session vs session-specific searches
  - Add detailed logging for memory retrieval debugging and verification
  - Remove verbose success logs, keep only error logs for cleaner output

  Anti-Hallucination Enhancements:
  - Add tool output validation requirements to agent prompts
  - Implement timestamp fabrication prevention (use 2024-* format from backend)
  - Require tool attribution for all metrics and findings in reports
  - Add backend data alignment patterns for consistent data references
  - Update supervisor aggregation prompts to flag unverified claims

  Code Organization:
  - Extract hardcoded prompts from supervisor.py to external prompt files
  - Add missing session_id parameters to SaveInfrastructureTool and SaveInvestigationTool
  - Improve memory client namespace documentation and cross-session search logic
  - Reduce debug logging noise while maintaining error tracking

  Verification Complete:
  - Memory isolation working correctly (only user-specific data retrieval)
  - Cross-session memory usage properly configured for planning and investigations
  - Memory integration confirmed in report generation pipeline
  - Anti-hallucination measures prevent fabricated metrics and timestamps

* feat: organize utility scripts in dedicated scripts folder

  Script Organization:
  - Move manage_memories.py to scripts/ folder with updated import paths
  - Move configure_gateway.sh to scripts/ folder with corrected PROJECT_ROOT path
  - Copy user_config.yaml to scripts/ folder for self-contained script usage

  Path Fixes:
  - Update manage_memories.py to import the sre_agent module from the correct relative path
  - Fix .memory_id file path resolution for the new script location
  - Update configure_gateway.sh PROJECT_ROOT to point to the correct parent directory
  - Add fallback logic to find user_config.yaml in scripts/ or the project root

  Script Improvements:
  - Update help text and examples to use 'uv run python scripts/' syntax
  - Make manage_memories.py executable with proper permissions
  - Maintain backward compatibility for custom config file paths
  - Self-contained scripts folder with all required dependencies

  Verification:
  - All scripts work correctly from the new location
  - Memory management functions operate properly
  - Gateway configuration handles paths correctly
  - User preferences loading works from the scripts directory

* docs: update SSL certificate paths to use /opt/ssl standard location

  - Update README.md to reference /opt/ssl for SSL certificate paths
  - Update docs/demo-environment.md to use /opt/ssl paths
  - Clean up scripts/configure_gateway.sh SSL fallback paths
  - Remove duplicate and outdated SSL path references
  - Establish /opt/ssl as the standard SSL certificate location

  This ensures consistent SSL certificate management across all documentation and scripts, supporting the established /opt/ssl directory with proper ubuntu:ubuntu ownership.

* feat: enhance memory system with infrastructure parsing fix and user personalization analysis

  Infrastructure Memory Parsing Improvements:
  - Fix infrastructure memory parsing to handle both JSON and plain text formats
  - Convert plain text memories to structured InfrastructureKnowledge objects
  - Change warning logs to debug level for normal text-to-structure conversion
  - Ensure all infrastructure memories are now retrievable and usable

  User Personalization Documentation:
  - Add comprehensive memory system analysis comparing Alice vs Carol reports
  - Create docs/examples/ folder with real investigation reports demonstrating personalization
  - Document side-by-side communication differences based on user preferences
  - Show how the same technical incident produces different reports for different user roles

  Example Reports Added:
  - Alice's technical detailed investigation report (technical role preferences)
  - Carol's business-focused executive summary report (executive role preferences)
  - Memory system analysis with extensive side-by-side comparisons

  This demonstrates the memory system's ability to:
  - Maintain technical accuracy while adapting presentation style
  - Apply user-specific escalation procedures and communication channels
  - Build institutional knowledge about recurring infrastructure patterns
  - Personalize identical technical incidents for different organizational roles

* feat: enhance memory system with automatic pattern extraction and improved logging

  ## Memory System Enhancements
  - **Individual agent memory integration**: Every agent response now triggers automatic memory pattern extraction through on_agent_response() hooks
  - **Enhanced conversation logging**: Added detailed message breakdown showing USER/ASSISTANT/TOOL message counts and tool names called
  - **Fixed infrastructure extraction**: Resolved hardcoded agent name issues by using SREConstants for agent identification
  - **Comprehensive memory persistence**: All agent responses and tool executions stored as conversation memory with proper session tracking

  ## Tool Architecture Clarification
  - **Centralized memory access**: Confirmed only the supervisor agent has direct access to memory tools (retrieve_memory, save_*)
  - **Individual agent focus**: Individual agents have NO memory tools, only domain-specific tools (5 tools each for metrics, logs, k8s, runbooks)
  - **Automatic pattern recognition**: Memory capture happens automatically through hooks, not manual tool calls by individual agents

  ## Documentation Updates
  - **Updated memory-system.md**: Comprehensive design documentation reflecting current implementation
  - **Added example analyses**: Created flight-booking-analysis.md and api-response-time-analysis.md in docs/examples/
  - **Enhanced README.md**: Added memory system overview and personalized investigation examples
  - **Updated .gitignore**: Now ignores the entire reports/ folder instead of just .md files

  ## Implementation Improvements
  - **Event ID tracking**: All memory operations generate and log event IDs for verification
  - **Pattern extraction confirmation**: Logs confirm pattern extraction working for all agent types
  - **Memory save verification**: Comprehensive logging shows successful saves across all memory types
  - **Script enhancements**: manage_memories.py now handles duplicate removal and improved user management

* docs: enhance memory system documentation with planning agent memory usage examples

  - Add real agent.log snippets showing the planning agent retrieving and using memory context
  - Document XML-structured prompts for improved Claude model interaction
  - Explain JSON response format enforcement and infrastructure knowledge extraction
  - Add comprehensive logging and monitoring details
  - Document actor ID design for proper memory namespace isolation
  - Fix ASCII flow diagram alignment for better readability
  - Remove temporal framing and present features as current design facts

* docs: add AWS documentation links and clean up memory system documentation

  - Add hyperlink to Amazon Bedrock AgentCore Memory main documentation
  - Link to the Memory Getting Started Guide for the three memory strategies
  - Remove Legacy Pattern Recognition section from documentation (code remains)
  - Remove Error Handling and Fallbacks section to focus on core functionality
  - Keep implementation details in code while streamlining public documentation

* docs: reorganize memory-system.md to eliminate redundancies

  - Merged Memory Tool Architecture and Planning sections into a unified section
  - Consolidated all namespace/actor_id explanations in the architecture section
  - Combined pattern recognition and memory capture content
  - Created dedicated Agent Memory Integration section with examples
  - Removed ~15-20% redundant content while improving clarity
  - Improved document structure for better navigation

* style: apply ruff formatting and fix code style issues

  - Applied ruff auto-formatting to all Python files
  - Fixed 383 style issues automatically
  - Remaining issues require manual intervention:
    - 29 ruff errors (bare except, unused variables, etc.)
    - 61 mypy type errors (missing annotations, implicit Optional)
  - Verified memory system functionality matches documentation
  - Confirmed user personalization working correctly in reports

* docs: make benefits section more succinct in memory-system.md

  - Consolidated 12 bullet points into 5 focused benefits
  - Removed redundant three-category structure (Users/Teams/Operations)
  - Maintained all key value propositions while improving readability
  - Reduced section length by ~60% while preserving essential information

* feat: add comprehensive cleanup script with memory deletion

  - Added cleanup.sh script to delete all AWS resources (gateway, runtime, memory)
  - Integrated memory deletion using the bedrock_agentcore MemoryClient
  - Added proper error handling and graceful fallbacks
  - Updated execution order: servers → gateway → memory → runtime → local files
  - Added memory deletion to README.md cleanup instructions
  - Includes confirmation prompts and a --force option for automation

* fix: preserve .env, .venv, and reports in cleanup script

  - Modified cleanup script to only remove AWS-generated configuration files
  - Preserved .env files for development continuity
  - Preserved .venv directories to avoid reinstalling dependencies
  - Preserved the reports/ directory containing investigation history
  - Files removed: gateway URIs, tokens, agent ARNs, memory IDs only
  - Updated documentation to clarify preserved vs removed files

* fix: use correct bedrock-agentcore-control client for gateway operations

  - Changed the boto3 client from 'bedrock-agentcore' to 'bedrock-agentcore-control'
  - Fixes the 'list_gateways' method not found error during gateway deletion
  - Both gateway and runtime deletion now use the correct control plane client

* docs: add memory system initialization timing guidance

  - Added a note that the memory system takes 10-12 minutes to be ready
  - Added steps to check memory status with the list command after 10 minutes
  - Added instruction to run the update command again once memory is ready
  - Provides a clear workflow for memory system setup and prevents user confusion

* docs: comprehensive documentation update and cleanup

  - Remove unused root .env and .env.example files (not referenced by any code)
  - Update configuration.md with comprehensive config file documentation
  - Add configuration overview table with setup instructions and auto-generation info
  - Consolidate specialized-agents.md content into system-components.md
  - Update system-components.md with the complete AgentCore architecture
  - Add detailed sections for AgentCore Runtime, Gateway, and Memory primitives
  - Remove cli-reference.md (excessive documentation for limited use)
  - Update README.md to reference the configuration guide in the setup section
  - Clean up documentation links and organization

  The documentation now provides a clear, consolidated view of the system architecture and configuration with proper cross-references and setup guidance.

* feat: improve runtime deployment and invocation robustness

  - Increase deletion wait time to 150s for agent runtime cleanup
  - Add retry logic with exponential backoff for MCP rate limiting (429 errors)
  - Add session_id and user_id to agent state for memory retrieval
  - Filter out /ping endpoint logs to reduce noise
  - Increase boto3 read timeout to 5 minutes for long-running operations
  - Add clear error messages for agent name conflicts
  - Update README to clarify the virtual environment requirement for scripts
  - Fix session ID generation to meet the 33+ character requirement

  These changes improve reliability when deploying and invoking agents, especially under heavy load or with complex queries that take time.

* chore: remove accidentally committed reports folder

  Removed 130+ markdown report files from the reports/ directory that were accidentally committed. The .gitignore already includes reports/ to prevent future commits of these generated files.
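The actor_id case-preservation fix described in the commit log above (Carol stays Carol, so /sre/users/Carol and /sre/users/carol remain distinct memory namespaces) can be sketched as follows. `sanitize_actor_id` is a hypothetical stand-in for the project's actual sanitization helper, shown only to illustrate the rule:

```python
import re


def sanitize_actor_id(name: str) -> str:
    # Hypothetical sketch: replace characters outside a safe set, but never
    # lowercase, so "Carol" and "carol" map to distinct namespaces.
    return re.sub(r"[^A-Za-z0-9_-]", "_", name)


print(sanitize_actor_id("Carol"))          # Carol (case preserved)
print(sanitize_actor_id("carol@example"))  # carol_example
```

The key design point is what the function does *not* do: a naive `.lower()` normalization would silently merge two users' memory namespaces.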
494 lines
21 KiB
Python
#!/usr/bin/env python3

import asyncio
import logging
from functools import lru_cache
from pathlib import Path
from typing import Any, Dict, List

import yaml
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import BaseTool
from langgraph.prebuilt import create_react_agent

from .agent_state import AgentState
from .constants import AgentMetadata
from .llm_utils import create_llm_with_error_handling
from .memory import SREMemoryClient, create_conversation_memory_manager
from .prompt_loader import prompt_loader

# Logging will be configured by the main entry point
logger = logging.getLogger(__name__)

@lru_cache(maxsize=1)
def _load_agent_config() -> Dict[str, Any]:
    """Load agent configuration from YAML file."""
    config_path = Path(__file__).parent / "config" / "agent_config.yaml"
    with open(config_path, "r") as f:
        return yaml.safe_load(f)


def _create_llm(provider: str = "bedrock", **kwargs):
    """Create LLM instance with improved error handling."""
    return create_llm_with_error_handling(provider, **kwargs)

def _filter_tools_for_agent(
    all_tools: List[BaseTool], agent_name: str, config: Dict[str, Any]
) -> List[BaseTool]:
    """Filter tools based on agent configuration."""
    agent_config = config["agents"].get(agent_name, {})
    # Copy the list so we never mutate the lru_cached config in place;
    # extending the cached list would re-append global tools on every call.
    allowed_tools = list(agent_config.get("tools", []))

    # Also include global tools
    global_tools = config.get("global_tools", [])
    allowed_tools.extend(global_tools)

    # Filter tools based on their names
    filtered_tools = []
    for tool in all_tools:
        tool_name = getattr(tool, "name", "")
        # Remove any prefix from tool name for matching
        base_tool_name = tool_name.split("___")[-1] if "___" in tool_name else tool_name

        if base_tool_name in allowed_tools:
            filtered_tools.append(tool)

    logger.info(f"Agent {agent_name} has access to {len(filtered_tools)} tools")

    # Debug: Show which tools are being added to this agent
    logger.info(f"Agent {agent_name} tool names:")
    for tool in filtered_tools:
        tool_name = getattr(tool, "name", "unknown")
        tool_description = getattr(tool, "description", "No description")
        # Extract just the first line of description for cleaner logging
        description_first_line = (
            tool_description.split("\n")[0].strip()
            if tool_description
            else "No description"
        )
        logger.info(f" - {tool_name}: {description_first_line}")

    # Debug: Show what was allowed vs what was available
    logger.debug(f"Agent {agent_name} allowed tools: {allowed_tools}")
    all_tool_names = [getattr(tool, "name", "unknown") for tool in all_tools]
    logger.debug(f"Agent {agent_name} available tools: {all_tool_names}")

    return filtered_tools

class BaseAgentNode:
    """Base class for all agent nodes."""

    def __init__(
        self,
        name: str,
        description: str,
        tools: List[BaseTool],
        llm_provider: str = "bedrock",
        agent_metadata: AgentMetadata = None,
        **llm_kwargs,
    ):
        # Use agent_metadata if provided, otherwise fall back to individual parameters
        if agent_metadata:
            self.name = agent_metadata.display_name
            self.description = agent_metadata.description
            self.actor_id = agent_metadata.actor_id
            self.agent_type = agent_metadata.agent_type
        else:
            # Backward compatibility - use provided name/description
            self.name = name
            self.description = description
            self.actor_id = None  # No actor_id available in legacy mode
            self.agent_type = "unknown"

        self.tools = tools
        self.llm_provider = llm_provider

        logger.info(
            f"Initializing {self.name} with LLM provider: {llm_provider}, actor_id: {self.actor_id}, tools: {[tool.name for tool in tools]}"
        )
        self.llm = _create_llm(llm_provider, **llm_kwargs)

        # Create the react agent
        self.agent = create_react_agent(self.llm, self.tools)

    def _get_system_prompt(self) -> str:
        """Get system prompt for this agent using prompt loader."""
        try:
            # Determine agent type based on name
            agent_type = self._get_agent_type()

            # Use prompt loader to get complete prompt
            return prompt_loader.get_agent_prompt(
                agent_type=agent_type,
                agent_name=self.name,
                agent_description=self.description,
            )
        except Exception as e:
            logger.error(f"Error loading prompt for agent {self.name}: {e}")
            # Fallback to basic prompt if loading fails
            return f"You are the {self.name}. {self.description}"

    def _get_agent_type(self) -> str:
        """Determine agent type based on agent metadata or fallback to name parsing."""
        # Use agent_type from metadata if available
        if hasattr(self, "agent_type") and self.agent_type != "unknown":
            return self.agent_type

        # Fallback to name-based detection for backward compatibility
        name_lower = self.name.lower()

        if "kubernetes" in name_lower:
            return "kubernetes"
        elif "logs" in name_lower or "application" in name_lower:
            return "logs"
        elif "metrics" in name_lower or "performance" in name_lower:
            return "metrics"
        elif "runbooks" in name_lower or "operational" in name_lower:
            return "runbooks"
        else:
            logger.warning(f"Unknown agent type for agent: {self.name}")
            return "unknown"

    async def __call__(self, state: AgentState) -> Dict[str, Any]:
        """Process the current state and return updated state."""
        try:
            # Get the last user message
            messages = state["messages"]

            # Create a focused query for this agent
            agent_prompt = (
                f"As the {self.name}, help with: {state.get('current_query', '')}"
            )

            # If auto_approve_plan is set, add instruction to not ask follow-up questions
            if state.get("auto_approve_plan", False):
                agent_prompt += "\n\nIMPORTANT: Provide a complete, actionable response without asking any follow-up questions. Do not ask if the user wants more details or if they would like you to investigate further."

            # We'll collect all messages and the final response
            all_messages = []
            agent_response = ""

            # Initialize conversation memory manager for automatic message tracking
            conversation_manager = None
            user_id = state.get("user_id")
            if user_id:
                try:
                    memory_client = SREMemoryClient()
                    conversation_manager = create_conversation_memory_manager(
                        memory_client
                    )
                    logger.info(
                        f"{self.name} - Initialized conversation memory manager for user: {user_id}"
                    )
                except Exception as e:
                    logger.warning(
                        f"{self.name} - Failed to initialize conversation memory manager: {e}"
                    )
            else:
                logger.info(
                    f"{self.name} - No user_id found in state, skipping conversation memory"
                )

            # Add system prompt and user prompt
            system_message = SystemMessage(content=self._get_system_prompt())
            user_message = HumanMessage(content=agent_prompt)

            # Stream the agent execution to capture tool calls with timeout
            logger.info(f"{self.name} - Starting agent execution")

            try:
                # Add timeout to prevent infinite hanging (120 seconds)
                timeout_seconds = 120

                async def execute_agent():
                    nonlocal agent_response  # Allow writes to the outer variable
                    chunk_count = 0
                    logger.info(
                        f"{self.name} - Executing agent with {[system_message] + messages + [user_message]}"
                    )
                    async for chunk in self.agent.astream(
                        {"messages": [system_message] + messages + [user_message]}
                    ):
                        chunk_count += 1
                        logger.info(
                            f"{self.name} - Processing chunk #{chunk_count}: {list(chunk.keys())}"
                        )

                        if "agent" in chunk:
                            agent_step = chunk["agent"]
                            if "messages" in agent_step:
                                for msg in agent_step["messages"]:
                                    all_messages.append(msg)
                                    # Log tool calls being made
                                    if hasattr(msg, "tool_calls") and msg.tool_calls:
                                        logger.info(
                                            f"{self.name} - Agent making {len(msg.tool_calls)} tool calls"
                                        )
                                        for tc in msg.tool_calls:
                                            tool_name = tc.get("name", "unknown")
                                            tool_args = tc.get("args", {})
                                            tool_id = tc.get("id", "unknown")
                                            logger.info(
                                                f"{self.name} - Tool call: {tool_name} (id: {tool_id})"
                                            )
                                            logger.debug(
                                                f"{self.name} - Tool args: {tool_args}"
                                            )
                                    # Always capture the latest content from AIMessages
                                    if (
                                        hasattr(msg, "content")
                                        and hasattr(msg, "__class__")
                                        and "AIMessage" in str(msg.__class__)
                                    ):
                                        agent_response = msg.content
                                        logger.info(
                                            f"{self.name} - Agent response captured: {agent_response[:100]}... (total: {len(str(agent_response))} chars)"
                                        )

                        elif "tools" in chunk:
                            tools_step = chunk["tools"]
                            logger.info(
                                f"{self.name} - Tools chunk received, processing {len(tools_step.get('messages', []))} messages"
                            )
                            if "messages" in tools_step:
                                for msg in tools_step["messages"]:
                                    all_messages.append(msg)
                                    # Log tool executions
                                    if hasattr(msg, "tool_call_id"):
                                        tool_name = getattr(msg, "name", "unknown")
                                        tool_call_id = getattr(
                                            msg, "tool_call_id", "unknown"
                                        )
                                        content_preview = (
                                            str(msg.content)[:200]
                                            if hasattr(msg, "content")
                                            else "No content"
                                        )
                                        logger.info(
                                            f"{self.name} - Tool response received: {tool_name} (id: {tool_call_id}), content: {content_preview}..."
                                        )
                                        logger.debug(
                                            f"{self.name} - Full tool response: {msg.content if hasattr(msg, 'content') else 'No content'}"
                                        )

                logger.info(
                    f"{self.name} - Executing agent with timeout of {timeout_seconds} seconds"
                )
                await asyncio.wait_for(execute_agent(), timeout=timeout_seconds)
                logger.info(f"{self.name} - Agent execution completed")

            except asyncio.TimeoutError:
                logger.error(
                    f"{self.name} - Agent execution timed out after {timeout_seconds} seconds"
                )
                agent_response = f"Agent execution timed out after {timeout_seconds} seconds. The agent may be stuck on a tool call or LLM response."

            except Exception as e:
                logger.error(f"{self.name} - Agent execution failed: {e}")
                logger.exception("Full exception details:")
                agent_response = f"Agent execution failed: {str(e)}"

            # Debug: Check what we captured
            logger.info(
                f"{self.name} - Captured response length: {len(agent_response) if agent_response else 0}"
            )
            if agent_response:
                logger.info(f"{self.name} - Full response: {str(agent_response)}")

            # Store conversation messages in memory after agent response
            if conversation_manager and user_id and agent_response:
                try:
                    # Store the user query and agent response as conversation messages
                    messages_to_store = [
                        (agent_prompt, "USER"),
                        (
                            f"[Agent: {self.name}]\n{agent_response}",
                            "ASSISTANT",
                        ),  # Include agent name in message content
                    ]

                    # Also capture tool execution results as TOOL messages
                    tool_names = []
                    for msg in all_messages:
                        if hasattr(msg, "tool_call_id") and hasattr(msg, "content"):
                            tool_content = str(msg.content)[
                                :500
                            ]  # Limit tool message length
                            tool_name = getattr(msg, "name", "unknown")
                            tool_names.append(tool_name)
                            messages_to_store.append(
                                (
                                    f"[Agent: {self.name}] [Tool: {tool_name}]\n{tool_content}",
                                    "TOOL",
                                )
                            )

                    # Count message types
                    user_count = len([m for m in messages_to_store if m[1] == "USER"])
                    assistant_count = len(
                        [m for m in messages_to_store if m[1] == "ASSISTANT"]
                    )
                    tool_count = len([m for m in messages_to_store if m[1] == "TOOL"])

                    # Log message breakdown before storing
                    logger.info(
                        f"{self.name} - Message breakdown: {user_count} USER, {assistant_count} ASSISTANT, {tool_count} TOOL messages"
                    )
                    if tool_names:
                        logger.info(
                            f"{self.name} - Tools called: {', '.join(tool_names)}"
                        )
                    else:
                        logger.info(f"{self.name} - No tools called")

                    # Store the conversation batch
                    success = conversation_manager.store_conversation_batch(
                        messages=messages_to_store,
                        user_id=user_id,
                        session_id=state.get("session_id"),  # Use session_id from state
                        agent_name=self.name,
                    )

                    if success:
                        logger.info(
                            f"{self.name} - Successfully stored {len(messages_to_store)} conversation messages"
                        )
                    else:
                        logger.warning(
                            f"{self.name} - Failed to store conversation messages"
                        )

                except Exception as e:
                    logger.error(
                        f"{self.name} - Error storing conversation messages: {e}",
                        exc_info=True,
                    )

            # Process agent response for pattern extraction and memory capture
            if user_id and agent_response:
                try:
                    # Check if memory hooks are available through the memory client
                    from .memory.hooks import MemoryHookProvider

                    # Use the SREMemoryClient that's already imported at the top
                    memory_client = SREMemoryClient()
                    memory_hooks = MemoryHookProvider(memory_client)

                    # Create response object for hooks
                    response_obj = {
                        "content": agent_response,
                        "tool_calls": [
                            {
                                "name": getattr(msg, "name", "unknown"),
                                "content": str(getattr(msg, "content", "")),
                            }
                            for msg in all_messages
                            if hasattr(msg, "tool_call_id")
                        ],
                    }

                    # Call on_agent_response hook to extract patterns
                    memory_hooks.on_agent_response(
                        agent_name=self.name, response=response_obj, state=state
                    )

                    logger.info(
                        f"{self.name} - Processed agent response for memory pattern extraction"
                    )

                except Exception as e:
                    logger.warning(
                        f"{self.name} - Failed to process agent response for memory patterns: {e}"
                    )

            # Update state with streaming info
            return {
                "agent_results": {
                    **state.get("agent_results", {}),
                    self.name: agent_response,
                },
                "agents_invoked": state.get("agents_invoked", []) + [self.name],
                "messages": messages + all_messages,
                "metadata": {
                    **state.get("metadata", {}),
                    f"{self.name.replace(' ', '_')}_trace": all_messages,
                },
            }

        except Exception as e:
            logger.error(f"Error in {self.name}: {e}")
            return {
                "agent_results": {
                    **state.get("agent_results", {}),
                    self.name: f"Error: {str(e)}",
                },
                "agents_invoked": state.get("agents_invoked", []) + [self.name],
            }

def create_kubernetes_agent(
    tools: List[BaseTool], agent_metadata: AgentMetadata = None, **kwargs
) -> BaseAgentNode:
    """Create Kubernetes infrastructure agent."""
    config = _load_agent_config()
    filtered_tools = _filter_tools_for_agent(tools, "kubernetes_agent", config)

    return BaseAgentNode(
        name="Kubernetes Infrastructure Agent",  # Fallback for backward compatibility
        description="Manages Kubernetes cluster operations and monitoring",  # Fallback
        tools=filtered_tools,
        agent_metadata=agent_metadata,
        **kwargs,
    )


def create_logs_agent(
    tools: List[BaseTool], agent_metadata: AgentMetadata = None, **kwargs
) -> BaseAgentNode:
    """Create application logs agent."""
    config = _load_agent_config()
    filtered_tools = _filter_tools_for_agent(tools, "logs_agent", config)

    return BaseAgentNode(
        name="Application Logs Agent",  # Fallback for backward compatibility
        description="Handles application log analysis and searching",  # Fallback
        tools=filtered_tools,
        agent_metadata=agent_metadata,
        **kwargs,
    )


def create_metrics_agent(
    tools: List[BaseTool], agent_metadata: AgentMetadata = None, **kwargs
) -> BaseAgentNode:
    """Create performance metrics agent."""
    config = _load_agent_config()
    filtered_tools = _filter_tools_for_agent(tools, "metrics_agent", config)

    return BaseAgentNode(
        name="Performance Metrics Agent",  # Fallback for backward compatibility
        description="Provides application performance and resource metrics",  # Fallback
        tools=filtered_tools,
        agent_metadata=agent_metadata,
        **kwargs,
    )


def create_runbooks_agent(
    tools: List[BaseTool], agent_metadata: AgentMetadata = None, **kwargs
) -> BaseAgentNode:
    """Create operational runbooks agent."""
    config = _load_agent_config()
    filtered_tools = _filter_tools_for_agent(tools, "runbooks_agent", config)

    return BaseAgentNode(
        name="Operational Runbooks Agent",  # Fallback for backward compatibility
        description="Provides operational procedures and troubleshooting guides",  # Fallback
        tools=filtered_tools,
        agent_metadata=agent_metadata,
        **kwargs,
    )
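As a quick illustration of the gateway tool-name matching performed by `_filter_tools_for_agent` above, here is a minimal standalone sketch. The `SimpleNamespace` stand-ins and the tool names are hypothetical (real tools are langchain `BaseTool` instances), but the `"___"` suffix-matching rule is the same:

```python
from types import SimpleNamespace

# Stand-ins for gateway-prefixed MCP tools (hypothetical names).
tools = [
    SimpleNamespace(name="k8s___get_pod_status"),
    SimpleNamespace(name="logs___search_logs"),
]
allowed = ["get_pod_status"]

# Same rule as _filter_tools_for_agent: strip everything up to the last
# "___" before comparing against the allow-list.
kept = [
    t for t in tools
    if (t.name.split("___")[-1] if "___" in t.name else t.name) in allowed
]
print([t.name for t in kept])  # ['k8s___get_pod_status']
```

The prefix is kept on the tool that survives filtering; only the comparison uses the stripped base name, so the agent still invokes the tool by its full gateway name.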