mirror of
https://github.com/awslabs/amazon-bedrock-agentcore-samples.git
synced 2025-09-08 20:50:46 +00:00
* feat: integrate long-term memory system into SRE agent
  - Add AgentCore Memory integration with three memory strategies:
    * User preferences (escalation, notification, workflow preferences)
    * Infrastructure knowledge (dependencies, patterns, baselines)
    * Investigation summaries (timeline, actions, findings)
  - Implement memory tools for save/retrieve operations
  - Add automatic memory capture through hooks and pattern recognition
  - Extend agent state to support memory context
  - Integrate memory-aware planning in supervisor agent
  - Add comprehensive test coverage for memory functionality
  - Create detailed documentation with usage examples

  This transforms the SRE agent from stateless to learning assistant that becomes more valuable over time by remembering user preferences, infrastructure patterns, and investigation outcomes. Addresses issue #164

* feat: environment variable config, agent routing fixes, and project organization
  - Move USER_ID/SESSION_ID from metadata parsing to environment variables
  - Add .memory_id to .gitignore for local memory state
  - Update .gitignore to use .scratchpad/ folder instead of .scratchpad.md
  - Fix agent routing issues with supervisor prompt and graph node naming
  - Add conversation memory tracking for all agents and supervisor
  - Improve agent metadata system with centralized constants
  - Add comprehensive logging and debugging for agent tool access
  - Update deployment script to pass user_id/session_id in payload
  - Create .scratchpad/ folder structure for better project organization

* feat: enhance SRE agent with automatic report archiving and error fixes
  - Add automatic archiving system for reports by date
  - Include user_id in report filenames for better organization
  - Fix Pydantic validation error with string-to-list conversion for investigation steps
  - Add content length truncation for memory storage to prevent validation errors
  - Remove status line from report output for cleaner formatting
  - Implement date-based folder organization (YYYY-MM-DD format)
  - Add memory content length limits configuration in constants

  Key improvements:
  - Reports now auto-archive old files when saving new ones
  - User-specific filenames: query_user_id_UserName_YYYYMMDD_HHMMSS.md
  - Robust error handling for memory content length limits
  - Backward compatibility with existing filename formats

* feat: fix memory retrieval system for cross-session searches and user personalization

  Key fixes and improvements:
  - Fix case preservation in actor_id sanitization (Carol remains Carol, not carol)
  - Enable cross-session memory searches for infrastructure and investigation memories
  - Add XML parsing support for investigation summaries stored in XML format
  - Enhance user preference integration throughout the system
  - Add comprehensive debug logging for memory retrieval processes
  - Update prompts to support user-specific communication styles and preferences

  Memory system now properly:
  - Preserves user case in memory namespaces (/sre/users/Carol vs /sre/users/carol)
  - Searches across all sessions for planning context vs session-specific for current state
  - Parses both JSON and XML formatted investigation memories
  - Adapts investigation approach based on user preferences and historical patterns
  - Provides context-aware planning using infrastructure knowledge and past investigations

* feat: enhance SRE agent with user-specific memory isolation and anti-hallucination measures

  Memory System Improvements:
  - Fix memory isolation to retrieve only user-specific memories (Alice doesn't see Carol's data)
  - Implement proper namespace handling for cross-session vs session-specific searches
  - Add detailed logging for memory retrieval debugging and verification
  - Remove verbose success logs, keep only error logs for cleaner output

  Anti-Hallucination Enhancements:
  - Add tool output validation requirements to agent prompts
  - Implement timestamp fabrication prevention (use 2024-* format from backend)
  - Require tool attribution for all metrics and findings in reports
  - Add backend data alignment patterns for consistent data references
  - Update supervisor aggregation prompts to flag unverified claims

  Code Organization:
  - Extract hardcoded prompts from supervisor.py to external prompt files
  - Add missing session_id parameters to SaveInfrastructureTool and SaveInvestigationTool
  - Improve memory client namespace documentation and cross-session search logic
  - Reduce debug logging noise while maintaining error tracking

  Verification Complete:
  - Memory isolation working correctly (only user-specific data retrieval)
  - Cross-session memory usage properly configured for planning and investigations
  - Memory integration confirmed in report generation pipeline
  - Anti-hallucination measures prevent fabricated metrics and timestamps

* feat: organize utility scripts in dedicated scripts folder

  Script Organization:
  - Move manage_memories.py to scripts/ folder with updated import paths
  - Move configure_gateway.sh to scripts/ folder with corrected PROJECT_ROOT path
  - Copy user_config.yaml to scripts/ folder for self-contained script usage

  Path Fixes:
  - Update manage_memories.py to import sre_agent module from correct relative path
  - Fix .memory_id file path resolution for new script location
  - Update configure_gateway.sh PROJECT_ROOT to point to correct parent directory
  - Add fallback logic to find user_config.yaml in scripts/ or project root

  Script Improvements:
  - Update help text and examples to use 'uv run python scripts/' syntax
  - Make manage_memories.py executable with proper permissions
  - Maintain backward compatibility for custom config file paths
  - Self-contained scripts folder with all required dependencies

  Verification:
  - All scripts work correctly from new location
  - Memory management functions operate properly
  - Gateway configuration handles paths correctly
  - User preferences loading works from scripts directory

* docs: update SSL certificate paths to use /opt/ssl standard location
  - Update README.md to reference /opt/ssl for SSL certificate paths
  - Update docs/demo-environment.md to use /opt/ssl paths
  - Clean up scripts/configure_gateway.sh SSL fallback paths
  - Remove duplicate and outdated SSL path references
  - Establish /opt/ssl as the standard SSL certificate location

  This ensures consistent SSL certificate management across all documentation and scripts, supporting the established /opt/ssl directory with proper ubuntu:ubuntu ownership.

* feat: enhance memory system with infrastructure parsing fix and user personalization analysis

  Infrastructure Memory Parsing Improvements:
  - Fix infrastructure memory parsing to handle both JSON and plain text formats
  - Convert plain text memories to structured InfrastructureKnowledge objects
  - Change warning logs to debug level for normal text-to-structure conversion
  - Ensure all infrastructure memories are now retrievable and usable

  User Personalization Documentation:
  - Add comprehensive memory system analysis comparing Alice vs Carol reports
  - Create docs/examples/ folder with real investigation reports demonstrating personalization
  - Document side-by-side communication differences based on user preferences
  - Show how same technical incident produces different reports for different user roles

  Example Reports Added:
  - Alice's technical detailed investigation report (technical role preferences)
  - Carol's business-focused executive summary report (executive role preferences)
  - Memory system analysis with extensive side-by-side comparisons

  This demonstrates the memory system's ability to:
  - Maintain technical accuracy while adapting presentation style
  - Apply user-specific escalation procedures and communication channels
  - Build institutional knowledge about recurring infrastructure patterns
  - Personalize identical technical incidents for different organizational roles

* feat: enhance memory system with automatic pattern extraction and improved logging

  ## Memory System Enhancements
  - **Individual agent memory integration**: Every agent response now triggers automatic memory pattern extraction through on_agent_response() hooks
  - **Enhanced conversation logging**: Added detailed message breakdown showing USER/ASSISTANT/TOOL message counts and tool names called
  - **Fixed infrastructure extraction**: Resolved hardcoded agent name issues by using SREConstants for agent identification
  - **Comprehensive memory persistence**: All agent responses and tool executions stored as conversation memory with proper session tracking

  ## Tool Architecture Clarification
  - **Centralized memory access**: Confirmed only supervisor agent has direct access to memory tools (retrieve_memory, save_*)
  - **Individual agent focus**: Individual agents have NO memory tools, only domain-specific tools (5 tools each for metrics, logs, k8s, runbooks)
  - **Automatic pattern recognition**: Memory capture happens automatically through hooks, not manual tool calls by individual agents

  ## Documentation Updates
  - **Updated memory-system.md**: Comprehensive design documentation reflecting current implementation
  - **Added example analyses**: Created flight-booking-analysis.md and api-response-time-analysis.md in docs/examples/
  - **Enhanced README.md**: Added memory system overview and personalized investigation examples
  - **Updated .gitignore**: Now ignores entire reports/ folder instead of just .md files

  ## Implementation Improvements
  - **Event ID tracking**: All memory operations generate and log event IDs for verification
  - **Pattern extraction confirmation**: Logs confirm pattern extraction working for all agent types
  - **Memory save verification**: Comprehensive logging shows successful saves across all memory types
  - **Script enhancements**: manage_memories.py now handles duplicate removal and improved user management

* docs: enhance memory system documentation with planning agent memory usage examples
  - Add real agent.log snippets showing planning agent retrieving and using memory context
  - Document XML-structured prompts for improved Claude model interaction
  - Explain JSON response format enforcement and infrastructure knowledge extraction
  - Add comprehensive logging and monitoring details
  - Document actor ID design for proper memory namespace isolation
  - Fix ASCII flow diagram alignment for better readability
  - Remove temporal framing and present features as current design facts

* docs: add AWS documentation links and clean up memory system documentation
  - Add hyperlink to Amazon Bedrock AgentCore Memory main documentation
  - Link to Memory Getting Started Guide for the three memory strategies
  - Remove Legacy Pattern Recognition section from documentation (code remains)
  - Remove Error Handling and Fallbacks section to focus on core functionality
  - Keep implementation details in code while streamlining public documentation

* docs: reorganize memory-system.md to eliminate redundancies
  - Merged Memory Tool Architecture and Planning sections into unified section
  - Consolidated all namespace/actor_id explanations in architecture section
  - Combined pattern recognition and memory capture content
  - Created dedicated Agent Memory Integration section with examples
  - Removed ~15-20% redundant content while improving clarity
  - Improved document structure for better navigation

* style: apply ruff formatting and fix code style issues
  - Applied ruff auto-formatting to all Python files
  - Fixed 383 style issues automatically
  - Remaining issues require manual intervention:
    - 29 ruff errors (bare except, unused variables, etc.)
    - 61 mypy type errors (missing annotations, implicit Optional)
  - Verified memory system functionality matches documentation
  - Confirmed user personalization working correctly in reports

* docs: make benefits section more succinct in memory-system.md
  - Consolidated 12 bullet points into 5 focused benefits
  - Removed redundant three-category structure (Users/Teams/Operations)
  - Maintained all key value propositions while improving readability
  - Reduced section length by ~60% while preserving essential information

* feat: add comprehensive cleanup script with memory deletion
  - Added cleanup.sh script to delete all AWS resources (gateway, runtime, memory)
  - Integrated memory deletion using bedrock_agentcore MemoryClient
  - Added proper error handling and graceful fallbacks
  - Updated execution order: servers → gateway → memory → runtime → local files
  - Added memory deletion to README.md cleanup instructions
  - Includes confirmation prompts and --force option for automation

* fix: preserve .env, .venv, and reports in cleanup script
  - Modified cleanup script to only remove AWS-generated configuration files
  - Preserved .env files for development continuity
  - Preserved .venv directories to avoid reinstalling dependencies
  - Preserved reports/ directory containing investigation history
  - Files removed: gateway URIs, tokens, agent ARNs, memory IDs only
  - Updated documentation to clarify preserved vs removed files

* fix: use correct bedrock-agentcore-control client for gateway operations
  - Changed boto3 client from 'bedrock-agentcore' to 'bedrock-agentcore-control'
  - Fixes 'list_gateways' method not found error during gateway deletion
  - Both gateway and runtime deletion now use the correct control plane client

* docs: add memory system initialization timing guidance
  - Added note that memory system takes 10-12 minutes to be ready
  - Added steps to check memory status with list command after 10 minutes
  - Added instruction to run update command again once memory is ready
  - Provides clear workflow for memory system setup and prevents user confusion

* docs: comprehensive documentation update and cleanup
  - Remove unused root .env and .env.example files (not referenced by any code)
  - Update configuration.md with comprehensive config file documentation
  - Add configuration overview table with setup instructions and auto-generation info
  - Consolidate specialized-agents.md content into system-components.md
  - Update system-components.md with complete AgentCore architecture
  - Add detailed sections for AgentCore Runtime, Gateway, and Memory primitives
  - Remove cli-reference.md (excessive documentation for limited use)
  - Update README.md to reference configuration guide in setup section
  - Clean up documentation links and organization

  The documentation now provides a clear, consolidated view of the system architecture and configuration with proper cross-references and setup guidance.

* feat: improve runtime deployment and invocation robustness
  - Increase deletion wait time to 150s for agent runtime cleanup
  - Add retry logic with exponential backoff for MCP rate limiting (429 errors)
  - Add session_id and user_id to agent state for memory retrieval
  - Filter out /ping endpoint logs to reduce noise
  - Increase boto3 read timeout to 5 minutes for long-running operations
  - Add clear error messages for agent name conflicts
  - Update README to clarify virtual environment requirement for scripts
  - Fix session ID generation to meet 33+ character requirement

  These changes improve reliability when deploying and invoking agents, especially under heavy load or with complex queries that take time.

* chore: remove accidentally committed reports folder

  Removed 130+ markdown report files from the reports/ directory that were accidentally committed. The .gitignore already includes reports/ to prevent future commits of these generated files.
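The case-preserving namespace behavior described in the memory-retrieval commits above can be illustrated with a short sketch. This is illustrative only; sanitize_actor_id and user_namespace are hypothetical helpers written for this note, not code from the repository, and the character filter is an assumption.

# Illustrative sketch: case-preserving actor_id sanitization for per-user
# memory namespaces (hypothetical helpers, not this repository's code).
import re

def sanitize_actor_id(user_id: str) -> str:
    # Preserve the original case ("Carol" stays "Carol"); replace characters
    # that would be unsafe in a namespace path segment.
    return re.sub(r"[^A-Za-z0-9_-]", "_", user_id)

def user_namespace(user_id: str) -> str:
    # e.g. "Carol" -> "/sre/users/Carol", matching the namespaces named in the commit log
    return f"/sre/users/{sanitize_actor_id(user_id)}"

assert user_namespace("Carol") == "/sre/users/Carol"  # case preserved, not lowercased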
234 lines
7.7 KiB
Python
Executable File
#!/usr/bin/env python3
"""
SRE Report Verification Tool

This tool compares SRE investigation reports against ground truth data to identify
hallucinations and verify the accuracy of claims made in the reports.
"""

import argparse
import logging
import os
import sys
from pathlib import Path

import anthropic
from dotenv import load_dotenv

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s,p%(process)s,{%(filename)s:%(lineno)d},%(levelname)s,%(message)s",
)

logger = logging.getLogger(__name__)

# Load environment variables
load_dotenv(Path(__file__).parent / "sre_agent" / ".env")


def _get_anthropic_api_key() -> str:
    """Get Anthropic API key from environment variables."""
    api_key = os.getenv("ANTHROPIC_API_KEY")
    if not api_key:
        raise ValueError(
            "ANTHROPIC_API_KEY environment variable is required for verification"
        )
    return api_key


def _read_file(file_path: str) -> str:
    """Read content from a file."""
    try:
        with open(file_path, "r", encoding="utf-8") as f:
            return f.read()
    except FileNotFoundError:
        logger.error(f"File not found: {file_path}")
        sys.exit(1)
    except Exception as e:
        logger.error(f"Error reading file {file_path}: {e}")
        sys.exit(1)


def _create_verification_prompt(report_content: str, ground_truth_content: str) -> str:
    """Create the verification prompt for Claude."""
    return f"""<task>
You are an expert SRE data verification specialist. Your task is to verify the accuracy of an SRE investigation report by comparing it against ground truth data.

<report>
{report_content}
</report>

<ground_truth_data>
{ground_truth_content}
</ground_truth_data>
</task>

<critical_context>
IMPORTANT: The ground truth data contains a comprehensive dataset representing the ENTIRE infrastructure state, including:
- Multiple services (some healthy, some with issues)
- Historical data across different time periods
- Various pod states (running, failed, crashed, etc.)
- Mixed performance metrics (good and bad)
- Different log patterns and error conditions

DO NOT expect every entity in the report to have problems in the ground truth. The ground truth shows the complete picture, so:
- Some services may be healthy while others have issues
- Some pods may be running fine while others are failing
- Performance metrics may show both good and bad patterns
- Only verify that the SPECIFIC claims in the report match what's actually in the data

Focus on accuracy of SPECIFIC claims made in the report, not whether the overall system appears healthy or unhealthy.
</critical_context>

<instructions>
Carefully analyze the SRE investigation report and compare ALL specific claims against the ground truth data. Focus on verifying:

1. **Pod Names** - Any pod names mentioned (e.g., api-service-xyz, database-pod-abc)
2. **Application Names** - Service names referenced
3. **Timestamps** - Specific times mentioned in logs or metrics
4. **Log Entries** - Exact log messages quoted
5. **Metrics Values** - Performance numbers, response times, error rates
6. **Resource Usage** - CPU, memory percentages
7. **Error Counts** - Number of errors or occurrences
8. **Status Information** - Pod states, service health

For each entity mentioned in the report:
- Check if it exists in the ground truth data
- Verify if the details (timestamps, values, status) match exactly
- Identify any fabricated or hallucinated information
- Remember: The absence of problems for a service in the ground truth does NOT invalidate the report unless the report specifically claims that service has issues

<output_format>
If you find hallucinations, respond with:

# ❌ HALLUCINATIONS DETECTED

## Fabricated Claims:
- **[Entity Type]**: [Specific claim]
- **Report Claims**: [What the report states]
- **Ground Truth**: [What the data actually shows or "NOT FOUND"]
- **Verification**: FABRICATED/INACCURATE

## Additional Issues:
[Any other accuracy problems found]

---

If NO hallucinations are found, respond with:

# ✅ REPORT VERIFIED ACCURATE

## Important Entities Found:
- **[Entity Type]**: [Entity name/value]
- **Ground Truth Reference**: Line [X]: "[exact text from ground truth]"
- **Report Context**: [How it was used in the report]

## Verification Summary:
All claims in the report have been verified against the ground truth data. No fabricated information detected.
</output_format>

Be extremely thorough and precise. SRE operations require absolute accuracy - even small discrepancies in timestamps, pod names, or metric values are critical to identify.
</instructions>"""


def _verify_report_with_claude(
    report_content: str, ground_truth_content: str, api_key: str
) -> str:
    """Use Claude to verify the report against ground truth data."""
    try:
        client = anthropic.Anthropic(api_key=api_key)

        prompt = _create_verification_prompt(report_content, ground_truth_content)

        logger.info("Sending verification request to Claude 4 Sonnet...")

        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            temperature=0.1,  # Low temperature for consistent, accurate analysis
            messages=[{"role": "user", "content": prompt}],
        )

        return response.content[0].text

    except Exception as e:
        logger.error(f"Error calling Claude API: {e}")
        sys.exit(1)


def main():
    """Main function for report verification."""
    parser = argparse.ArgumentParser(
        description="Verify SRE investigation reports against ground truth data"
    )
    parser.add_argument(
        "report_path", help="Path to the SRE investigation report (markdown file)"
    )
    parser.add_argument(
        "--data-path",
        default="backend/data/all_data_dump.txt",
        help="Path to the ground truth data file (default: backend/data/all_data_dump.txt)",
    )
    parser.add_argument(
        "--output", help="Optional output file to save verification results"
    )

    args = parser.parse_args()

    # Validate input files
    if not os.path.exists(args.report_path):
        logger.error(f"Report file not found: {args.report_path}")
        sys.exit(1)

    if not os.path.exists(args.data_path):
        logger.error(f"Ground truth data file not found: {args.data_path}")
        sys.exit(1)

    # Get API key
    try:
        api_key = _get_anthropic_api_key()
    except ValueError as e:
        logger.error(f"API key error: {e}")
        sys.exit(1)

    # Read files
    logger.info(f"Reading report: {args.report_path}")
    report_content = _read_file(args.report_path)

    logger.info(f"Reading ground truth data: {args.data_path}")
    ground_truth_content = _read_file(args.data_path)

    # Verify report
    logger.info("Starting verification process...")
    verification_result = _verify_report_with_claude(
        report_content, ground_truth_content, api_key
    )

    # Output results
    print("\n" + "=" * 80)
    print("SRE REPORT VERIFICATION RESULTS")
    print("=" * 80)
    print(verification_result)
    print("=" * 80)

    # Save to output file if specified
    if args.output:
        try:
            with open(args.output, "w", encoding="utf-8") as f:
                f.write("# SRE Report Verification Results\n\n")
                f.write(f"**Report**: {args.report_path}\n")
                f.write(f"**Ground Truth**: {args.data_path}\n")
                f.write(f"**Verified on**: {Path().cwd()}\n\n")
                f.write("---\n\n")
                f.write(verification_result)
                logger.info(f"Verification results saved to: {args.output}")
        except Exception as e:
            logger.error(f"Error saving output file: {e}")

    logger.info("Verification complete!")


if __name__ == "__main__":
    main()
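The script is normally driven through its argparse CLI (a positional report_path plus --data-path and --output). For reference, a minimal programmatic sketch using the helpers defined above; the module name verify_report and the report path are assumptions for illustration, not taken from this page.

# Minimal usage sketch (assumes the functions above are importable; the module
# name "verify_report" and the report path are placeholders).
from verify_report import _get_anthropic_api_key, _read_file, _verify_report_with_claude

api_key = _get_anthropic_api_key()
report = _read_file("reports/example_investigation.md")      # placeholder report path
ground_truth = _read_file("backend/data/all_data_dump.txt")  # default ground truth dump
print(_verify_report_with_claude(report, ground_truth, api_key))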