Memory is a critical component of intelligence. While Large Language Models (LLMs) have impressive capabilities, they lack persistent memory across conversations. Amazon Bedrock AgentCore Memory addresses this limitation by providing a managed service that enables AI agents to maintain context over time, remember important facts, and deliver consistent, personalized experiences.
## Memory Types

### Short-Term Memory
Immediate conversation context and session-based information that provides continuity within a single interaction or closely related sessions.
### Long-Term Memory
Persistent information extracted and stored across multiple conversations, including facts, preferences, and summaries that enable personalized experiences over time.
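The two tiers map to different access patterns: short-term memory is read back as raw conversational turns within a session, while long-term memory is queried as extracted records across sessions. The following is a minimal sketch assuming the `MemoryClient` helper from the `bedrock-agentcore` Python SDK; method names such as `get_last_k_turns` and `retrieve_memories`, and all IDs and namespaces, are illustrative and should be checked against the installed SDK and your memory configuration.

```python
# Sketch: short-term vs. long-term reads. Assumes the MemoryClient helper
# from the bedrock-agentcore Python SDK; verify method names against the
# SDK version you have installed. IDs below are hypothetical placeholders.
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-west-2")

MEMORY_ID = "your-memory-id"   # hypothetical placeholder
ACTOR_ID = "user-123"          # hypothetical placeholder
SESSION_ID = "session-456"     # hypothetical placeholder

# Short-term: recent raw turns from the current session, exactly as stored.
recent_turns = client.get_last_k_turns(
    memory_id=MEMORY_ID,
    actor_id=ACTOR_ID,
    session_id=SESSION_ID,
    k=5,
)

# Long-term: extracted facts, preferences, or summaries, queried across sessions.
long_term = client.retrieve_memories(
    memory_id=MEMORY_ID,
    namespace=f"/facts/{ACTOR_ID}",  # namespace layout depends on your strategies
    query="What does this user prefer?",
)
```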
## Memory Architecture
1. **Conversation Storage**: Complete conversations are saved in raw form for immediate access
2. **Strategy Processing**: Configured strategies automatically analyze conversations in the background
3. **Information Extraction**: Important data is extracted based on strategy types (typically takes ~1 minute)
4. **Organized Storage**: Extracted information is stored in structured namespaces for efficient retrieval
5. **Semantic Retrieval**: Natural language queries can retrieve relevant memories using vector similarity (see the sketch after this list)
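As a concrete illustration of steps 1–5, the sketch below saves a conversational turn and later retrieves extracted memories with a natural-language query. It assumes the same `MemoryClient` helper and a memory resource already configured with a semantic strategy writing to a `/facts/{actorId}` namespace; because extraction runs asynchronously (step 3), retrieval typically happens in a later turn or session rather than immediately.

```python
# Sketch of the storage -> extraction -> retrieval flow. Assumes a memory
# resource with a semantic strategy; names and IDs are hypothetical.
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-west-2")
MEMORY_ID, ACTOR_ID, SESSION_ID = "your-memory-id", "user-123", "session-456"

# Step 1: store the raw conversation as an event.
client.create_event(
    memory_id=MEMORY_ID,
    actor_id=ACTOR_ID,
    session_id=SESSION_ID,
    messages=[
        ("I'm vegetarian and allergic to peanuts.", "USER"),
        ("Got it, I'll keep that in mind for recommendations.", "ASSISTANT"),
    ],
)

# Steps 2-4 happen in the background: configured strategies extract facts
# into their namespaces, typically within about a minute.

# Step 5: later, retrieve relevant memories with a natural-language query.
results = client.retrieve_memories(
    memory_id=MEMORY_ID,
    namespace=f"/facts/{ACTOR_ID}",
    query="dietary restrictions",
)
for record in results:
    print(record)
```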
## Memory Strategy Types
AgentCore Memory supports four strategy types, which are configured on the memory resource (see the sketch after this list):
- **Semantic Memory**: Stores factual information using vector embeddings for similarity search
- **Summary Memory**: Creates and maintains conversation summaries for context preservation
- **User Preference Memory**: Tracks user-specific preferences and settings
- **Custom Memory**: Allows customization of extraction and consolidation logic
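Strategies are declared when the memory resource is created. The sketch below shows one possible configuration, again assuming the `MemoryClient` helper from the `bedrock-agentcore` Python SDK; the strategy dictionary keys mirror the strategy types listed above, and all names and namespace templates are illustrative and should be checked against the current API documentation.

```python
# Sketch: creating a memory resource with three built-in strategies.
# Assumes MemoryClient from the bedrock-agentcore Python SDK; strategy keys
# and namespace templates should be verified against the current API docs.
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-west-2")

memory = client.create_memory_and_wait(
    name="CustomerSupportMemory",  # illustrative name
    strategies=[
        {"semanticMemoryStrategy": {
            "name": "facts",
            "namespaces": ["/facts/{actorId}"],
        }},
        {"summaryMemoryStrategy": {
            "name": "summaries",
            "namespaces": ["/summaries/{actorId}/{sessionId}"],
        }},
        {"userPreferenceMemoryStrategy": {
            "name": "preferences",
            "namespaces": ["/preferences/{actorId}"],
        }},
    ],
)

# The returned resource metadata includes the memory ID used by later calls;
# the exact response shape may vary by SDK version.
memory_id = memory.get("id")
```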
## Getting Started
Explore these memory capabilities through the tutorials below:
- **[Short-Term Memory](./01-short-term-memory/)**: Learn about session-based memory and immediate context management
- **[Long-Term Memory](./02-long-term-memory/)**: Understand persistent memory strategies and cross-conversation continuity