Beyond Vector Similarity
Standard RAG systems rely heavily on vector similarity search (k-NN). While fast, this approach often fails in complex scenarios:
- The “Lost in the Middle” Phenomenon: Retrieving too many loosely related chunks buries the relevant details and confuses the LLM.
- Lack of Context: A retrieved chunk might be factually correct but irrelevant to the nuance of the current conversation.
Instead of a flat similarity search, retrieval happens in two stages (a sketch follows this list):
- Scene Activation: First, the system identifies which MemScene (topic) is currently active.
- Agentic Walk: The system then traverses the connections within that MemScene to find the specific MemCells that answer the query.
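To make the two stages concrete, here is a minimal sketch assuming a simple in-memory graph of scenes and cells. The `MemCell`, `MemScene`, `activate_scene`, and `agentic_walk` names are illustrative stand-ins (keyword overlap in place of embedding similarity, breadth-first link-following in place of an LLM-guided walk), not a documented API.

```python
from dataclasses import dataclass, field

@dataclass
class MemCell:
    """A single unit of memory: a fact, a decision, or an episode."""
    text: str
    kind: str                                        # e.g. "fact", "decision", "episode"
    links: list[str] = field(default_factory=list)   # ids of connected cells

@dataclass
class MemScene:
    """A topic-level container that groups related MemCells."""
    topic: str
    cells: dict[str, MemCell] = field(default_factory=dict)

def activate_scene(scenes: list[MemScene], query: str) -> MemScene:
    """Stage 1 (Scene Activation): pick the scene whose topic best overlaps the query."""
    def overlap(scene: MemScene) -> int:
        return len(set(scene.topic.lower().split()) & set(query.lower().split()))
    return max(scenes, key=overlap)

def agentic_walk(scene: MemScene, start_id: str, is_relevant, max_hops: int = 3) -> list[MemCell]:
    """Stage 2 (Agentic Walk): follow links between cells, keeping only the relevant ones."""
    visited, frontier, results = set(), [start_id], []
    for _ in range(max_hops):
        next_frontier = []
        for cell_id in frontier:
            if cell_id in visited or cell_id not in scene.cells:
                continue
            visited.add(cell_id)
            cell = scene.cells[cell_id]
            if is_relevant(cell):
                results.append(cell)
            next_frontier.extend(cell.links)
        frontier = next_frontier
    return results
```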
The Reconstruction Process
When a user prompt arrives (e.g., “Draft an email update for the project we discussed last week”):
1. Intent Analysis
The system determines that the user wants to write an email (Task) about “the project” (Topic), referencing “last week’s discussion” (Time/Constraint). A toy sketch of this parsing step follows.
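This is a minimal sketch assuming simple keyword rules; in practice the parsing would be delegated to an LLM, and the `Intent` dataclass and `parse_intent` function are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    task: str        # what the user wants to do, e.g. "write_email"
    topic: str       # what it is about, e.g. "project"
    constraint: str  # temporal or other scoping, e.g. "last_week"

def parse_intent(prompt: str) -> Intent:
    """Toy keyword rules standing in for an LLM-based intent parser."""
    lowered = prompt.lower()
    task = "write_email" if "email" in lowered else "answer_question"
    topic = "project" if "project" in lowered else "general"
    constraint = "last_week" if "last week" in lowered else "none"
    return Intent(task, topic, constraint)

print(parse_intent("Draft an email update for the project we discussed last week"))
# Intent(task='write_email', topic='project', constraint='last_week')
```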
2. Scene Loading
The relevant “Project” MemScene is loaded.
3. Context Synthesis
The system doesn’t just return the raw transcript of last week’s meeting. It reconstructs the memory (a sketch of this filtering step follows the list):
- It pulls the Decisions Made (Fact).
- It pulls the Action Items assigned to the user (Fact).
- It ignores the Small Talk about the weather (Irrelevant Episode).
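Continuing the sketch above (and reusing the illustrative `MemCell` and `Intent` types), context synthesis can be approximated by filtering cells on assumed kind tags and formatting the survivors into a prompt; the "decision", "action_item", and "episode" labels are assumptions, not the system's actual schema.

```python
def synthesize_context(cells: list[MemCell], intent: Intent) -> str:
    """Keep decisions and action items; drop irrelevant episodes such as small talk."""
    relevant_kinds = {"decision", "action_item"}          # assumed tags on MemCells
    kept = [c for c in cells if c.kind in relevant_kinds]
    header = f"Context for task '{intent.task}' on topic '{intent.topic}':"
    return "\n".join([header] + [f"- ({c.kind}) {c.text}" for c in kept])

cells = [
    MemCell("Switch the backend to PostgreSQL", "decision"),
    MemCell("Alice to draft the migration plan", "action_item"),
    MemCell("Chatted about the rainy weekend", "episode"),   # filtered out
]
print(synthesize_context(cells, Intent("write_email", "project", "last_week")))
```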
Adaptive Retrieval
Reconstructive Recollection is adaptive: the retrieval strategy changes with the type of query (a routing sketch follows this list).
- For factual queries (“What is my API key?”), it performs a precise lookup of Atomic Facts.
- For creative tasks (“Brainstorm ideas based on my previous notes”), it retrieves broader episodic narratives to inspire the model.
- For reasoning tasks (“Why did we decide to switch databases?”), it traces the causal chain across multiple MemCells to explain the history of a decision.
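A hypothetical routing layer on top of these three modes could look like the sketch below; the mode names and keyword heuristics are illustrative only, and a real system might use an LLM or a trained classifier to make this choice.

```python
import re

def route_query(query: str) -> str:
    """Toy router choosing a retrieval mode based on surface cues in the query."""
    q = query.lower()
    if re.search(r"\b(why|reason|decide|decided)\b", q):
        return "trace_causal_chain"      # reasoning: follow links across MemCells
    if re.search(r"\b(brainstorm|ideas?|inspire)\b", q):
        return "retrieve_episodes"       # creative: broad episodic narratives
    return "lookup_atomic_fact"          # factual: precise Atomic Fact lookup

for q in ("What is my API key?",
          "Brainstorm ideas based on my previous notes",
          "Why did we decide to switch databases?"):
    print(q, "->", route_query(q))
```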