The memory examples illustrate the trade-off between recall, cost, and latency. Sources live under `examples/memory/`.
| Name | Scenario | Source |
|---|---|---|
| memory_types | Runs the same 5-turn conversation through Buffer, Window(2), and TokenBufferMemory(200); watch which ones still pass the recall test. | src |
| memory_summary_buffer | A 10-turn customer support thread; older turns get LLM-summarized into a running summary so the agent can still recall the original order ID. | src |
| memory_knowledge_graph | A bot learns facts about Project Atlas through conversation, then is asked to recall them; the KG memory stores triples and the agent queries them. | src |
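To make the first row concrete, here is a minimal sketch of the three memory variants run over the same turn list. The class names mirror the example names for readability but are hypothetical stand-ins, not Cognis's actual API; token counting is approximated with whitespace splitting.

```python
class BufferMemory:
    """Keeps every turn verbatim (unbounded recall, unbounded cost)."""
    def __init__(self):
        self.turns = []

    def add(self, role, text):
        self.turns.append((role, text))

    def context(self):
        return list(self.turns)


class WindowMemory(BufferMemory):
    """Keeps only the last k turns."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def context(self):
        return self.turns[-self.k:]


class TokenBufferMemory(BufferMemory):
    """Keeps as many recent turns as fit a token budget.

    Whitespace tokens stand in for a real tokenizer here.
    """
    def __init__(self, max_tokens):
        super().__init__()
        self.max_tokens = max_tokens

    def context(self):
        kept, used = [], 0
        for role, text in reversed(self.turns):
            n = len(text.split())
            if used + n > self.max_tokens:
                break
            kept.append((role, text))
            used += n
        return list(reversed(kept))
```

Feeding the same five turns into all three and comparing `context()` shows the recall difference directly: only the full buffer still contains turn 1.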
## How to run

Pick a starting point:

- First time? memory_types shows all three variants on the same conversation, side by side.
- Long sessions? memory_summary_buffer is the right default: the buffer keeps recent turns, the summary keeps the long-tail facts.
- Knowledge-graph use cases? memory_knowledge_graph for triple-shaped memory across sessions.
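The summary-buffer behavior described above (buffer for recent turns, summary for everything evicted) can be sketched in a few lines. This is a hypothetical illustration, not Cognis's API: `summarize` is an injected callable standing in for the LLM summarization step, and the naive concatenating summarizer below exists only so the sketch runs without a model.

```python
class SummaryBufferMemory:
    """Recent turns stay verbatim; older turns fold into a running summary."""

    def __init__(self, k, summarize):
        self.k = k                  # number of verbatim turns to keep
        self.summarize = summarize  # callable(summary, evicted_turns) -> new summary
        self.summary = ""
        self.buffer = []

    def add(self, role, text):
        self.buffer.append((role, text))
        if len(self.buffer) > self.k:
            # Evict everything beyond the last k turns into the summary.
            evicted, self.buffer = self.buffer[:-self.k], self.buffer[-self.k:]
            self.summary = self.summarize(self.summary, evicted)

    def context(self):
        head = [("summary", self.summary)] if self.summary else []
        return head + self.buffer


def concat_summarizer(summary, evicted):
    # Toy stand-in for an LLM call: append evicted text to the summary.
    return " ".join([summary] + [text for _, text in evicted]).strip()
```

This is why the pattern suits long sessions: a fact stated in turn 1 (an order ID, say) survives into the summary even after the verbatim buffer has rolled past it.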
## See also

- Memory guide: all variants explained, and when to reach for each.
- Patterns → Stateful chat: memory plus persistence in a real chat backend.