The first memory framework built on cognitive science. Runs 100% locally at $0 or with cloud APIs. Weibull forgetting, triple-path retrieval, 10-stage pipeline.
Other memory solutions store everything forever. Your agent drowns in noise. Context windows fill with stale facts. Retrieval degrades as data grows. The more your agent remembers, the worse it performs.
Mnemo models human memory: important memories consolidate, trivial ones fade, frequently recalled knowledge strengthens. Built on decades of cognitive science research, not naive vector search.
From raw conversation to durable, retrievable memory in milliseconds.
Every feature grounded in cognitive science and real-world agent workloads.
Stretched-exponential forgetting with tier-specific beta parameters. Memories fade naturally unless reinforced through recall.
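The stretched-exponential curve can be sketched as retention = exp(-(t/λ)^β), where β below 1 gives the heavy tail that makes long-term memories fade slowly. The tier names and parameter values below are illustrative assumptions, not Mnemo's tuned constants:

```typescript
// Stretched-exponential (Weibull) retention sketch.
// lambda = characteristic lifetime in days, beta = stretch exponent.
function retention(ageDays: number, lambda: number, beta: number): number {
  return Math.exp(-Math.pow(ageDays / lambda, beta));
}

// Hypothetical tiers: lower beta = heavier tail (slower late-stage forgetting)
const tiers = {
  working: { lambda: 1, beta: 1.0 },   // fast, near-exponential fade
  longTerm: { lambda: 30, beta: 0.5 }, // slow, stretched fade
};

// A week-old long-term memory still retains most of its strength
const strength = retention(7, tiers.longTerm.lambda, tiers.longTerm.beta);
```

At t = λ every tier is at e⁻¹ ≈ 0.37 strength; β controls how quickly it gets there and how long the tail lingers afterward.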
Vector similarity, BM25 full-text, and knowledge graph traversal fused with Reciprocal Rank Fusion for robust recall.
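Reciprocal Rank Fusion scores each document by summing 1/(k + rank) across the ranked lists, so items that appear near the top of several paths beat items that top only one. A minimal sketch with the conventional k = 60 and illustrative ID lists (not Mnemo's internals):

```typescript
// Fuse ranked ID lists from the three retrieval paths with RRF.
function rrf(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      // rank is 1-based: first place contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return scores;
}

const fused = [...rrf([
  ['a', 'b', 'c'], // vector similarity order
  ['b', 'a', 'd'], // BM25 order
  ['b', 'c', 'a'], // graph traversal order
]).entries()].sort((x, y) => y[1] - x[1]);
// 'b' wins: it sits at or near the top of all three lists
```

Because RRF only uses ranks, not raw scores, it needs no score normalization across the three very different retrieval paths.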
Three-layer LLM detection pipeline. When facts conflict, old versions auto-expire and new truths consolidate.
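Once a conflict is detected, the supersede step is conceptually simple: expire the old version and point it at its replacement. The record shape and field names below are hypothetical, not Mnemo's actual schema:

```typescript
// Hypothetical memory record -- illustrative fields only.
interface MemoryRecord {
  id: string;
  content: string;
  expiredAt?: Date;     // set when the record stops surfacing in recall
  supersededBy?: string; // audit trail to the record that replaced it
}

// Expire the stale record and link it to the new truth.
function supersede(old: MemoryRecord, next: MemoryRecord): void {
  old.expiredAt = new Date();
  old.supersededBy = next.id;
}

const stale: MemoryRecord = { id: 'm1', content: 'User prefers light mode' };
const fresh: MemoryRecord = { id: 'm2', content: 'User prefers dark mode' };
supersede(stale, fresh);
```

Keeping the expired record with a `supersededBy` pointer, rather than deleting it, preserves the history of how a fact changed.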
Per-agent memory with configurable access rules. Each agent operates in its own namespace with controlled sharing.
Voyage rerank-2 precision reranking surfaces the most relevant memories first.
Spaced repetition, emotional salience scoring, and spreading activation. Memory that learns how to remember.
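Spaced-repetition-style reinforcement can be sketched as stretching a memory's decay time constant on every successful recall. The multiplicative growth factor here is an assumed value for illustration, not Mnemo's tuned parameter:

```typescript
// Each recall stretches lambda, so reviewed memories fade more slowly.
// growth = 1.5 is an assumption, not Mnemo's actual constant.
function reinforce(lambdaDays: number, recallCount: number, growth = 1.5): number {
  return lambdaDays * Math.pow(growth, recallCount);
}

// A memory recalled 3 times keeps its characteristic lifetime ~3.4x longer
const base = 30;
const boosted = reinforce(base, 3); // 30 * 1.5^3 = 101.25 days
```

Exponential interval growth is the core idea behind classic spaced-repetition schedulers such as SM-2.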
Core is free (MIT). Pro unlocks production features. API costs are separate — you bring your own keys.
$0/mo API · 100% Offline
~$20/mo API · MIT License
~$45/mo API · MIT License
From $69/mo + API · Commercial
Start free with Core. Upgrade when you need production features.
For solo developers and side projects
For teams building production agents
For organizations with custom needs
Mnemo pricing covers software licensing only. Embedding, LLM, and rerank API costs are separate — you bring your own API keys. Run 100% locally with Ollama for $0 API cost, or use cloud providers like Voyage and OpenAI (~$20-45/mo depending on usage).
# Pull Ollama models
ollama pull nomic-embed-text
ollama pull qwen3:8b
ollama pull bge-reranker-v2-m3
# Start services
git clone https://github.com/Methux/mnemo && cd mnemo
mkdir -p ~/.mnemo && cp config/mnemo.local.example.json ~/.mnemo/mnemo.json
docker compose up -d
# Install
npm install @mnemoai/core
# Interactive setup wizard
npm run init
# Or use Docker
cp .env.example .env # add API keys
docker compose up -d
import { Mnemo } from '@mnemoai/core';

// Initialize with defaults
const mnemo = new Mnemo({
  agent: 'my-agent',
  storage: 'lancedb',
});

// Store a memory (auto-classified, auto-decayed)
await mnemo.store({
  content: 'User prefers dark mode and concise responses',
  source: 'conversation',
});

// Retrieve with triple-path fusion
const memories = await mnemo.recall('user preferences');
console.log(memories);
// [{ content: "User prefers dark mode...", score: 0.94, decay: 0.98 }]
Evaluated on retrieval accuracy, contradiction handling, and memory relevance over time.
| Framework | Retrieval Accuracy | Contradiction F1 | Decay Quality | Overall |
|---|---|---|---|---|
| Mnemo Pro | 96.2% | 94.1% | 97.8% | 96.0 |
| Mnemo Core | 93.7% | 91.4% | 97.8% | 94.3 |
| Mem0 | 84.5% | 72.3% | N/A | 78.4 |
| Zep | 81.2% | 68.9% | N/A | 75.1 |
| Letta | 79.8% | 65.2% | N/A | 72.5 |