# Lore

Shared, evolving knowledge across agents, sessions, and machines.
Lore extracts knowledge from past agent sessions and stores it in a database that reshapes itself over time — using mechanisms inspired by how biological memory works. Multiple agents can share the same knowledge base locally, or across machines through a central server.
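The knowledge base is hierarchical: `lore roots` lists top-level fragments and `lore explore <id>` walks a subtree. As a rough sketch of that shape — the struct, field names, and IDs below are hypothetical illustrations, not Lore's actual schema:

```rust
// Hypothetical sketch of the tree the CLI exposes: root-level fragments
// (`lore roots`) with progressively more specific children, viewable as
// a subtree (`lore explore <id>`). Illustrative only, not Lore's schema.
#[derive(Debug)]
struct Fragment {
    id: String,
    summary: String,
    children: Vec<Fragment>,
}

/// Render a fragment subtree depth-first, one line per fragment,
/// indented by depth — roughly what `lore explore <id>` might print.
fn render(f: &Fragment, depth: usize) -> String {
    let mut out = format!("{}{}  {}\n", "  ".repeat(depth), f.id, f.summary);
    for child in &f.children {
        out.push_str(&render(child, depth + 1));
    }
    out
}

fn main() {
    let root = Fragment {
        id: "a1b2".into(),
        summary: "Rust tooling".into(),
        children: vec![Fragment {
            id: "c3d4".into(),
            summary: "clippy catches common lints before review".into(),
            children: vec![],
        }],
    };
    print!("{}", render(&root, 0));
}
```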
## Installation

### macOS

```sh
just bundle-macos
cp -r target/Lore.app ~/Applications/
```

Launch Lore from Spotlight. It runs as a menu bar icon, auto-managing the background daemon.

### Linux

```sh
sudo apt install libgtk-3-dev libayatana-appindicator3-dev  # Debian/Ubuntu
just install-linux
```

### Manual build

```sh
cargo build --release -p lore-daemon
cp target/release/lore ~/.local/bin/
cargo build --release -p lore-mcp
cp target/release/lore-mcp ~/.local/bin/
```

Register the MCP server with Claude:

```sh
claude mcp add --scope user memory -- lore-mcp
```

### Docker (central server)

```sh
just docker-build
docker run -d -p 8080:8080 -v lore-data:/data -e ANTHROPIC_API_KEY=... lore-server
```

## CLI

```sh
# Daemon
lore start        # run daemon in foreground
lore daemonize    # run daemon in background
lore stop         # stop background daemon
lore status       # check if running
lore logs         # tail daemon log

# Data pipeline
lore ingest       # stage new conversation turns
lore consolidate  # digest staged turns + run consolidation

# Query
lore roots        # list root-level fragments
lore query "text" # semantic search
lore explore <id> # show subtree (supports ID prefix)
lore staged       # show staging area

# Remote
lore sync <url>   # push staged turns to central server
```

## Tray icon

The Lore tray icon lives in your menu bar / system tray. It automatically starts the daemon when launched and stops it on quit.
| State | Appearance |
|---|---|
| Stopped | Dim red |
| Idle | Bright red |
| Ingesting | Red, pulsing |
| Consolidating | Orange, pulsing |
| Syncing | Green, pulsing |
## Configuration

`~/.lore/config.toml`:

```toml
[ingestion]
poll_interval_secs = 30

[consolidation]
interval_secs = 7200
idle_threshold_secs = 300       # wait 5 min before digesting a session
max_turns_per_extraction = 200  # chunk large conversations
similarity_threshold = 0.8
merge_threshold = 0.85

[claude]
extraction_model = "claude-sonnet-4-20250514"    # knowledge extraction
compression_model = "claude-haiku-4-5-20251001"  # recursive summarization

[database]
path = "~/.lore/memory.db"

# Optional: enable central server sync
[remote]
url = "http://server:8080"
sync_interval_secs = 60
```

## Documentation

- Setup — single machine, central server, and Docker deployment
- Architecture — knowledge model, pipeline, MCP tools, memory dynamics
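The two consolidation thresholds in the config are the main tuning knobs. As an illustration of what they might govern — the decision rule below is an assumption for the sketch, not Lore's actual algorithm — fragment pairs whose embedding cosine similarity exceeds `merge_threshold` could be merged, pairs above `similarity_threshold` linked as related, and everything else left independent:

```rust
// Illustrative sketch only: assumes similarity_threshold gates "related"
// links and merge_threshold triggers merging, using cosine similarity
// over embeddings. Lore's real consolidation logic may differ.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

#[derive(Debug, PartialEq)]
enum Action {
    Merge, // near-duplicate: fold the fragments together
    Link,  // related: keep both, record the association
    Keep,  // unrelated: leave untouched
}

fn consolidate_pair(a: &[f64], b: &[f64]) -> Action {
    // Values copied from the [consolidation] section of config.toml.
    const SIMILARITY_THRESHOLD: f64 = 0.8;
    const MERGE_THRESHOLD: f64 = 0.85;
    let s = cosine(a, b);
    if s >= MERGE_THRESHOLD {
        Action::Merge
    } else if s >= SIMILARITY_THRESHOLD {
        Action::Link
    } else {
        Action::Keep
    }
}

fn main() {
    println!("{:?}", consolidate_pair(&[1.0, 0.0], &[1.0, 0.1])); // near-duplicate
    println!("{:?}", consolidate_pair(&[1.0, 0.0], &[0.0, 1.0])); // orthogonal
}
```

Raising `merge_threshold` makes merging more conservative (fewer fragments collapsed); lowering `similarity_threshold` links more loosely related fragments.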
## Development

```sh
cargo build               # build all crates
cargo test                # run the test suite (152 tests)
cargo clippy --workspace  # lint
```