This example demonstrates using `ChromaMemoryService` for semantic memory search
with embeddings generated by Ollama.
To run this sample you need:

1. **Ollama server running:**

   ```bash
   ollama serve
   ```

2. **Embedding model pulled:**

   ```bash
   ollama pull nomic-embed-text
   ```

3. **Dependencies installed:**

   ```bash
   pip install chromadb  # or with uv: uv pip install chromadb
   ```
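If you want to confirm Ollama is serving embeddings before running the sample, a minimal check against Ollama's REST `/api/embeddings` endpoint looks like this (this snippet is not part of the sample and assumes the `requests` package is installed):

```python
import requests

# Ask the local Ollama server to embed a test string.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello world"},
)
resp.raise_for_status()

# nomic-embed-text produces 768-dimensional vectors.
print(len(resp.json()["embedding"]))
```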
To run the sample:

```bash
cd contributing/samples/memory_chroma
python main.py
```
Session 1: Creates memories by having a conversation with the agent
- User introduces themselves as "Jack"
- User mentions they like badminton
- User mentions what they ate recently
Memory Storage: The session is saved to ChromaDB with semantic embeddings
- Data persists to the `./chroma_db` directory
- Embeddings are generated using Ollama's `nomic-embed-text` model
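Conceptually, the save step hands the finished session to the memory service, which embeds its contents before writing them to ChromaDB. A minimal sketch, assuming `ChromaMemoryService` follows ADK's `BaseMemoryService` interface (the method name and wiring here are assumptions, not taken from `main.py`):

```python
# Sketch only: assumes ChromaMemoryService implements the standard
# BaseMemoryService interface.
memory_service = ChromaMemoryService()  # configured elsewhere with the Ollama provider

# After Session 1 ends, persist it; the service embeds the session's
# events with nomic-embed-text and stores the vectors in ./chroma_db.
await memory_service.add_session_to_memory(session)
```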
Session 2: Queries the memories using semantic search
- User asks about their hobbies (agent should recall "badminton")
- User asks about what they ate (agent should recall "burger")
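Semantic recall in Session 2 works by embedding the query with the same model and ranking stored memories by vector similarity, which is why "hobbies" can surface "badminton" even though the two strings share no keywords. A hedged sketch, assuming the ADK `BaseMemoryService.search_memory` signature (the app name and user id below are illustrative placeholders):

```python
# Sketch: the query is embedded and matched by similarity, not keywords.
response = await memory_service.search_memory(
    app_name="memory_chroma",  # illustrative app name
    user_id="user",            # illustrative user id
    query="What are my hobbies?",
)
for memory in response.memories:
    print(memory.content)
```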
| Feature | InMemory | ChromaDB |
|---|---|---|
| Search Type | Keyword matching | Semantic similarity |
| Persistence | No (lost on restart) | Yes (disk) |
| Synonyms | No | Yes |
| Performance | Fast | Fast (with HNSW index) |
You can change the embedding model by modifying the `OllamaEmbeddingProvider`:

```python
embedding_provider = OllamaEmbeddingProvider(
    model="mxbai-embed-large",          # higher quality but slower
    host="http://remote-server:11434",  # point at a remote Ollama server
)
```
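The customized provider then gets passed to the memory service. A sketch of the wiring, assuming `ChromaMemoryService` accepts the provider and a persistence path as constructor arguments (the parameter names are guesses; check `main.py` for the actual signature):

```python
# Parameter names below are illustrative assumptions.
memory_service = ChromaMemoryService(
    embedding_provider=embedding_provider,
    persist_directory="./chroma_db",
)
```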