# ChromaDB Memory Service Example

This example demonstrates using `ChromaMemoryService` for semantic memory search, with embeddings generated by a local Ollama server.

## Prerequisites

1. **Ollama server running:**

   ```bash
   ollama serve
   ```

2. **Embedding model pulled** (you can sanity-check it with the sketch after this list):

   ```bash
   ollama pull nomic-embed-text
   ```

3. **Dependencies installed:**

   ```bash
   pip install chromadb
   # Or with uv:
   uv pip install chromadb
   ```
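If you want to confirm Ollama can actually serve embeddings before running the demo, a quick standard-library check against Ollama's `/api/embeddings` REST endpoint looks like this. This is a minimal sketch for verification only; the sample itself talks to Ollama through its own provider, and the default port `11434` is assumed:

```python
# sanity_check.py -- verify Ollama can produce embeddings locally.
import json
import urllib.request

payload = json.dumps({
    "model": "nomic-embed-text",
    "prompt": "hello world",
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",  # assumes default Ollama port
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.load(resp)["embedding"]

# nomic-embed-text produces 768-dimensional vectors.
print(f"Got embedding of dimension {len(embedding)}")
```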

## Running the Example

```bash
cd contributing/samples/memory_chroma
python main.py
```

## What This Demo Does

1. **Session 1:** creates memories by having a conversation with the agent.

   - The user introduces themselves as "Jack".
   - The user mentions they like badminton.
   - The user mentions what they ate recently (a burger).

2. **Memory storage:** the session is saved to ChromaDB with semantic embeddings.

   - Data persists to the `./chroma_db` directory.
   - Embeddings are generated with Ollama's `nomic-embed-text` model.

3. **Session 2:** queries the memories using semantic search (the sketch after this list shows the equivalent raw `chromadb` calls).

   - The user asks about their hobbies; the agent should recall "badminton".
   - The user asks about what they ate; the agent should recall "burger".
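The sample wires this flow through `ChromaMemoryService`, but the store-then-search pattern maps onto plain `chromadb` calls roughly as below. This is a sketch, not the sample's actual code: the collection name `agent_memory` is hypothetical, the Ollama URL assumes the default port, and `OllamaEmbeddingFunction` ships with recent `chromadb` releases:

```python
import chromadb
from chromadb.utils.embedding_functions import OllamaEmbeddingFunction

# Embeddings come from the local Ollama server (assumed default URL).
embed = OllamaEmbeddingFunction(
    url="http://localhost:11434/api/embeddings",
    model_name="nomic-embed-text",
)

# Session 1 equivalent: persist conversation turns with semantic embeddings.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(
    name="agent_memory",  # hypothetical collection name
    embedding_function=embed,
)
collection.add(
    ids=["turn-1", "turn-2", "turn-3"],
    documents=[
        "My name is Jack.",
        "I like playing badminton.",
        "I ate a burger for lunch.",
    ],
)

# Session 2 equivalent: semantic search. "hobbies" never appears in the
# stored text, but its embedding lands near "badminton".
results = collection.query(query_texts=["What are my hobbies?"], n_results=1)
print(results["documents"][0][0])  # -> "I like playing badminton."
```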

## Key Differences from InMemoryMemoryService

| Feature     | InMemory              | ChromaDB                |
|-------------|-----------------------|-------------------------|
| Search type | Keyword matching      | Semantic similarity     |
| Persistence | No (lost on restart)  | Yes (on disk)           |
| Synonyms    | No                    | Yes                     |
| Performance | Fast                  | Fast (with HNSW index)  |
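To make the first row concrete: a keyword matcher only finds memories that share literal tokens with the query, while an embedding search matches by meaning. This is a toy comparison, not either service's actual code:

```python
memories = ["I like playing badminton.", "I ate a burger for lunch."]

# Keyword matching (InMemory-style): "hobbies" shares no token with
# "badminton", so nothing is found.
query = "hobbies"
keyword_hits = [m for m in memories if query.lower() in m.lower()]
print(keyword_hits)  # -> []

# Semantic matching (ChromaDB-style): the query embedding is compared to
# the memory embeddings by vector similarity, so "hobbies" lands near
# "badminton". See the chromadb sketch above for the real query call.
```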

## Customization

You can change the embedding model by modifying the `OllamaEmbeddingProvider`:

```python
embedding_provider = OllamaEmbeddingProvider(
    model="mxbai-embed-large",  # Higher quality but slower
    host="http://remote-server:11434",  # Remote Ollama server
)
```
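One caveat when swapping models: embedding dimensions differ (`nomic-embed-text` emits 768-dimensional vectors, `mxbai-embed-large` 1024-dimensional), so a collection built with one model cannot be queried with the other. Drop the old collection (or simply delete `./chroma_db`) before switching. With plain `chromadb` that looks like this sketch, which reuses the hypothetical `agent_memory` collection name from above:

```python
import chromadb

client = chromadb.PersistentClient(path="./chroma_db")
# Remove the old collection so it can be rebuilt with the new model's
# embedding dimension.
client.delete_collection(name="agent_memory")
```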