---
tags:
  - Enterprise
---

# AI Search Setup

AI-powered search in Docmost uses vector embeddings to provide semantic search across your workspace. This is an enterprise feature that requires a valid license key.

## Prerequisites

1. **PostgreSQL with pgvector extension** - Required for storing vector embeddings
2. **AI provider account** - OpenAI, Google Gemini, or self-hosted Ollama

## Installing pgvector

### Docker Setup (Recommended)

Use the official pgvector Docker image instead of the standard PostgreSQL image:

```yaml
# docker-compose.yml
services:
  db:
    image: pgvector/pgvector:pg17
    environment:
      POSTGRES_DB: docmost
      POSTGRES_USER: docmost
      POSTGRES_PASSWORD: your_password
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

The pgvector extension will be automatically available.
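To confirm that the image actually ships the extension, you can query `pg_available_extensions`; this sketch assumes the service, user, and database names from the compose file above:

```shell
# Should print "vector" and its default version if pgvector is available
docker compose exec db psql -U docmost -d docmost \
  -c "SELECT name, default_version FROM pg_available_extensions WHERE name = 'vector';"
```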

### Manual Installation

If you are using a non-Docker installation of PostgreSQL, you can install the pgvector extension manually. See the pgvector installation guide: https://github.com/pgvector/pgvector?tab=readme-ov-file#installation
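As a sketch, the typical build-from-source flow from that guide looks like this (the version tag below is an example; check the repository for the latest release, and note you need the PostgreSQL development headers installed):

```shell
# Build and install pgvector from source
cd /tmp
git clone --branch v0.8.0 https://github.com/pgvector/pgvector.git
cd pgvector
make
sudo make install
```

If Docmost does not create the extension itself, you can enable it once per database with `psql -U docmost -d docmost -c "CREATE EXTENSION IF NOT EXISTS vector;"`.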
|
## Supported AI Providers

Docmost supports three AI providers: OpenAI (including Azure OpenAI), Google Gemini, and Ollama (local LLMs).

### Provider Configuration

All providers require these base environment variables:

```bash
AI_DRIVER=<provider>        # openai, gemini, or ollama
AI_EMBEDDING_MODEL=<model>  # Model used for generating embeddings
AI_COMPLETION_MODEL=<model> # Model used for answering questions
```

**Important:** `AI_EMBEDDING_DIMENSION` is optional and auto-detected for preset models. Only set it manually if you are using a custom model that is not in the preset list.

---

## OpenAI Configuration

Supports the OpenAI API and Azure OpenAI.

### Environment Variables

```bash
AI_DRIVER=openai
OPENAI_API_KEY=sk-proj-xxxxx
AI_EMBEDDING_MODEL=text-embedding-3-small
AI_COMPLETION_MODEL=gpt-4o-mini
```

**Optional:**

```bash
OPENAI_API_URL=https://api.openai.com/v1 # Override for Azure or other OpenAI-compatible endpoints
```
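To sanity-check the key before starting Docmost, you can list the models your key can access (this assumes `OPENAI_API_KEY` is exported in your shell):

```shell
# Returns a JSON list of models if the key is valid; an "error" object otherwise
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```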

### OpenAI Preset Models

| Model |
|-------|
| `text-embedding-3-small` |
| `text-embedding-3-large` |
| `text-embedding-ada-002` |

### Example

```bash
AI_DRIVER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
AI_EMBEDDING_MODEL=text-embedding-3-small
AI_COMPLETION_MODEL=gpt-4o-mini
```

---

## Google Gemini Configuration

### Environment Variables

```bash
AI_DRIVER=gemini
GEMINI_API_KEY=AIzaSyxxxxx
AI_EMBEDDING_MODEL=gemini-embedding-001
AI_COMPLETION_MODEL=gemini-2.5-flash
```

### Gemini Preset Models

| Model |
|-------|
| `gemini-embedding-001` |
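As with OpenAI, you can verify the key by listing models against the Generative Language API (assumes `GEMINI_API_KEY` is exported in your shell):

```shell
# Returns a JSON list of available Gemini models if the key is valid
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"
```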
---

## Ollama Configuration

Docmost AI search and embeddings support local LLMs via Ollama.
Ollama Docker installation guide: https://docs.ollama.com/docker

### Environment Variables

```bash
AI_DRIVER=ollama
OLLAMA_API_URL=http://localhost:11434
AI_EMBEDDING_MODEL=nomic-embed-text
AI_COMPLETION_MODEL=qwen2.5:7b
```

### Ollama Preset Models

| Model |
|-------|
| `nomic-embed-text` |
| `qwen3-embedding` |

### Setup Ollama

1. Install Ollama: https://ollama.com/download
2. Pull the embedding model:
   ```bash
   ollama pull nomic-embed-text
   ```
3. Pull the completion model:
   ```bash
   ollama pull qwen2.5:7b
   ```
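Before pointing Docmost at Ollama, you can confirm both models respond, using the Ollama REST API:

```shell
# Generate a test embedding (returns a JSON "embedding" array)
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'

# Generate a short non-streaming completion
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5:7b", "prompt": "Say hi", "stream": false}'
```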

### Example

```bash
AI_DRIVER=ollama
OLLAMA_API_URL=http://localhost:11434
AI_EMBEDDING_MODEL=nomic-embed-text
AI_COMPLETION_MODEL=qwen2.5:7b
AI_EMBEDDING_DIMENSION=768 # optional here; auto-detected for preset models
```

For Docker deployments, use the Ollama container:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

volumes:
  ollama_data:
```

Then set `OLLAMA_API_URL=http://ollama:11434` in your Docmost container.
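The models still need to be pulled inside the container; assuming the service name `ollama` from the compose file above:

```shell
# Download the embedding and completion models into the running container
docker compose exec ollama ollama pull nomic-embed-text
docker compose exec ollama ollama pull qwen2.5:7b
```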

---

## Enable AI Search in Workspace

After configuring your AI provider:

1. Log in to Docmost as a workspace admin
2. Go to **Settings** → **AI settings**
3. Toggle **AI-powered search (Ask AI)** to enable it
4. Wait for the background job to generate embeddings for existing pages (monitor progress via the logs)

**Note:** Embeddings are generated asynchronously. New pages get embeddings on creation or update; existing pages are queued for processing when AI search is enabled.