# =====================================================
# EverMemOS Configuration — 0G Storage Backend
# =====================================================
#
# SECURITY: Never commit this file to version control.
#
# ── How to use this file ─────────────────────────────
#
# STEP 1: Choose your deployment mode below (Scenario A / B / C).
#
# STEP 2: Fill in required keys based on your scenario:
#   Scenario A or B (running local service):
#     LLM_API_KEY         your LLM provider API key
#     VECTORIZE_API_KEY   your embedding provider API key
#     RERANK_API_KEY      your rerank provider API key (default: DeepInfra)
#     ZEROG_WALLET_KEY    your EVM wallet private key (see Appendix C in README.md)
#     MEMORY_USER_ID      (optional) your name in memory records; defaults to "default_user"
#   Scenario C (connecting to remote server):
#     MEMORY_REMOTE_URL   remote server URL
#     MEMORY_USER_ID      your username on that server (required, must be unique)
#     ZEROG_WALLET_KEY    your EVM wallet private key
#
# EVERYTHING ELSE: leave at defaults.
# These are pre-configured internal settings that work correctly out of the box.
# Changing them may break the backend and is intended only for advanced users
# familiar with the underlying infrastructure.
#
# =====================================================
# =====================================================
# STEP 1 — Deployment mode
# Three deployment scenarios — choose one:
#
# Scenario A (default): Local single-user mode. No auth required.
#   SERVER_MODE=false, leave MEMORY_REMOTE_URL empty.
#   Run: ./install.sh && ./start_service.sh
#
# Scenario B: Shared server (admin side). You host the service for your team;
#   each user connects from their own machine using Scenario C.
#   SERVER_MODE=true, leave MEMORY_REMOTE_URL empty.
#   Run: ./install.sh && ./start_service.sh
#
# Scenario C: Remote client (user side). Connect to a server set up by your admin.
#   No local service needed — just run the installer and you are done.
#   Set MEMORY_REMOTE_URL and MEMORY_USER_ID below.
#   Run: ./install.sh only (no start_service.sh needed).
#   Registration, API key storage, and AI assistant config are fully automatic.
# =====================================================
SERVER_MODE=false
# --- User identity (all scenarios) ---
# Your name in memory records. Optional for Scenario A/B — defaults to
# "default_user" if not set. Required for Scenario C — must be unique on
# the remote server.
# Note: memory is always isolated between different AI assistants regardless
# of this value, so you can safely use the same name across all of them.
MEMORY_USER_ID=
# --- Remote EverMemOS server (Scenario C only) ---
# Leave empty for local deployment (Scenario A or B).
# When set, ./install.sh will auto-register, store credentials in
# .evermemos_remote_secrets, and configure your AI assistant to use this server.
# No local memory service will be started.
#
# Remote server URL (e.g. http://123.45.67.89:1995)
MEMORY_REMOTE_URL=
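#
# Example: a filled-in Scenario C client block. The values are illustrative
# only; use the URL and username your admin gives you:
#   SERVER_MODE=false
#   MEMORY_USER_ID=alice
#   MEMORY_REMOTE_URL=http://123.45.67.89:1995
#   ZEROG_WALLET_KEY=<your EVM wallet private key>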
# =====================================================
# STEP 2 — Required keys
# Scenario A / B (local service): all four keys below are required.
# Scenario C (remote client): only ZEROG_WALLET_KEY is required
# (LLM_API_KEY, VECTORIZE_API_KEY, RERANK_API_KEY are not used on the client).
# =====================================================
# Your LLM provider API key (OpenAI, OpenRouter, DeepSeek, xAI, etc.)
# Not needed for Scenario C.
LLM_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Embedding service API key.
# If using OpenAI (default), set this to the same value as LLM_API_KEY above.
# Not needed for Scenario C.
VECTORIZE_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Rerank service API key. Default provider is DeepInfra (deepinfra.com).
# OpenAI does not offer a reranking API, so a separate key is needed here.
# Not needed for Scenario C.
RERANK_API_KEY=xxxxx
# EVM wallet private key for writing memories to the 0G decentralized storage network.
# Required for all scenarios (A, B, and C).
# In Scenario C, this key is sent to the remote server during registration and used
# for 0G storage writes on your behalf — each user owns their own 0G stream.
# See Appendix C in README.md for how to create a wallet and get free testnet tokens.
# KEEP THIS SECURE — never share it or commit it to version control.
ZEROG_WALLET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# =====================================================
# OPTIONAL — change only if switching providers or languages
# =====================================================
# LLM model and endpoint.
# Defaults to OpenAI gpt-4o-mini. To use a different OpenAI-compatible provider
# (OpenRouter, DeepSeek, xAI, etc.), update LLM_BASE_URL, LLM_MODEL, and LLM_API_KEY.
LLM_MODEL=gpt-4o-mini
LLM_BASE_URL=https://api.openai.com/v1
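# Example: switching to OpenRouter, one of the compatible providers named
# above. The model ID is illustrative; check openrouter.ai for exact names.
#   LLM_BASE_URL=https://openrouter.ai/api/v1
#   LLM_MODEL=qwen/qwen3-32b
#   LLM_API_KEY=sk-or-xxxx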
# Embedding model and endpoint.
# Defaults to OpenAI text-embedding-3-small.
# To use a different provider, update VECTORIZE_BASE_URL and VECTORIZE_MODEL.
# DeepInfra: https://api.deepinfra.com/v1/openai / Qwen/Qwen3-Embedding-4B
# vLLM: http://localhost:8000/v1 / Qwen3-Embedding-4B
VECTORIZE_BASE_URL=https://api.openai.com/v1
VECTORIZE_MODEL=text-embedding-3-small
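# Example: the DeepInfra option from the list above (endpoint and model copied
# verbatim; the matching key goes in VECTORIZE_API_KEY):
#   VECTORIZE_BASE_URL=https://api.deepinfra.com/v1/openai
#   VECTORIZE_MODEL=Qwen/Qwen3-Embedding-4B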
# Rerank model and endpoint.
# Defaults to DeepInfra Qwen3-Reranker-4B.
# To use a self-hosted vLLM instance, update RERANK_BASE_URL and RERANK_MODEL.
# vLLM: http://localhost:12000/score / Qwen3-Reranker-4B
RERANK_BASE_URL=https://api.deepinfra.com/v1/inference
RERANK_MODEL=Qwen/Qwen3-Reranker-4B
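# Example: the self-hosted vLLM option from the note above:
#   RERANK_BASE_URL=http://localhost:12000/score
#   RERANK_MODEL=Qwen3-Reranker-4B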
# Language for memory extraction and summarization.
# Supported values: "en" (English) or "zh" (Chinese). Defaults to "en" if unrecognized.
MEMORY_LANGUAGE=en
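# Example: switch memory extraction and summarization to Chinese:
#   MEMORY_LANGUAGE=zh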
# =====================================================
# INTERNAL DEFAULTS — advanced use only
# Pre-configured settings that work correctly out of the box.
# Modifying them may break the backend. Only change if you are familiar
# with the underlying infrastructure.
# =====================================================
# --- LLM internals ---
LLM_PROVIDER=openai
LLM_TEMPERATURE=0.3
LLM_MAX_TOKENS=16384
# Maximum concurrent requests to the LLM endpoint. 0 = unlimited (default, suitable for OpenAI/OpenRouter).
# Set to a positive integer when using a self-hosted endpoint with a low concurrency limit.
LLM_MAX_CONCURRENT=0
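# Example: cap concurrency for a small self-hosted endpoint (4 is an
# illustrative value; match it to what your server can handle):
#   LLM_MAX_CONCURRENT=4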
# When using Qwen3 via OpenRouter, "cerebras" routing is recommended.
# LLM_OPENROUTER_PROVIDER=cerebras
# --- Embedding internals ---
# "deepinfra" here denotes the OpenAI-compatible API format, not the DeepInfra service.
# This adapter works with any OpenAI-compatible endpoint, including OpenAI itself.
VECTORIZE_PROVIDER=deepinfra
VECTORIZE_TIMEOUT=30
VECTORIZE_MAX_RETRIES=3
VECTORIZE_BATCH_SIZE=10
VECTORIZE_MAX_CONCURRENT=5
VECTORIZE_ENCODING_FORMAT=float
# Set to 0 to omit the dimensions parameter from requests entirely
# (some vLLM models do not accept it).
VECTORIZE_DIMENSIONS=1024
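# Example: disable the parameter for a vLLM model that rejects it:
#   VECTORIZE_DIMENSIONS=0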
# --- Rerank internals ---
RERANK_PROVIDER=deepinfra
RERANK_TIMEOUT=30
RERANK_MAX_RETRIES=3
RERANK_BATCH_SIZE=10
RERANK_MAX_CONCURRENT=5
# --- Redis (managed by Docker — do not change) ---
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_DB=8
REDIS_SSL=false
# --- MongoDB (managed by Docker — do not change) ---
MONGODB_HOST=localhost
MONGODB_PORT=27017
MONGODB_USERNAME=admin
MONGODB_PASSWORD=memsys123
MONGODB_DATABASE=memsys
MONGODB_URI_PARAMS=socketTimeoutMS=15000&authSource=admin
# --- Elasticsearch (managed by Docker — do not change) ---
ES_HOSTS=http://localhost:19200
ES_USERNAME=
ES_PASSWORD=
ES_VERIFY_CERTS=false
SELF_ES_INDEX_NS=memsys
# --- Milvus (managed by Docker — do not change) ---
MILVUS_HOST=localhost
MILVUS_PORT=19530
SELF_MILVUS_COLLECTION_NS=memsys
# --- KV storage backend ---
KV_STORAGE_TYPE=zerog
# --- 0G network constants (fixed values for the Galileo Testnet — do not change) ---
# ZEROG_READ_NODE points to the local kv-server started by start_service.sh.
ZEROG_READ_NODE=http://127.0.0.1:6789 # local node
ZEROG_RPC_URL=https://evmrpc-testnet.0g.ai
ZEROG_INDEXER_URL=https://indexer-storage-testnet-turbo.0g.ai
# On-chain contract address for the 0G storage flow — do not change.
ZEROG_FLOW_ADDRESS=0x22E03a6A89B950F1c82ec5e74F8eCa321a105296
# Stream ID and encryption key are auto-generated on first startup and stored in
# .0g_secrets (project root). No manual configuration needed.
# --- API server ---
# Must match the port Claude Code hooks use to reach the backend.
API_BASE_URL=http://localhost:1995
# --- Startup data sync ---
# Validates and re-syncs Milvus/Elasticsearch against MongoDB on each backend start.
# Master switch — set to false to skip the entire sync process on startup.
STARTUP_SYNC_ENABLED=true
# Whether to validate Milvus (vector database) against MongoDB and re-sync missing docs.
STARTUP_SYNC_MILVUS=true
# Whether to validate Elasticsearch (full-text search) against MongoDB and re-sync missing docs.
STARTUP_SYNC_ES=true
# Days to validate: 0 = full database (slower), positive N = last N days only (faster).
STARTUP_SYNC_DAYS=0
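# Example: speed up startup by validating only recent data (7 is an
# illustrative value):
#   STARTUP_SYNC_DAYS=7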
STARTUP_SYNC_BATCH_SIZE=500
STARTUP_SYNC_MAX_DOCS=10000
# --- Logging and runtime ---
LOG_LEVEL=INFO
# ENV and PYTHONASYNCIODEBUG are internal runtime flags — leave as-is.
ENV=dev
PYTHONASYNCIODEBUG=1