# .env.example
PORT=8000
HOST=0.0.0.0
# =============================================================================
# Provider Configuration
# =============================================================================
# Each provider can be configured globally via environment variables.
# Additionally, you can override per-model settings in model-config.yaml
# using provider_config with environment variable references (e.g., "$API_KEY").
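# Illustrative sketch of a per-model override using provider_config in
# model-config.yaml (key names here are assumed for illustration; see
# model-config.example.yaml for the actual schema):
#
#   models:
#     - name: my-openrouter-model
#       provider: openrouter
#       provider_config:
#         api_key: "$OPENROUTER_API_KEY"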
# Generic OpenAI provider
# Configure if you have a direct OpenAI-compatible API endpoint
OPENAI_BASE_URL=https://api.openai.com
OPENAI_API_KEY=sk-your-openai-key
# OpenRouter provider
# Configure if using OpenRouter with provider selection capabilities
OPENROUTER_API_KEY=sk-or-your-openrouter-key
# Optional: specify preferred providers (comma-separated)
# Only set if you want to restrict to specific providers
# OPENROUTER_PROVIDERS=OpenAI,Anthropic
# Optional: selection order (array of provider slugs)
# OPENROUTER_ORDER=anthropic,openai
# Optional: sort strategy - "price", "throughput", or "latency"
# OPENROUTER_SORT=price
# Optional: allow fallbacks (default: true)
# OPENROUTER_ALLOW_FALLBACKS=true
# Optional: model shortcut (e.g., ":nitro" for throughput, ":floor" for price)
# OPENROUTER_MODEL_SHORTCUT=:nitro
# Vertex AI MaaS provider (optional)
# GOOGLE_APPLICATION_CREDENTIALS can be a full JSON string or the absolute path to the service-account file.
VERTEX_PROJECT_ID=your-project
VERTEX_LOCATION=us-central1
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# Optional custom endpoint for Vertex OpenAI-compatible API
# VERTEX_CHAT_ENDPOINT=https://example.com/v1/projects/.../endpoints/openapi/chat/completions
# =============================================================================
# Routing Configuration
# =============================================================================
# Model config can be inline YAML or a path to a YAML file.
# The configuration supports per-model provider overrides via provider_config.
# See examples in model-config.example.yaml
# Inline model config (alternative to file)
# MODEL_CONFIG="default_strategy: round_robin\nmodels:\n  - name: gpt-4o\n    provider: openai\n    model: openai/gpt-4o"
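# For readability, the escaped inline string above corresponds to this YAML:
#
#   default_strategy: round_robin
#   models:
#     - name: gpt-4o
#       provider: openai
#       model: openai/gpt-4o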
# Path to model config file
MODEL_CONFIG_PATH=./model-config.yaml
# =============================================================================
# Logging Configuration
# =============================================================================
# SQLite log database path
LOG_DB_PATH=./data/logs.db
# Optional: artificial delay (ms) and chunk size for SSE token streaming
# Adjust for smoother UI rendering
STREAM_DELAY=10
STREAM_CHUNK_SIZE=5
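# With the values above, roughly STREAM_CHUNK_SIZE tokens are emitted every
# STREAM_DELAY milliseconds, i.e. about 5 tokens / 10 ms = 500 tokens per
# second (an approximation; actual throughput also depends on the provider).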
# =============================================================================
# LiveStore Configuration
# =============================================================================
# Batch size for LiveStore sync (default: 50, range: 1-500)
LIVESTORE_BATCH=50
# Maximum number of records to keep in LiveStore (frontend memory)
# Set this to limit memory usage in the browser (default: 500 recent logs)
# Set to 0 to disable the limit (not recommended for production)
LIVESTORE_MAX_RECORDS=500