fix: config reload race conditions, chatbot model selection, and add UI for top_k/num_hops
- Fix chatbot agent using wrong model (llm_model instead of chat_model)
- Ensure get_completion_config always returns chat_model with llm_model fallback
- Restore startup validation for llm_service and llm_model
- Add _config_file_lock to prevent concurrent config file overwrites
- Replace clear()+update() with atomic dict updates in reload functions
- Load community summarization prompt at call time instead of import time
- Add top_k and num_hops fields to GraphRAG config UI
- Fix ECC URL defaults to match docker-compose service names
- Document all supported config parameters in README
- Bump TigerGraph version to 4.2.2
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
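The reload fix described in the bullets above (replacing `clear()` + `update()` with an atomic-style refresh) can be sketched as follows. This is a minimal illustration, assuming a module-level config dict shared by concurrent readers; the function and lock names are hypothetical, not the project's actual API:

```python
import threading

# Serializes writes to the config file so concurrent reloads
# cannot interleave and overwrite each other (hypothetical name).
_config_file_lock = threading.Lock()

def reload_into(cfg: dict, new_values: dict) -> None:
    """Refresh cfg in place without exposing a half-empty dict.

    cfg.clear() followed by cfg.update() lets a concurrent reader
    observe an empty or partially populated config. Updating first
    and then dropping only the stale keys avoids that window.
    """
    stale_keys = set(cfg) - set(new_values)
    cfg.update(new_values)        # overwrite/add current values first
    for key in stale_keys:        # then drop keys no longer present
        cfg.pop(key, None)
```

Readers holding a reference to `cfg` see either the old values or the new ones for any given key, never a cleared dict.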
@@ -198,10 +200,10 @@ Run command `docker compose down` and wait for all the service containers to stop
 If you prefer to start a TigerGraph Community Edition instance without a license key, please make sure the container can be accessed from the GraphRAG containers by adding `--network graphrag_default`:
-> Use **tigergraph/tigergraph:4.2.1** if Enterprise Edition is preferred.
+> Use **tigergraph/tigergraph:4.2.2** if Enterprise Edition is preferred.
 > Setting up **DNS** or `/etc/hosts` properly is an alternative solution to ensure containers can connect to each other.
 > Or modify `hostname` in the `db_config` section of `configs/server_config.json` and replace `http://tigergraph` with your TigerGraph container IP address, e.g., `http://172.19.0.2`.
@@ -419,6 +421,8 @@ Copy the below into `configs/server_config.json` and edit the `hostname` and `ge
     "hostname": "http://tigergraph",
     "restppPort": "9000",
     "gsPort": "14240",
+    "username": "tigergraph",
+    "password": "tigergraph",
     "getToken": false,
     "default_timeout": 300,
     "default_mem_threshold": 5000,
@@ -427,22 +431,64 @@ Copy the below into `configs/server_config.json` and edit the `hostname` and `ge
 }
 ```
+
+| Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `hostname` | string | `"http://tigergraph"` | TigerGraph server URL. |
+| `restppPort` | string | `"9000"` | RESTPP port for TigerGraph API requests. |
+| `gsPort` | string | `"14240"` | GSQL port for TigerGraph admin operations. |
+| `getToken` | bool | `false` | Set to `true` if token authentication is enabled on TigerGraph. |
+| `graphname` | string | `""` | Default graph name. Usually left empty (selected at runtime). |
+| `apiToken` | string | `""` | Pre-generated API token. If set, token-based auth is used instead of username/password. |
+| `default_timeout` | int | `300` | Default query timeout in seconds. |
+| `default_mem_threshold` | int | `5000` | Memory threshold (MB) for query execution. |
+| `default_thread_limit` | int | `8` | Max threads for query execution. |
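Putting the defaults from the table together, a complete `db_config` section might look like the sketch below. The `username`/`password` values are the defaults added in the diff above; adjust them (and `hostname`) for your deployment:

```json
{
  "db_config": {
    "hostname": "http://tigergraph",
    "restppPort": "9000",
    "gsPort": "14240",
    "username": "tigergraph",
    "password": "tigergraph",
    "getToken": false,
    "graphname": "",
    "apiToken": "",
    "default_timeout": 300,
    "default_mem_threshold": 5000,
    "default_thread_limit": 8
  }
}
```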
 ### GraphRAG configuration
 Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
-`reuse_embedding` to `true` will skip re-generating the embedding if it already exists.
-`ecc` and `chat_history_api` are the addresses of internal components of GraphRAG. If you use the Docker Compose file as is, you don’t need to change them.
+| `reuse_embedding` | bool | `true` | Skip re-generating the embedding if it already exists on a vertex. |
+| `ecc` | string | `"http://graphrag-ecc:8001"` | URL of the Entity-Context-Community (ECC) service. No change needed when using the provided Docker Compose file. |
+| `chat_history_api` | string | `"http://chat-history:8002"` | URL of the chat history service. No change needed when using the provided Docker Compose file. |
+| `entity_extraction_switch` | bool | same as `doc_process_switch` | Enable/disable entity extraction during knowledge graph build. |
+| `community_detection_switch` | bool | same as `entity_extraction_switch` | Enable/disable community detection during knowledge graph build. |
+| `load_batch_size` | int | `500` | Batch size for upserting vertices during document loading. |
+| `upsert_delay` | int | `0` | Delay in seconds between upsert batches. |
+| `tg_concurrency` | int | `10` | Max concurrent requests to TigerGraph during processing. |
+| `process_interval_seconds` | int | `300` | Interval for background consistency processing (when enabled). |
+| `cleanup_interval_seconds` | int | `300` | Interval for background cleanup (when enabled). |
+| `checker_batch_size` | int | `100` | Number of vertices to scan per batch during background consistency checking. (Also accepts legacy key `batch_size`.) |
+| `enable_consistency_checker` | bool | `false` | Enable the background consistency checker. |
+| `graph_names` | list | `[]` | Graphs to monitor when the consistency checker is enabled. |
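Based on the defaults above, the GraphRAG section might look like the sketch below. The exact top-level key name (`graphrag` here) is an assumption for illustration; check the shipped example config for the real key:

```json
{
  "graphrag": {
    "reuse_embedding": true,
    "ecc": "http://graphrag-ecc:8001",
    "chat_history_api": "http://chat-history:8002",
    "load_batch_size": 500,
    "upsert_delay": 0,
    "tg_concurrency": 10,
    "enable_consistency_checker": false,
    "graph_names": []
  }
}
```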
 ### Chat configuration
 Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
@@ -464,6 +510,51 @@ Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
 ### LLM provider configuration
 In the `llm_config` section of the `configs/server_config.json` file, copy the JSON config template from below for your LLM provider, and fill out the appropriate fields. Only one provider is needed.
+
+#### Supported parameters
+
+**Top-level `llm_config` parameters:**
+
+| Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `authentication_configuration` | object | — | Shared authentication credentials. Merged into all service configs (service-specific values take precedence). |
+| `token_limit` | int | — | Shared token limit propagated to `completion_service` and `embedding_service` if they don't define their own. Use `0` or negative for unlimited. |
+
+**`completion_service` parameters:**
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `llm_model` | string | **Yes** | — | Model name for ECC/GraphRAG tasks (e.g., `gpt-4.1-mini`). |
+| `chat_model` | string | No | same as `llm_model` | Model name for the chatbot. If not set, falls back to `llm_model`. Allows using a different (e.g., cheaper/faster) model for chat vs. ingestion. |
+| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials (overrides top-level). |
+| `model_kwargs` | object | No | `{}` | Additional keyword arguments passed to the LLM (e.g., `{"temperature": 0}`). |
+| `prompt_path` | string | No | `"./common/prompts/openai_gpt4/"` | Path to prompt template files. |
+| `base_url` | string | No | — | Custom API base URL (for self-hosted or proxy endpoints). |
+| `token_limit` | int | No | inherited from top-level | Max token limit for this service. |
+
+**`embedding_service` parameters:**
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `model_name` | string | **Yes** | — | Embedding model name (e.g., `text-embedding-3-small`). |
+| `dimensions` | int | No | `1536` | Embedding vector dimensions. |
+| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials (overrides top-level). |
+
+**`multimodal_service` parameters (optional):**
+
+Used for vision/image description tasks during document ingestion. If not configured, a default vision model is auto-derived from the completion service provider.
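The `chat_model` fallback rule from the commit (`get_completion_config` always returns a chat model, defaulting to `llm_model`) can be sketched as below. The function and key names follow the tables above, but this is an illustrative sketch, not the project's actual implementation:

```python
def get_completion_config(llm_config: dict) -> dict:
    """Return the completion service config with the chat model resolved.

    Sketch of the fallback rule: `chat_model` defaults to `llm_model`
    when it is not set, so the chatbot never silently uses the wrong
    model and older configs that only define `llm_model` keep working.
    """
    service = dict(llm_config.get("completion_service", {}))
    # Prefer an explicit chat_model; otherwise fall back to llm_model.
    service["chat_model"] = service.get("chat_model") or service.get("llm_model")
    return service
```

With only `llm_model` set, the returned config carries that value in `chat_model`; an explicit `chat_model` wins when both are present.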