
Commit bf168ee

GML-2015 Add UI page for configurations, Role base access (#28)
### **User description**

Updated code for:
- Pages for Configuration
- Role-based access to setup configurations
- Graph-level access to change configuration

___

### **PR Type**
Enhancement, Bug fix

___

### **Description**

- Add setup UI for configs and prompts
- Enforce role-based access across endpoints
- Support graph-specific LLM/prompt configurations
- Fix VertexAI imports and model naming

GML-2066, GML-2065, GML-2051, GML-2050, GML-2047, GML-2015, GML-2016, GML-2017, GML-2048

___

### Diagram Walkthrough

```mermaid
flowchart LR
    uiSetup["Setup UI pages (KG Admin, Server Config, Prompts)"]
    apiRoutes["New UI API routes (/config, /prompts, /roles)"]
    roleGuard["Role checks (superuser/globaldesigner/graph admin)"]
    cfgReload["Config reload (LLM/DB/GraphRAG)"]
    perGraph["Per-graph completion config + prompts"]
    eccJobs["ECC jobs use fresh config"]
    vertexFix["VertexAI import/model fixes"]
    uiSetup -- "calls" --> apiRoutes
    apiRoutes -- "protected by" --> roleGuard
    apiRoutes -- "persist + sanitize" --> perGraph
    apiRoutes -- "trigger" --> cfgReload
    cfgReload -- "applies to" --> eccJobs
    perGraph -- "used by" --> eccJobs
    vertexFix -- "stabilizes" --> eccJobs
```

### File Walkthrough

**Enhancement (16 files)**

| File | Change | Lines |
| --- | --- | --- |
| ui.py | Add config/prompts APIs with role-based access | +904/-8 |
| config.py | Graph-specific config resolution and reload utilities | +277/-25 |
| agent.py | Use per-graph completion config for agents | +22/-23 |
| main.py | Reload configs at job start and consistency routes | +47/-0 |
| ecc_util.py | LLM provider selection via per-graph config | +23/-20 |
| community_summarizer.py | Load community summary prompt from configured path | +32/-13 |
| workers.py | Pass graph to LLM provider for summarization | +1/-1 |
| KGAdmin.tsx | KG admin page for init, ingest, refresh | +664/-0 |
| GraphRAGConfig.tsx | UI to edit GraphRAG processing settings | +408/-0 |
| ModeToggle.tsx | Show Setup link based on resolved roles | +73/-9 |
| main.tsx | Router for setup sections and redirects | +43/-3 |
| IngestGraph.tsx | Ingestion UI for local and cloud sources | +1557/-0 |
| LLMConfig.tsx | UI to edit and test LLM services | +1344/-0 |
| GraphDBConfig.tsx | UI to edit and test DB connection | +437/-0 |
| CustomizePrompts.tsx | UI to view and save prompt files | +298/-0 |
| SetupLayout.tsx | Shared layout for setup navigation | +260/-0 |

**Bug fix (2 files)**

| File | Change | Lines |
| --- | --- | --- |
| embedding_services.py | Fix VertexAI embeddings import and parameters | +3/-2 |
| google_vertexai_service.py | Switch to official VertexAI client and args | +3/-2 |

**Configuration changes (1 file)**

| File | Change | Lines |
| --- | --- | --- |
| nginx.conf | Add reverse proxy rules for /setup routes | +10/-0 |

**Dependencies (1 file)**

| File | Change | Lines |
| --- | --- | --- |
| requirements.txt | Add langchain-google-vertexai dependency | +1/-0 |

**Additional files (6 files)**

| File | Lines |
| --- | --- |
| community_summarization.txt | +11/-0 |
| community_summarization.txt | +11/-0 |
| community_summarization.txt | +11/-0 |
| community_summarization.txt | +11/-0 |
| community_summarization.txt | +11/-0 |
| Bot.tsx | +1/-0 |

___
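The role model described above (superuser / globaldesigner / graph admin) can be sketched as a small guard function. This is a hypothetical illustration: the names `can_edit_config` and `GLOBAL_ROLES`, and the exact rules, are assumptions, not the PR's actual code.

```python
# Hypothetical sketch of the role gating described in this PR: setup and
# config endpoints are reachable only for superusers, global designers,
# or the admin of the specific graph. All names here are illustrative.

GLOBAL_ROLES = {"superuser", "globaldesigner"}

def can_edit_config(user_roles, graph_admin_of, graph=None):
    """Return True if the user may edit configuration.

    user_roles: set of role names held by the user.
    graph_admin_of: set of graph names the user administers.
    graph: target graph, or None for server-wide configuration.
    """
    if user_roles & GLOBAL_ROLES:
        return True  # superuser/globaldesigner may edit everything
    # graph admins may only edit the config of graphs they administer
    return graph is not None and graph in graph_admin_of
```

For example, under these assumed rules `can_edit_config({"user"}, {"Sales"}, "Sales")` succeeds, while the same user is rejected for graph `"HR"` and for server-wide changes.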
2 parents 13d868a + cd2c578 commit bf168ee

86 files changed

Lines changed: 9301 additions & 1123 deletions


README.md

Lines changed: 150 additions & 11 deletions
````diff
@@ -31,8 +31,10 @@
 - [More Detailed Configurations](#more-detailed-configurations)
 - [DB configuration](#db-configuration)
 - [GraphRAG configuration](#graphrag-configuration)
-- [Chat configuration](#chat-configuration)
+- [Chat History Configuration](#chat-history-configuration)
 - [LLM provider configuration](#llm-provider-configuration)
+- [Supported parameters](#supported-parameters)
+- [Provider examples](#provider-examples)
 - [OpenAI](#openai)
 - [Google GenAI](#google-genai)
 - [GCP VertexAI](#gcp-vertexai)
````
````diff
@@ -53,7 +55,7 @@
 ---
 
 ## Releases
-* **2/28/2025**: GraphRAG v1.2.0 released. Added Admin UI for graph initialization, document ingestion, and knowledge graph rebuild, along with many other improvements and bug fixes. See [release notes](https://github.com/tigergraph/graphrag/releases/tag/v1.2.0) for details.
+* **2/28/2026**: GraphRAG v1.2.0 released. Added Admin UI for graph initialization, document ingestion, and knowledge graph rebuild, along with many other improvements and bug fixes. See [release notes](https://github.com/tigergraph/graphrag/releases/tag/v1.2.0) for details.
 * **9/22/2025**: GraphRAG is available now officially v1.1 (v1.1.0). AWS Bedrock support is completed with BDA integration for multimodal document ingestion. See [release notes](https://github.com/tigergraph/graphrag/releases/tag/v1.1.0) for details.
 * **6/18/2025**: GraphRAG is available now officially v1.0 (v1.0.0). TigerGraph database is the only graph and vector storage supported.
 Please see [Release Notes](https://docs.tigergraph.com/tg-graphrag/current/release-notes/) for details.
````
````diff
@@ -103,7 +105,7 @@ Organizing the data as a knowledge graph allows a chatbot to access accurate, fa
 ### Quick Start
 
 #### Use TigerGraph Docker-Based Instance
-Set your LLM Provider (supported `openai` or `gemini`) api key as environment varabiel LLM_API_KEY and use the following command for a one-step quick deployment with TigerGraph Community Edition and default configurations:
+Set your LLM Provider (supported `openai` or `gemini`) api key as environment variable LLM_API_KEY and use the following command for a one-step quick deployment with TigerGraph Community Edition and default configurations:
 ```
 curl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag.sh | bash
 ```
````
````diff
@@ -198,10 +200,10 @@ Run command `docker compose down` and wait for all the service containers to stop
 
 If you prefer to start a TigerGraph Community Edition instance without a license key, please make sure the container can be accessed from the GraphRAG containers by adding `--network graphrag_default`:
 ```
-docker run -d -p 14240:14240 --name tigergraph --ulimit nofile=1000000:1000000 --init --network graphrag_default -t tigergraph/community:4.2.1
+docker run -d -p 14240:14240 --name tigergraph --ulimit nofile=1000000:1000000 --init --network graphrag_default -t tigergraph/community:4.2.2
 ```
 
-> Use **tigergraph/tigergraph:4.2.1** if Enterprise Edition is preferred.
+> Use **tigergraph/tigergraph:4.2.2** if Enterprise Edition is preferred.
 > Setting up **DNS** or `/etc/hosts` properly is an alternative solution to ensure containers can connect to each other.
 > Or modify `hostname` in the `db_config` section of `configs/server_config.json` and replace `http://tigergraph` with your tigergraph container IP address, e.g., `http://172.19.0.2`.
````
````diff
@@ -419,6 +421,8 @@ Copy the below into `configs/server_config.json` and edit the `hostname` and `ge
     "hostname": "http://tigergraph",
     "restppPort": "9000",
     "gsPort": "14240",
+    "username": "tigergraph",
+    "password": "tigergraph",
     "getToken": false,
     "default_timeout": 300,
     "default_mem_threshold": 5000,
````
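The `db_config` auth fields described in this hunk's parameter table interact as follows: a non-empty `apiToken` takes precedence; otherwise `username`/`password` are used, with `getToken` controlling whether they are exchanged for a token. A hedged sketch — the helper `resolve_auth` and the exact precedence are assumptions based on the table's descriptions, not the actual client code:

```python
# Illustrative sketch (not the actual client code) of how the db_config
# auth fields combine: a pre-generated apiToken wins; otherwise
# username/password are used, and getToken=True means the credentials
# are exchanged for a token first. resolve_auth is a hypothetical helper.

def resolve_auth(db_config):
    if db_config.get("apiToken"):
        # Pre-generated token takes precedence over username/password
        return {"mode": "token", "token": db_config["apiToken"]}
    creds = {
        "mode": "basic",
        "username": db_config.get("username", "tigergraph"),
        "password": db_config.get("password", "tigergraph"),
    }
    if db_config.get("getToken"):
        # Token authentication enabled: exchange credentials for a token
        creds["mode"] = "basic+token"
    return creds
```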
````diff
@@ -427,23 +431,65 @@ Copy the below into `configs/server_config.json` and edit the `hostname` and `ge
   }
 }
 ```
 
+| Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `hostname` | string | `"http://tigergraph"` | TigerGraph server URL. |
+| `restppPort` | string | `"9000"` | RESTPP port for TigerGraph API requests. |
+| `gsPort` | string | `"14240"` | GSQL port for TigerGraph admin operations. |
+| `username` | string | `"tigergraph"` | TigerGraph database username. |
+| `password` | string | `"tigergraph"` | TigerGraph database password. |
+| `getToken` | bool | `false` | Set to `true` if token authentication is enabled on TigerGraph. |
+| `graphname` | string | `""` | Default graph name. Usually left empty (selected at runtime). |
+| `apiToken` | string | `""` | Pre-generated API token. If set, token-based auth is used instead of username/password. |
+| `default_timeout` | int | `300` | Default query timeout in seconds. |
+| `default_mem_threshold` | int | `5000` | Memory threshold (MB) for query execution. |
+| `default_thread_limit` | int | `8` | Max threads for query execution. |
+
 ### GraphRAG configuration
 Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
 
-`reuse_embedding` to `true` will skip re-generating the embedding if it already exists.
-`ecc` and `chat_history_api` are the addresses of internal components of GraphRAG.If you use the Docker Compose file as is, you don’t need to change them.
-
 ```json
 {
   "graphrag_config": {
     "reuse_embedding": false,
-    "ecc": "http://eventual-consistency-service:8001",
-    "chat_history_api": "http://chat-history:8002"
+    "ecc": "http://graphrag-ecc:8001",
+    "chat_history_api": "http://chat-history:8002",
+    "chunker": "semantic",
+    "extractor": "llm",
+    "top_k": 5,
+    "num_hops": 2
   }
 }
 ```
 
-### Chat configuration
+| Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `reuse_embedding` | bool | `true` | Reuse existing embeddings instead of regenerating them. |
+| `ecc` | string | `"http://graphrag-ecc:8001"` | URL of the knowledge graph build service. No change needed when using the provided Docker Compose file. |
+| `chat_history_api` | string | `"http://chat-history:8002"` | URL of the chat history service. No change needed when using the provided Docker Compose file. |
+| `chunker` | string | `"semantic"` | Default document chunker. Options: `semantic`, `character`, `regex`, `markdown`, `html`, `recursive`. |
+| `extractor` | string | `"llm"` | Entity extraction method. Options: `llm`, `graphrag`. |
+| `chunker_config` | object | `{}` | Chunker-specific settings. For `character`/`markdown`/`recursive`: `chunk_size`, `overlap_size`. For `semantic`: `method`, `threshold`. For `regex`: `pattern`. |
+| `top_k` | int | `5` | Number of top similar results to retrieve during search. |
+| `num_hops` | int | `2` | Number of graph hops to traverse when expanding retrieved results. |
+| `num_seen_min` | int | `2` | Minimum occurrence threshold for a node to be included in search results. |
+| `community_level` | int | `2` | Community hierarchy level used for community search. |
+| `chunk_only` | bool | `true` | If true, hybrid search only retrieves document chunks (not entities). |
+| `doc_only` | bool | `false` | If true, hybrid search retrieves whole documents instead of chunks. |
+| `with_chunk` | bool | `true` | If true, community search also includes document chunks in results. |
+| `doc_process_switch` | bool | `true` | Enable/disable document processing during knowledge graph build. |
+| `entity_extraction_switch` | bool | same as `doc_process_switch` | Enable/disable entity extraction during knowledge graph build. |
+| `community_detection_switch` | bool | same as `entity_extraction_switch` | Enable/disable community detection during knowledge graph build. |
+| `load_batch_size` | int | `500` | Batch size for document loading. |
+| `upsert_delay` | int | `0` | Delay in seconds between loading batches. |
+| `default_concurrency` | int | `10` | Base concurrency level for parallel processing. Configurable per graph. |
+| `process_interval_seconds` | int | `300` | Interval (seconds) for background consistency processing. |
+| `cleanup_interval_seconds` | int | `300` | Interval (seconds) for background cleanup. |
+| `checker_batch_size` | int | `100` | Batch size for background consistency checking. |
+| `enable_consistency_checker` | bool | `false` | Enable the background consistency checker. |
+| `graph_names` | list | `[]` | Graphs to monitor when consistency checker is enabled. |
+
+### Chat History Configuration
 Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
 
 ```json
````
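Several of the GraphRAG parameters above (e.g., `default_concurrency`) are noted as "configurable per graph", and this PR adds graph-specific config resolution. A minimal sketch under one assumed rule — per-graph overrides shallow-merged over the global `graphrag_config` defaults; `graphrag_config_for` is illustrative, not the actual `config.py` code:

```python
# Sketch of per-graph configuration resolution, assuming a simple rule:
# graph-specific overrides are shallow-merged over the global
# graphrag_config defaults. Illustrative only; the real config.py may
# resolve settings differently.

GLOBAL_GRAPHRAG = {
    "reuse_embedding": False,
    "chunker": "semantic",
    "extractor": "llm",
    "top_k": 5,
    "num_hops": 2,
    "default_concurrency": 10,
}

def graphrag_config_for(graph, per_graph_overrides):
    """Return the effective config for one graph."""
    cfg = dict(GLOBAL_GRAPHRAG)                     # global defaults
    cfg.update(per_graph_overrides.get(graph, {}))  # per-graph overrides win
    return cfg
```

With this rule, a graph with `{"top_k": 10}` stored as its override sees `top_k = 10` while every unset key keeps the global default.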
````diff
@@ -464,6 +510,99 @@ Copy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.
 ### LLM provider configuration
 In the `llm_config` section of the `configs/server_config.json` file, copy the JSON config template below for your LLM provider and fill out the appropriate fields. Only one provider is needed.
 
+#### Structure overview
+
+```json
+{
+  "llm_config": {
+    "authentication_configuration": {
+      "OPENAI_API_KEY": "sk-..."
+    },
+    "completion_service": {
+      "llm_service": "openai",
+      "llm_model": "gpt-4.1-mini",
+      "model_kwargs": { "temperature": 0 },
+      "prompt_path": "./common/prompts/openai_gpt4/"
+    },
+    "embedding_service": {
+      "embedding_model_service": "openai",
+      "model_name": "text-embedding-3-small"
+    },
+    "chat_service": {
+      "llm_model": "gpt-4.1"
+    },
+    "multimodal_service": {
+      "llm_service": "openai",
+      "llm_model": "gpt-4o"
+    }
+  }
+}
+```
+
+- `authentication_configuration`: Shared credentials for all services. Service-level keys take precedence over top-level keys.
+- `completion_service` **(required)**: LLM for knowledge graph building and query generation.
+- `embedding_service` **(required)**: Text embedding model for document indexing.
+- `chat_service` *(optional)*: Chatbot LLM override. Missing keys are inherited from `completion_service`. Configurable per graph.
+- `multimodal_service` *(optional)*: Vision/image model for document ingestion.
+
+#### Supported parameters
+
+**Top-level `llm_config` parameters:**
+
+| Parameter | Type | Default | Description |
+| --- | --- | --- | --- |
+| `authentication_configuration` | object | | Shared authentication credentials for all services. Service-level values take precedence. |
+| `token_limit` | int | | Maximum token count for retrieved context. Inherited by all services if not set at service level. `0` or omitted means unlimited. |
+
+**`completion_service` parameters:**
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `llm_service` | string | **Yes** | | LLM provider. Options: `openai`, `azure`, `vertexai`, `genai`, `bedrock`, `sagemaker`, `groq`, `ollama`, `huggingface`, `watsonx`. |
+| `llm_model` | string | **Yes** | | Model name for knowledge graph building and query generation (e.g., `gpt-4.1-mini`). |
+| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials. Overrides top-level values. |
+| `model_kwargs` | object | No | `{}` | Additional model parameters (e.g., `{"temperature": 0}`). |
+| `prompt_path` | string | No | `"./common/prompts/openai_gpt4/"` | Path to prompt template files. |
+| `base_url` | string | No | | Custom API endpoint URL. |
+| `token_limit` | int | No | inherited from top-level | Max token count for retrieved context sent to the LLM. `0` or omitted means unlimited. |
+
+**`embedding_service` parameters:**
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `embedding_model_service` | string | **Yes** | | Embedding provider. Options: `openai`, `azure`, `vertexai`, `genai`, `bedrock`, `ollama`. |
+| `model_name` | string | **Yes** | | Embedding model name (e.g., `text-embedding-3-small`). |
+| `dimensions` | int | No | `1536` | Embedding vector dimensions. |
+| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials. Overrides top-level values. |
+
+**`chat_service` parameters (optional):**
+
+Chatbot LLM override. If not configured, inherits from `completion_service`. Configurable per graph via the UI.
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `llm_service` | string | No | same as completion | LLM provider for the chatbot. |
+| `llm_model` | string | No | same as completion | Model name for the chatbot. |
+| `authentication_configuration` | object | No | inherited from completion | Auth credentials. Service-level values take precedence. |
+| `model_kwargs` | object | No | inherited from completion | Additional model parameters (e.g., `{"temperature": 0}`). |
+| `prompt_path` | string | No | inherited from completion | Path to prompt template files. |
+| `base_url` | string | No | inherited from completion | Custom API endpoint URL. |
+| `token_limit` | int | No | inherited from completion | Max token count for retrieved context sent to the chatbot LLM. `0` or omitted means unlimited. |
+
+**`multimodal_service` parameters (optional):**
+
+Vision model for image processing during document ingestion. If not configured, inherits from `completion_service` with a default vision model derived per provider.
+
+| Parameter | Type | Required | Default | Description |
+| --- | --- | --- | --- | --- |
+| `llm_service` | string | No | inherited from completion | Multimodal LLM provider. |
+| `llm_model` | string | No | auto-derived per provider | Vision model name (e.g., `gpt-4o`). |
+| `authentication_configuration` | object | No | inherited from completion | Service-specific auth credentials. Overrides top-level values. |
+| `model_kwargs` | object | No | inherited from completion | Additional model parameters. |
+| `prompt_path` | string | No | inherited from completion | Path to prompt template files. |
+
+#### Provider examples
+
 #### OpenAI
 In addition to the `OPENAI_API_KEY`, `llm_model` and `model_name` can be edited to match your specific configuration details.
 
````
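The inheritance rules documented above (`chat_service`/`multimodal_service` inherit any missing key from `completion_service`; service-level `authentication_configuration` overrides the top-level one) can be sketched as a dictionary merge. `resolve_service` is a hypothetical helper, not the project's API:

```python
# Sketch of the service-inheritance rules from the llm_config tables:
# chat_service / multimodal_service inherit any missing key from
# completion_service, and authentication_configuration merges with
# service-level values taking precedence over top-level ones.
# resolve_service is a hypothetical helper, not the project's API.

def resolve_service(llm_config, name):
    completion = llm_config.get("completion_service", {})
    service = llm_config.get(name, {})
    # Missing keys inherit from completion_service
    resolved = {**completion, **service}
    # Auth precedence: service-level > completion-level > top-level
    auth = {
        **llm_config.get("authentication_configuration", {}),
        **completion.get("authentication_configuration", {}),
        **service.get("authentication_configuration", {}),
    }
    resolved["authentication_configuration"] = auth
    return resolved
```

For instance, a `chat_service` that only sets `llm_model` would resolve with the `llm_service` and credentials of `completion_service`, matching the "same as completion" defaults in the table.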