
GML-2015 Add UI page for configurations, role-based access #28

Merged
chengbiao-jin merged 16 commits into main from ui_page_server_config_clean
Apr 8, 2026
Conversation

@prinskumar-tigergraph
Contributor

@prinskumar-tigergraph commented Mar 11, 2026

User description

Updated code for:
- Pages for configuration
- Role-based access to set up configurations
- Graph-level access to change configuration


PR Type

Enhancement, Bug fix


Description

  • Add setup UI for configs and prompts

  • Enforce role-based access across endpoints

  • Support graph-specific LLM/prompt configurations

  • Fix VertexAI imports and model naming
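The role-based access bullet above reduces to a small predicate over the roles named in the diagram below (superuser, globaldesigner, graph admin). A minimal sketch; the helper and its signature are illustrative, not the PR's actual code:

```python
# Hypothetical role-gate sketch; role names follow the PR's roleGuard node
# (superuser / globaldesigner / graph admin), everything else is illustrative.
GLOBAL_ADMIN_ROLES = {"superuser", "globaldesigner"}

def can_edit_config(user_roles, graph_admin_of, graphname=None):
    """Global admins may edit any config; graph admins only their own graph's."""
    if GLOBAL_ADMIN_ROLES & set(user_roles):
        return True
    return graphname is not None and graphname in graph_admin_of
```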


Diagram Walkthrough

flowchart LR
  uiSetup["Setup UI pages (KG Admin, Server Config, Prompts)"]
  apiRoutes["New UI API routes (/config, /prompts, /roles)"]
  roleGuard["Role checks (superuser/globaldesigner/graph admin)"]
  cfgReload["Config reload (LLM/DB/GraphRAG)"]
  perGraph["Per-graph completion config + prompts"]
  eccJobs["ECC jobs use fresh config"]
  vertexFix["VertexAI import/model fixes"]

  uiSetup -- "calls" --> apiRoutes
  apiRoutes -- "protected by" --> roleGuard
  apiRoutes -- "persist + sanitize" --> perGraph
  apiRoutes -- "trigger" --> cfgReload
  cfgReload -- "applies to" --> eccJobs
  perGraph -- "used by" --> eccJobs
  vertexFix -- "stabilizes" --> eccJobs

File Walkthrough

Relevant files
Enhancement
16 files
ui.py
Add config/prompts APIs with role-based access                     
+904/-8 
config.py
Graph-specific config resolution and reload utilities       
+277/-25
agent.py
Use per-graph completion config for agents                             
+22/-23 
main.py
Reload configs at job start and consistency routes             
+47/-0   
ecc_util.py
LLM provider selection via per-graph config                           
+23/-20 
community_summarizer.py
Load community summary prompt from configured path             
+32/-13 
workers.py
Pass graph to LLM provider for summarization                         
+1/-1     
KGAdmin.tsx
KG admin page for init, ingest, refresh                                   
+664/-0 
GraphRAGConfig.tsx
UI to edit GraphRAG processing settings                                   
+408/-0 
ModeToggle.tsx
Show Setup link based on resolved roles                                   
+73/-9   
main.tsx
Router for setup sections and redirects                                   
+43/-3   
IngestGraph.tsx
Ingestion UI for local and cloud sources                                 
+1557/-0
LLMConfig.tsx
UI to edit and test LLM services                                                 
+1344/-0
GraphDBConfig.tsx
UI to edit and test DB connection                                               
+437/-0 
CustomizePrompts.tsx
UI to view and save prompt files                                                 
+298/-0 
SetupLayout.tsx
Shared layout for setup navigation                                             
+260/-0 
Bug fix
2 files
embedding_services.py
Fix VertexAI embeddings import and parameters                       
+3/-2     
google_vertexai_service.py
Switch to official VertexAI client and args                           
+3/-2     
Configuration changes
1 file
nginx.conf
Add reverse proxy rules for /setup routes                               
+10/-0   
Dependencies
1 file
requirements.txt
Add langchain-google-vertexai dependency                                 
+1/-0     
Additional files
6 files
community_summarization.txt +11/-0   
community_summarization.txt +11/-0   
community_summarization.txt +11/-0   
community_summarization.txt +11/-0   
community_summarization.txt +11/-0   
Bot.tsx +1/-0     

@tg-pr-agent

tg-pr-agent bot commented Mar 11, 2026

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 5 🔵🔵🔵🔵🔵
🧪 No relevant tests
🔒 Security concerns

Path traversal:
In graphrag/app/routers/ui.py, user-provided graphname is used directly to construct filesystem paths (e.g., common/prompts/{graphname}, configs/{graphname}/server_config.json) without sanitization. A crafted graphname like ../../some/dir could lead to writing outside intended directories. Sanitize/validate graphname against a strict allowlist and normalize paths before file operations.

Sensitive information handling: reload_llm_config may erase authentication_configuration when saving UI-provided configs (which are intentionally stripped of secrets by get_config). This can break services and may cause admins to re-enter secrets unnecessarily. Merge configs to preserve secrets when not provided.

Additionally, several admin-only endpoints return raw exception messages in responses. While gated by role checks, consider standardizing error messages to avoid leaking internal details.

⚡ Recommended focus areas for review

Path Traversal Risk

Unsanitized graphname is interpolated into filesystem paths for prompts/configs, enabling potential directory traversal and arbitrary file overwrite/creation. Sanitize graphname (e.g., allowlist of [A-Za-z0-9_-]) before using it in paths.

if graphname:
    # Create graph-specific prompt dir, seed from default (first time only)
    graph_prompt_dir = f"common/prompts/{graphname}"
    os.makedirs(graph_prompt_dir, exist_ok=True)
    if os.path.exists(default_prompt_path):
        for fname in os.listdir(default_prompt_path):
            src = os.path.join(default_prompt_path, fname)
            dst = os.path.join(graph_prompt_dir, fname)
            if os.path.isfile(src) and not os.path.exists(dst):
                shutil.copy2(src, dst)

    # Create or update configs/{graphname}/server_config.json
    graph_config_dir = f"configs/{graphname}"
    os.makedirs(graph_config_dir, exist_ok=True)
    graph_config_path = os.path.join(graph_config_dir, "server_config.json")
    if not os.path.exists(graph_config_path):
        with open(SERVER_CONFIG, "r") as f:
            graph_server_config = json.load(f)
    else:
        with open(graph_config_path, "r") as f:
            graph_server_config = json.load(f)
    graph_server_config["llm_config"]["completion_service"]["prompt_path"] = f"./{graph_prompt_dir}/"
    with open(graph_config_path, "w") as f:
        json.dump(graph_server_config, f, indent=2)

    prompt_path = graph_prompt_dir
else:
    prompt_path = default_prompt_path
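The allowlist the review suggests is a one-line check applied before any path is constructed from the graph name. A minimal sketch, assuming the `[A-Za-z0-9_-]` character set from the review; `validate_graphname` is a hypothetical helper, not code from the PR:

```python
import re

# Hypothetical helper; the allowlist is the one the review suggests.
# Reject anything else before building filesystem paths from graphname.
_GRAPHNAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def validate_graphname(graphname):
    if not _GRAPHNAME_RE.fullmatch(graphname):
        raise ValueError(f"invalid graph name: {graphname!r}")
    return graphname
```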
Secret Wipe on Save

reload_llm_config overwrites the on-disk llm_config with the incoming payload without preserving authentication_configuration. Since get_config strips secrets before sending to UI, saving back may erase provider credentials. Consider merging to retain existing secrets when not explicitly provided.

# Preserve existing API keys if not provided in new config
existing_llm_config = server_config.get("llm_config", {})

# Directly save the new LLM config without preserving old API keys
server_config["llm_config"] = new_llm_config

with open(SERVER_CONFIG, "w") as f:
    json.dump(server_config, f, indent=2)
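The merge the reviewer recommends can be sketched as follows. Key names follow the server_config.json layout quoted above, but the helper itself is hypothetical:

```python
import copy

def merge_preserving_secrets(existing, incoming):
    """Keep on-disk credentials when the UI payload omits or blanks them."""
    merged = copy.deepcopy(incoming)
    for service, cfg in existing.items():
        if not isinstance(cfg, dict):
            continue
        auth = cfg.get("authentication_configuration")
        # Only fall back to the stored secret when the payload has none.
        if auth and not merged.get(service, {}).get("authentication_configuration"):
            merged.setdefault(service, {})["authentication_configuration"] = auth
    return merged
```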
Per-Graph Prompt Not Respected

The community summarization prompt is loaded once at import using the default completion_config, ignoring per-graph overrides and runtime config reloads. This likely causes mismatched prompts when graph-specific prompt paths are configured.

# Load prompt from file
def load_community_prompt():
    prompt_path = completion_config.get("prompt_path", "./common/prompts/openai_gpt4/")
    if prompt_path.startswith("./"):
        prompt_path = prompt_path[2:]
    prompt_path = prompt_path.rstrip("/")

    prompt_file = os.path.join(prompt_path, "community_summarization.txt")
    if not os.path.exists(prompt_file):
        error_msg = f"Community summarization prompt file not found: {prompt_file}. Please ensure the file exists in the configured prompt path."
        logger.error(error_msg)
        raise FileNotFoundError(error_msg)

    try:
        with open(prompt_file, "r", encoding="utf-8") as f:
            content = f.read()
            logger.info(f"Successfully loaded community summarization prompt from: {prompt_file}")
            return content
    except Exception as e:
        error_msg = f"Failed to read community summarization prompt from {prompt_file}: {str(e)}"
        logger.error(error_msg)
        raise Exception(error_msg)


# src: https://github.com/microsoft/graphrag/blob/main/graphrag/index/graph/extractors/summarize/prompts.py
SUMMARIZE_PROMPT = PromptTemplate.from_template(load_community_prompt())
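One fix (and what a later commit in this PR describes as loading the prompt "at call time instead of import time") is to resolve the path inside the call rather than at module import. A minimal sketch; passing the config in explicitly is an assumption for illustration:

```python
import os

def load_community_prompt(completion_config):
    """Resolve prompt_path at call time so per-graph overrides and
    runtime config reloads take effect, unlike an import-time load."""
    prompt_path = completion_config.get("prompt_path", "./common/prompts/openai_gpt4/")
    if prompt_path.startswith("./"):
        prompt_path = prompt_path[2:]
    prompt_file = os.path.join(prompt_path.rstrip("/"), "community_summarization.txt")
    with open(prompt_file, "r", encoding="utf-8") as f:
        return f.read()
```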

@chengbiao-jin force-pushed the ui_page_server_config_clean branch from 5359610 to 4972093 on April 3, 2026 20:40
…UI for top_k/num_hops

- Fix chatbot agent using wrong model (llm_model instead of chat_model)
- Ensure get_completion_config always returns chat_model with llm_model fallback
- Restore startup validation for llm_service and llm_model
- Add _config_file_lock to prevent concurrent config file overwrites
- Replace clear()+update() with atomic dict updates in reload functions
- Load community summarization prompt at call time instead of import time
- Add top_k and num_hops fields to GraphRAG config UI
- Fix ECC URL defaults to match docker-compose service names
- Document all supported config parameters in README
- Bump TigerGraph version to 4.2.2
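The get_completion_config fallback described above can be sketched like this; the real function's signature and per-graph lookup may differ:

```python
def get_completion_config(llm_config, graphname=None, per_graph=None):
    """Resolve the completion config, applying any per-graph override and
    guaranteeing chat_model is set, falling back to llm_model."""
    base = dict(llm_config.get("completion_service", {}))
    if graphname and per_graph and graphname in per_graph:
        base.update(per_graph[graphname])
    base.setdefault("chat_model", base.get("llm_model"))
    return base
```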
…nt, and UI improvements

- Add chat_service to llm_config for per-graph chatbot LLM provider override
  with inheritance from completion_service
- Mask secrets in GET responses instead of stripping; backend substitutes
  masked values on save/test so credentials never reach the frontend
- Migrate session data from localStorage to sessionStorage; theme stays
  in localStorage
- Add idle timeout (1 hour) that clears session on inactivity
- Wire up default_mem_threshold and default_thread_limit in
  TigerGraphConnectionProxy
- Add GraphRAG config UI fields: num_seen_min, community_level, doc_only,
  and advanced ingestion settings
- Add apiToken auth option to GraphDB config with conditional UI
- Add ConfigScopeToggle graphOnly prop for graph admin role restriction
- Fix SPA routing: serve -s in Dockerfile, catch-all route to login page
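The secret-masking scheme in this commit (mask in GET responses, substitute the stored value on save/test) can be sketched with a sentinel value; the names are illustrative:

```python
# Hypothetical sentinel; the PR's actual mask value may differ.
MASK = "__MASKED__"

def mask_secret(value):
    """What a GET response returns in place of a stored credential."""
    return MASK if value else value

def resolve_secret(incoming, stored):
    """On save/test, swap the sentinel back for the stored credential,
    so real secrets never round-trip through the frontend."""
    return stored if incoming == MASK else incoming
```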
…ing, and token_limit cleanup

- Add RequireAuth wrapper to all routes except login page
- Fix SPA routing with serve -s and catch-all route to login
- Replace idle timer signalActivity with pause/resume for long-running
  operations (ingest, rebuild)
- Fix rebuild dialog button flickering between status labels; stop
  polling once rebuild completes and preserve final status message
- Remove incorrect token_limit-to-max_tokens pass-through in Google
  GenAI and Azure OpenAI services (token_limit is for input truncation)
- Update doc_only description and token_limit README docs
@chengbiao-jin force-pushed the ui_page_server_config_clean branch from 4972093 to be6759a on April 7, 2026 08:15
…nd UI/agent fixes

- Migrate all LLM call sites (15+) to invoke_with_parser/ainvoke_with_parser for unified token usage tracking
- Add JSON parsing fallback (regex extraction) for LLMs that wrap output in preamble or code fences
- Add 3-tier JSON fallback to LLMEntityRelationshipExtractor
- Migrate community_summarizer from with_structured_output to ainvoke_with_parser
- Lift scoring logic from CommunityRetriever into BaseRetriever for all retrievers
- Validate Cypher/GSQL output before execution to avoid running invalid queries
- Detect greeting inputs early in agent router to skip unnecessary query generation
- Fix Login.tsx to show proper error messages instead of always "Invalid credentials"
- Skip typewriter animation for chat history messages
- Fix tiktoken warning for Bedrock model names
- Add unit tests for invoke_with_parser and JSON parsing fallback
- Update README config parameter descriptions for clarity and consistency
- Sanitize graph name to prevent path traversal
- Fix community summarizer to respect per-graph prompt overrides
- Clear stale ingestion status when uploading new files
- Reset file input after upload so files can be reselected
- Disable delete and upload buttons during file processing
- Style file picker as a visible button with selection count
- Validate Cypher/GSQL output before executing queries
- Detect greetings early in agent router to skip query generation
- Fix graph name validation error showing [object Object]
- Add client-side graph name format validation
- Update README config parameter descriptions

GML-2066, GML-2065, GML-2051, GML-2050, GML-2047, GML-2015, GML-2016, GML-2017, GML-2048
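The JSON parsing fallback mentioned above (regex extraction for output wrapped in preamble or code fences) can be sketched as a three-tier parse. `extract_json` is a hypothetical standalone version; the PR wires this behavior into invoke_with_parser:

```python
import json
import re

def extract_json(text):
    # Tier 1: the output is already clean JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Tier 2: the JSON is inside a ``` or ```json code fence.
    fence = re.search(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    # Tier 3: grab the first {...} or [...] span amid preamble text.
    span = re.search(r"(\{.*\}|\[.*\])", text, re.DOTALL)
    if span:
        return json.loads(span.group(1))
    raise ValueError("no JSON found in model output")
```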
@chengbiao-jin merged commit bf168ee into main Apr 8, 2026
1 check failed
@chengbiao-jin deleted the ui_page_server_config_clean branch April 8, 2026 01:24