GML-2015 Add UI page for configurations, Role-based access #28
chengbiao-jin merged 16 commits into main
Conversation
Role-based access for configuration
…mic writes for graph-specific configs
…oss container restarts
…odel and prompt_path logging

force-pushed from 5359610 to 4972093
…UI for top_k/num_hops
- Fix chatbot agent using wrong model (llm_model instead of chat_model)
- Ensure get_completion_config always returns chat_model with llm_model fallback
- Restore startup validation for llm_service and llm_model
- Add _config_file_lock to prevent concurrent config file overwrites
- Replace clear()+update() with atomic dict updates in reload functions
- Load community summarization prompt at call time instead of import time
- Add top_k and num_hops fields to GraphRAG config UI
- Fix ECC URL defaults to match docker-compose service names
- Document all supported config parameters in README
- Bump TigerGraph version to 4.2.2
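The lock-and-atomic-update fix in this commit can be sketched roughly as follows. Only `_config_file_lock` is named in the commit message; `llm_config`, `reload_llm_config`, and the JSON file layout are hypothetical stand-ins for the project's real config module.

```python
import json
import threading

# Lock name taken from the commit message; everything else here is a
# hypothetical stand-in for the project's real config module.
_config_file_lock = threading.Lock()

llm_config = {"llm_service": "openai", "llm_model": "gpt-4o"}

def reload_llm_config(path):
    """Reload config without a visible 'empty' window.

    clear()+update() leaves a moment where concurrent readers see an
    empty dict; instead, build the fresh dict first, merge it in with a
    single update(), then drop keys that no longer exist.
    """
    with _config_file_lock:
        with open(path) as f:
            fresh = json.load(f)
        stale_keys = set(llm_config) - set(fresh)
        llm_config.update(fresh)       # one-shot merge of new values
        for key in stale_keys:
            llm_config.pop(key, None)  # remove keys absent from the file
```

Keeping the same dict object (rather than rebinding the name) matters if other modules hold a reference to `llm_config`.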
…nt, and UI improvements
- Add chat_service to llm_config for per-graph chatbot LLM provider override with inheritance from completion_service
- Mask secrets in GET responses instead of stripping; backend substitutes masked values on save/test so credentials never reach the frontend
- Migrate session data from localStorage to sessionStorage; theme stays in localStorage
- Add idle timeout (1 hour) that clears session on inactivity
- Wire up default_mem_threshold and default_thread_limit in TigerGraphConnectionProxy
- Add GraphRAG config UI fields: num_seen_min, community_level, doc_only, and advanced ingestion settings
- Add apiToken auth option to GraphDB config with conditional UI
- Add ConfigScopeToggle graphOnly prop for graph admin role restriction
- Fix SPA routing: serve -s in Dockerfile, catch-all route to login page
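The secret-masking behaviour described above might look something like the following sketch; the function names, the `MASK` sentinel, and the key names are all illustrative, not taken from the PR.

```python
# Sentinel returned instead of real secrets; value is an assumption.
MASK = "********"
SECRET_KEYS = ("api_key", "password", "secret")  # hypothetical key list

def mask_secrets(config):
    """For GET responses: replace secrets with a mask instead of
    stripping them, so the UI can still show that a value is set."""
    return {k: (MASK if k in SECRET_KEYS and v else v)
            for k, v in config.items()}

def merge_on_save(stored, incoming):
    """On save/test: substitute masked values with the stored originals,
    so real credentials never have to round-trip through the frontend."""
    merged = dict(incoming)
    for k in SECRET_KEYS:
        if merged.get(k) == MASK:
            merged[k] = stored.get(k)
    return merged
```

The key property is that the frontend can echo back exactly what it received from GET, and the backend transparently restores the real credential before using it.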
…ing, and token_limit cleanup
- Add RequireAuth wrapper to all routes except login page
- Fix SPA routing with serve -s and catch-all route to login
- Replace idle timer signalActivity with pause/resume for long-running operations (ingest, rebuild)
- Fix rebuild dialog button flickering between status labels; stop polling once rebuild completes and preserve final status message
- Remove incorrect token_limit-to-max_tokens pass-through in Google GenAI and Azure OpenAI services (token_limit is for input truncation)
- Update doc_only description and token_limit README docs
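The token_limit fix deserves a small illustration: token_limit truncates the *input* sent to the model, whereas max_tokens caps the model's *output* length, so forwarding one as the other changes behaviour. A minimal sketch, with a whitespace tokenizer standing in for the project's real tokenizer (function name hypothetical):

```python
def truncate_to_token_limit(text, token_limit, tokenize=str.split):
    """Enforce token_limit as input truncation.

    This is distinct from max_tokens, which limits how many tokens the
    model may *generate*; passing token_limit through as max_tokens
    (the bug removed in this commit) would cap responses instead.
    A whitespace split stands in for the real tokenizer.
    """
    tokens = tokenize(text)
    if len(tokens) <= token_limit:
        return text
    return " ".join(tokens[:token_limit])
```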
force-pushed from 4972093 to be6759a
…nd UI/agent fixes
- Migrate all LLM call sites (15+) to invoke_with_parser/ainvoke_with_parser for unified token usage tracking
- Add JSON parsing fallback (regex extraction) for LLMs that wrap output in preamble or code fences
- Add 3-tier JSON fallback to LLMEntityRelationshipExtractor
- Migrate community_summarizer from with_structured_output to ainvoke_with_parser
- Lift scoring logic from CommunityRetriever into BaseRetriever for all retrievers
- Validate Cypher/GSQL output before execution to avoid running invalid queries
- Detect greeting inputs early in agent router to skip unnecessary query generation
- Fix Login.tsx to show proper error messages instead of always "Invalid credentials"
- Skip typewriter animation for chat history messages
- Fix tiktoken warning for Bedrock model names
- Add unit tests for invoke_with_parser and JSON parsing fallback
- Update README config parameter descriptions for clarity and consistency
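The preamble/code-fence JSON fallback could be implemented along these lines; the function name and exact regexes are assumptions, and only the three-tier structure comes from the commit message.

```python
import json
import re

def parse_llm_json(raw):
    """Three-tier fallback for models that wrap JSON output.

    Tier 1: parse as-is (the happy path).
    Tier 2: strip markdown code fences like ```json ... ```.
    Tier 3: regex-extract the first {...} span from surrounding prose.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        try:
            return json.loads(fenced.group(1))
        except json.JSONDecodeError:
            pass
    braced = re.search(r"\{.*\}", raw, re.DOTALL)
    if braced:
        return json.loads(braced.group(0))
    raise ValueError("no JSON object found in LLM output")
```

A greedy `{.*}` in tier 3 deliberately spans the outermost braces, which handles nested objects but assumes the output contains at most one top-level JSON object.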
- Sanitize graph name to prevent path traversal
- Fix community summarizer to respect per-graph prompt overrides
- Clear stale ingestion status when uploading new files
- Reset file input after upload so files can be reselected
- Disable delete and upload buttons during file processing
- Style file picker as a visible button with selection count
- Validate Cypher/GSQL output before executing queries
- Detect greetings early in agent router to skip query generation
- Fix graph name validation error showing [object Object]
- Add client-side graph name format validation
- Update README config parameter descriptions

GML-2066, GML-2065, GML-2051, GML-2050, GML-2047, GML-2015, GML-2016, GML-2017, GML-2048
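A minimal sketch of the path-traversal sanitization, assuming graph names follow TigerGraph-style identifier rules (letters, digits, underscores); the pattern and function name are illustrative, not from the PR.

```python
import re

# Allow-list pattern is an assumption: identifier-style names only,
# which rules out '../'-style segments entirely.
_GRAPH_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def sanitize_graph_name(name):
    """Reject any graph name that could escape its directory when used
    to build per-graph config/prompt paths (e.g. '../../etc/passwd')."""
    if not _GRAPH_NAME_RE.match(name):
        raise ValueError(f"invalid graph name: {name!r}")
    return name
```

An allow-list is safer here than stripping dangerous characters, since it also serves as the format validation surfaced to the client.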
User description
Updated code for:
- Pages for configuration
- Role-based access to set up configurations
- Graph-level access to change configuration
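The role-based gate on the setup pages could be enforced with a small helper like this sketch; the role names, their ranking, and the function name are hypothetical, not taken from the PR.

```python
# Role hierarchy is an assumption for illustration only.
ROLE_RANK = {"viewer": 0, "graph_admin": 1, "superuser": 2}

def require_role(user_roles, minimum):
    """Gate a config endpoint: raise unless at least one of the
    caller's roles meets the required minimum rank."""
    if not any(ROLE_RANK.get(r, -1) >= ROLE_RANK[minimum]
               for r in user_roles):
        raise PermissionError(f"requires role {minimum!r} or higher")
    return True
```

Unknown roles rank below every known role, so a typo in a stored role denies access rather than granting it.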
PR Type
Enhancement, Bug fix
Description
- Add setup UI for configs and prompts
- Enforce role-based access across endpoints
- Support graph-specific LLM/prompt configurations
- Fix VertexAI imports and model naming
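Graph-specific configuration with inheritance from the global config can be sketched as a simple merge, with graph-level overrides winning; all names and sample values here are illustrative.

```python
# Hypothetical data shapes; the real project resolves these from files.
GLOBAL_CONFIG = {"completion_service": "openai", "chat_model": "gpt-4o"}
GRAPH_CONFIGS = {"SupplyChain": {"chat_model": "gemini-1.5-pro"}}

def resolve_graph_config(graph_name):
    """Effective config for a graph: start from the global config and
    overlay any graph-specific overrides, so unset keys inherit."""
    merged = dict(GLOBAL_CONFIG)
    merged.update(GRAPH_CONFIGS.get(graph_name, {}))
    return merged
```

Graphs without an override entry simply fall through to the global values, which matches the inheritance behaviour the commits describe for `chat_service`/`completion_service`.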
Diagram Walkthrough
File Walkthrough
16 files:
- Add config/prompts APIs with role-based access
- Graph-specific config resolution and reload utilities
- Use per-graph completion config for agents
- Reload configs at job start and consistency routes
- LLM provider selection via per-graph config
- Load community summary prompt from configured path
- Pass graph to LLM provider for summarization
- KG admin page for init, ingest, refresh
- UI to edit GraphRAG processing settings
- Show Setup link based on resolved roles
- Router for setup sections and redirects
- Ingestion UI for local and cloud sources
- UI to edit and test LLM services
- UI to edit and test DB connection
- UI to view and save prompt files
- Shared layout for setup navigation
2 files:
- Fix VertexAI embeddings import and parameters
- Switch to official VertexAI client and args
1 file:
- Add reverse proxy rules for /setup routes
1 file:
- Add langchain-google-vertexai dependency
6 files