GuardScan Chat is an interactive AI-powered feature that uses RAG (Retrieval-Augmented Generation) to answer questions about your codebase. It provides context-aware responses by retrieving relevant code snippets and documentation from your project.
RAG (Retrieval-Augmented Generation) is a technique that:
- Retrieves relevant code and documentation from your codebase based on your question
- Augments your question with this context
- Generates accurate, codebase-specific responses
This approach ensures the AI has access to your actual code, making responses more accurate and relevant than generic AI responses.
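The retrieve-augment-generate loop described above can be sketched in a few lines. This is a conceptual illustration only: the `similarity`, `retrieve`, `augment`, and `answer` helpers are hypothetical (GuardScan uses embedding search, not word overlap), but the shape of the pipeline is the same.

```python
def similarity(question: str, snippet: str) -> float:
    """Toy word-overlap score; real RAG systems use embedding similarity."""
    q, s = set(question.lower().split()), set(snippet.lower().split())
    return len(q & s) / (len(q | s) or 1)

def retrieve(question: str, index: list[str], top_k: int = 5) -> list[str]:
    """Step 1: return the top_k snippets most relevant to the question."""
    ranked = sorted(index, key=lambda snip: similarity(question, snip), reverse=True)
    return ranked[:top_k]

def augment(question: str, snippets: list[str]) -> str:
    """Step 2: prepend retrieved context so the model answers from real code."""
    context = "\n\n".join(snippets)
    return f"Context from the codebase:\n{context}\n\nQuestion: {question}"

def answer(question: str, index: list[str], llm) -> str:
    """Step 3: generate a response from the augmented prompt."""
    return llm(augment(question, retrieve(question, index)))
```

Because the prompt carries snippets pulled from your actual files, the model's answer is grounded in your code rather than in generic training data.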
- GuardScan initialized (`guardscan init`)
- AI provider configured (`guardscan config`)
- A codebase with at least some code files
```bash
guardscan chat
```
The first time you run chat, GuardScan will:
- Index your codebase (creates embeddings for code search)
- Create a new chat session
- Display the welcome message with available commands
- Wait for your first question
Subsequent runs will use the existing index (unless you use --rebuild).
While in the chat interface, you can use these commands:
Displays a help message with:
- Available commands
- Example questions you can ask
- Usage tips
Usage:
💬 You: /help
Clears the conversation history while keeping the session active. Useful when you want to start fresh without losing the session context.
Usage:
💬 You: /clear
✓ Conversation history cleared
Note: This only clears messages, not the session itself. The session ID and metadata remain.
Shows detailed statistics about the current chat session:
- Message count - Total messages in the conversation
- Total tokens - Cumulative tokens used across all messages
- Average tokens per message - Average token usage
- Duration - How long the session has been active
- Questions asked - Number of user questions
Usage:
💬 You: /stats
📊 Chat Statistics:
Messages: 5
Total Tokens: 2,450
Avg Tokens/Message: 490
Duration: 12m 34s
Questions Asked: 3
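The statistics above can all be derived from the session's message log. A minimal sketch (the message shape and field names are illustrative, not GuardScan's internals):

```python
def session_stats(messages: list[dict], duration_seconds: int) -> dict:
    """Aggregate simple stats from a list of {"role", "tokens"} messages."""
    total = sum(m["tokens"] for m in messages)
    return {
        "messages": len(messages),
        "total_tokens": total,
        # Average is total tokens divided by message count (2,450 / 5 = 490).
        "avg_tokens": round(total / len(messages)) if messages else 0,
        "duration": f"{duration_seconds // 60}m {duration_seconds % 60}s",
        # Questions asked counts only user-authored messages.
        "questions": sum(1 for m in messages if m["role"] == "user"),
    }
```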
Exports the entire conversation to a markdown file. This command:
- Displays the conversation in markdown format in the terminal
- Automatically saves to a file in the parent directory of your project
- Uses a descriptive filename with session ID and date
Usage:
💬 You: /export
[Markdown output displayed in terminal]
✓ Conversation exported to: /path/to/parent/directory/guardscan-chat-chat-abc123-2025-12-06.md
File Location:
- Saved to the parent directory (one level up from your project root)
- Filename format: `guardscan-chat-{sessionId}-{timestamp}.md`
- Example: `guardscan-chat-miuk2rhd-leflihb-2025-12-06.md`
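The filename convention can be reproduced with a short helper. A sketch (not GuardScan's actual code), assuming the timestamp is an ISO-formatted date:

```python
from datetime import date

def export_filename(session_id, on=None):
    """Build guardscan-chat-{sessionId}-{timestamp}.md with an ISO date stamp."""
    stamp = (on or date.today()).isoformat()  # e.g. "2025-12-06"
    return f"guardscan-chat-{session_id}-{stamp}.md"
```

Date-stamped names keep repeated exports of the same session from overwriting each other across days.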
Export Format: The exported markdown includes:
- Session metadata (repository name, creation date, total tokens)
- All messages with timestamps
- Relevant files referenced in each response
- Conversation structure with clear separators
Exits the chat session and returns to the command line.
Usage:
💬 You: /exit
You can customize the chat experience using command-line options:
Override the AI model for this chat session.
```bash
guardscan chat --model gpt-4o
guardscan chat --model claude-sonnet-4.5
guardscan chat --model gemini-2.5-flash
```
Available models depend on your configured AI provider:
- OpenAI: `gpt-5.1`, `gpt-4o`, `gpt-4.1-mini`, `gpt-3.5-turbo`
- Claude: `claude-opus-4.5`, `claude-sonnet-4.5`, `claude-haiku-4.5`
- Gemini: `gemini-3-pro`, `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`
Control the creativity/randomness of responses (0.0 to 1.0).
- Lower values (0.0-0.3): More focused, deterministic responses
- Medium values (0.4-0.7): Balanced (default: 0.7)
- Higher values (0.8-1.0): More creative, varied responses
```bash
guardscan chat --temperature 0.5   # More focused
guardscan chat --temperature 0.9   # More creative
```
Rebuild the embeddings index from scratch. Useful when:
- You've made significant code changes
- You want to ensure the index is up-to-date
- You're experiencing search quality issues
```bash
guardscan chat --rebuild
```
Note: Rebuilding can take time depending on your codebase size.
Override the embedding provider for this session. Options:
- `openai` - OpenAI embeddings (1536 dimensions)
- `gemini` - Google Gemini embeddings (768 dimensions)
- `ollama` - Local Ollama embeddings (768 dimensions)
- `claude` - Claude with Ollama/LM Studio fallback (768 dimensions)
- `lmstudio` - LM Studio embeddings (768 dimensions)

```bash
guardscan chat --embedding-provider openai
```
Note: The embedding provider should match your AI provider's capabilities or use a compatible fallback.
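One reason the provider must stay consistent: vectors from different providers have different dimensions (1536 vs. 768 above) and cannot be compared, which is why switching providers generally calls for `--rebuild`. A minimal cosine-similarity sketch, the standard way embedding search scores matches, makes the mismatch concrete:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score two embedding vectors; only defined for equal dimensions."""
    if len(a) != len(b):
        # A 1536-dim OpenAI vector cannot be scored against a 768-dim one.
        raise ValueError("embedding dimensions differ; rebuild the index")
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```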
Load an existing chat session from a file.
```bash
guardscan chat --session /path/to/session.json
```
Export the conversation to a specific file path (an alternative to the /export command).
```bash
guardscan chat --export /path/to/my-conversation.md
```
💬 You: How is authentication implemented in this project?
🤖 Assistant: [Provides detailed explanation with relevant code snippets]
💬 You: Are there any security vulnerabilities in the authentication code?
🤖 Assistant: [Analyzes code and identifies potential issues]
💬 You: Explain the UserService.createUser function
🤖 Assistant: [Breaks down the function with context from related files]
💬 You: Show me all functions that handle database queries
🤖 Assistant: [Lists relevant functions with file locations]
💬 You: What are the main components of this application?
🤖 Assistant: [Provides architectural overview with component relationships]
The chat interface provides rich, formatted output:
- File paths - Displayed in cyan (e.g., `src/utils/auth.ts`)
- Code snippets - Displayed in yellow (e.g., `function authenticate()`)
- Bold text - Displayed in cyan for emphasis
- Headings - Formatted with appropriate colors and styling
- Code blocks - Displayed in bordered boxes with language labels
- Lists - Properly formatted with bullets and numbering
- Quotes - Styled with visual indicators
Each response includes:
- Relevant files - Files referenced in the response
- Token usage - Tokens used for the response
- Response time - How long the AI took to respond
- Model used - Which AI model generated the response
When processing your question, you'll see:
🤔 Thinking...
This indicates the AI is:
- Searching your codebase for relevant context
- Building the RAG context
- Generating the response
Good:
- "How does the authentication middleware validate JWT tokens?"
- "What security measures are in place for user passwords?"
Less effective:
- "Tell me about the code"
- "What does this do?"
Build on previous responses:
💬 You: How does authentication work?
🤖 Assistant: [Explains authentication]
💬 You: What about password hashing?
🤖 Assistant: [Explains password hashing in context]
Use /export to save valuable conversations for:
- Documentation
- Team sharing
- Future reference
- Learning notes
If you've made significant changes:
```bash
guardscan chat --rebuild
```
This ensures the AI has access to your latest code.
- Code explanations: Lower temperature (0.3-0.5) for accuracy
- Creative suggestions: Higher temperature (0.7-0.9) for variety
- Security analysis: Lower temperature (0.2-0.4) for precision
Issue: Chat hangs or doesn't respond
Solutions:
- Check your AI provider connection: `guardscan status`
- Verify your API key is valid: `guardscan config`
- Check your internet connection (if using cloud AI)
- Try restarting the chat session
Issue: Responses are generic or not codebase-specific
Solutions:
- Rebuild the index: `guardscan chat --rebuild`
- Ask more specific questions
- Check that your codebase is properly indexed
- Verify embedding provider is working
Issue: /export command fails or file not created
Solutions:
- Check write permissions in parent directory
- Verify the session exists: `/stats` should show session info
- Try using the `--export` flag instead
- Check disk space
Issue: Chat is slow to respond
Solutions:
- Use a faster AI model (e.g., `gpt-4.1-mini` instead of `gpt-4o`)
- Reduce codebase size (exclude large files in config)
- Use local AI (Ollama) for faster responses
- Rebuild index to optimize search
Issue: Errors related to embedding provider
Solutions:
- Check embedding provider is running (for Ollama/LM Studio)
- Verify API key for cloud providers
- Try a different embedding provider: `--embedding-provider`
- Rebuild embeddings: `--rebuild`
You can combine multiple options:
```bash
guardscan chat --model gpt-4o --temperature 0.5 --rebuild
```
Sessions are automatically managed, but you can:
- Use `/stats` to monitor session health
- Use `/clear` to reset the conversation without losing the session
- Export sessions for backup
Chat works alongside other GuardScan features:
- Use `guardscan explain` for quick explanations
- Use `guardscan chat` for interactive exploration
- Export chat conversations to document findings
- Your codebase - Indexed locally for search
- Conversation history - Stored locally in session
- AI provider - Sends context and questions (using your API key)
- Your API keys - Never sent to GuardScan servers
- Source code - Never uploaded to GuardScan (only to your AI provider)
- Personal data - Only code-related context is used
Exported conversations contain:
- Your questions
- AI responses
- Relevant file paths
- Session metadata
Note: Review exported files before sharing, as they may contain code snippets.
- Read the Getting Started Guide for basic usage
- Check Configuration Guide for AI provider setup
- Explore API Documentation for programmatic access
- Review Security Scanners for security features
- Documentation: See other guides in `docs/`
- Issues: GitHub Issues
- Discussions: GitHub Discussions