Welcome Claude! This document contains everything you need to know to work effectively on the Omnara project.
Omnara is a platform that allows users to communicate with their AI agents (like you!) from anywhere. It uses the Model Context Protocol (MCP) to enable real-time communication between agents and users through a web dashboard.
- Purpose: Let users see what their AI agents are doing and communicate with them in real-time
- Key Innovation: Agents can ask questions and receive feedback while working
- Architecture: Read operations (backend) and write operations (servers) are separated for performance
- Open Source: This is a community project - code quality and clarity matter!
```
omnara/
├── src/                   # All source code
│   ├── omnara/            # Main Python package (CLI & SDK)
│   ├── backend/           # FastAPI - Web dashboard API (read operations)
│   ├── servers/           # Agent communication servers
│   │   ├── mcp/           # MCP protocol server
│   │   ├── api/           # REST API server
│   │   └── shared/        # Shared server code
│   ├── shared/            # Shared database models and infrastructure
│   ├── mcp-installer/     # NPX tool for MCP client configuration
│   └── integrations/      # Integration connectors (flat structure)
│       ├── cli_wrappers/  # Agent CLI wrappers
│       ├── headless/      # Background agents
│       ├── n8n/           # n8n workflow package
│       ├── github/        # GitHub Actions
│       ├── utils/         # Utilities
│       └── webhooks/      # Webhook handlers
├── apps/                  # User-facing applications
│   ├── web/               # Next.js web dashboard
│   └── mobile/            # React Native mobile app
├── infrastructure/        # DevOps & deployment
│   ├── docker/            # Dockerfiles
│   └── scripts/           # Build & utility scripts
├── tests/                 # Test suites
└── docs/                  # Documentation
```
- Two separate JWT systems:
- Backend: Supabase JWTs for web users
- Servers: Custom JWTs signed with smaller RSA keys (keeps agent API keys short)
- API keys are hashed (SHA256) before storage - never store raw tokens
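The hashing rule above can be sketched with Python's standard library. This is a minimal illustration; the helper names and key format are assumptions, not the project's actual code:

```python
import hashlib
import secrets


def generate_api_key() -> str:
    """Generate a random API key to hand to the agent (shown once, never stored)."""
    return secrets.token_urlsafe(32)


def hash_api_key(raw_key: str) -> str:
    """SHA-256 the raw key; only this digest is ever persisted."""
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()


def verify_api_key(presented_key: str, stored_hash: str) -> bool:
    """Hash the presented key and compare against the stored digest."""
    return secrets.compare_digest(hash_api_key(presented_key), stored_hash)
```

Because SHA-256 is deterministic, lookups can be done directly by digest; the raw key exists only in the response that created it.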
- PostgreSQL with SQLAlchemy 2.0+
- Alembic for migrations - ALWAYS create migrations for schema changes
- Multi-tenant design - all data is scoped by user_id
- Key tables: `users`, `user_agents`, `agent_instances`, `messages`, `api_keys`
- Unified messaging system: All agent interactions (steps, questions, feedback) are now stored in the `messages` table with `sender_type` and `requires_user_input` fields
- Unified server (`src/servers/app.py`) supports both MCP and REST:
  - MCP endpoint: `/mcp/`
  - REST endpoints: `/api/v1/*`
  - Both use the same authentication and business logic
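Since both surfaces share authentication, a REST client can be sketched as building the same kind of bearer-token request an MCP client would send. The host, endpoint path, and header shape below are illustrative assumptions, not documented API details:

```python
import urllib.request

API_BASE = "https://example.invalid"  # placeholder host, not the real deployment


def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Build an authenticated request; /mcp/ and /api/v1/* use the same scheme."""
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {api_key}"},
    )
```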
- Always activate the virtual environment first:

  ```bash
  source .venv/bin/activate   # macOS/Linux
  .venv\Scripts\activate      # Windows
  ```
- Install pre-commit hooks (one-time): `make pre-commit-install`
- Check current branch: Ensure you're on the right branch
- Update dependencies: Run `pip install -r requirements.txt` if needed
- Check migrations: Run `alembic current` in the `shared/` directory
- Modify models in `src/shared/models/`
- Generate migration:

  ```bash
  cd src/shared/
  alembic revision --autogenerate -m "Descriptive message"
  ```

- Review the generated migration file
- Test migration: `alembic upgrade head`
- Include the migration file in your commit
- Follow existing patterns - check similar files first
- Use type hints - We use Python 3.12 with full type annotations
- Import style: Prefer absolute imports from project root
- Run the tests:

  ```bash
  make test             # Run all tests
  make test-integration # Integration tests (needs Docker)
  ```

- Run linting and formatting:

  ```bash
  make lint    # Check for issues
  make format  # Auto-fix formatting
  ```
- Verify your changes work:
  - Test the specific functionality you changed
  - Run relevant test suites
  - Check that migrations apply cleanly
- Update documentation if you changed functionality
The unified messaging system uses a single messages table:
- Agent messages: Set `sender_type=AGENT`; use `requires_user_input=True` for questions
- User messages: Set `sender_type=USER` for feedback/responses
- Reading messages: Use `last_read_message_id` to track reading progress
- Queued messages: Agent receives unread user messages when sending new messages
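The flow above can be sketched with an in-memory SQLite table. The column names follow the fields mentioned here; everything else, including the helper functions and the `instance_id` column, is an illustrative assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        instance_id TEXT NOT NULL,
        sender_type TEXT NOT NULL,           -- 'AGENT' or 'USER'
        content TEXT NOT NULL,
        requires_user_input INTEGER NOT NULL DEFAULT 0
    )
""")


def send_agent_message(instance_id, content, requires_user_input=False):
    """Agent message; set requires_user_input=True when asking a question."""
    cur = conn.execute(
        "INSERT INTO messages (instance_id, sender_type, content, requires_user_input) "
        "VALUES (?, 'AGENT', ?, ?)",
        (instance_id, content, int(requires_user_input)),
    )
    return cur.lastrowid


def send_user_message(instance_id, content):
    """User feedback or a response to an agent question."""
    cur = conn.execute(
        "INSERT INTO messages (instance_id, sender_type, content) VALUES (?, 'USER', ?)",
        (instance_id, content),
    )
    return cur.lastrowid


def unread_user_messages(instance_id, last_read_message_id):
    """User messages queued since the agent's last read position."""
    rows = conn.execute(
        "SELECT id, content FROM messages "
        "WHERE instance_id = ? AND sender_type = 'USER' AND id > ? ORDER BY id",
        (instance_id, last_read_message_id),
    )
    return rows.fetchall()
```

The key idea is that "unread" is just `id > last_read_message_id`, so the agent picks up queued user messages the next time it writes.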
- Add route in `src/backend/api/` or `src/servers/api/routers.py`
- Create Pydantic models for request/response in `models.py`
- Add database queries in appropriate query files
- Write tests for the endpoint
- Add tool definition in `src/servers/mcp/tools.py`
- Register tool in `src/servers/mcp/server.py`
- Share logic with the REST endpoint if applicable
- Update agent documentation
- Change models in `src/shared/models/`
- Generate and review the migration
- Update any affected queries
- Update Pydantic models if needed
- Test thoroughly with existing data
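The last step above — testing with existing data — can be illustrated with a plain SQLite session. Alembic generates the real migrations, so this is only a sketch of the idea, with hypothetical column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Pre-migration state with existing rows.
conn.execute("CREATE TABLE agent_instances (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO agent_instances (name) VALUES ('claude-code')")

# A schema change of the kind Alembic would emit: new column with a safe default,
# so rows that predate the migration remain valid.
conn.execute("ALTER TABLE agent_instances ADD COLUMN status TEXT NOT NULL DEFAULT 'active'")

# Existing rows must still be readable and pick up the default.
rows = conn.execute("SELECT name, status FROM agent_instances").fetchall()
```

Migrations that add `NOT NULL` columns without a default are the classic way to break existing data, which is why reviewing the generated file matters.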
- `src/shared/config.py` - Central configuration using Pydantic settings
- `src/shared/models/base.py` - SQLAlchemy base configuration
- `src/servers/app.py` - Unified server entry point
- `src/backend/auth/` - Authentication logic for web users
- `src/servers/api/auth.py` - Agent authentication
Key variables you might need:
- `DATABASE_URL` - PostgreSQL connection
- `JWT_PUBLIC_KEY` / `JWT_PRIVATE_KEY` - For agent auth
- `SUPABASE_URL` / `SUPABASE_ANON_KEY` - For web auth
- `ENVIRONMENT` - Set to "development" for auto-reload
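A minimal sketch of reading these variables follows. The real code uses Pydantic settings in `src/shared/config.py`; this stdlib version only illustrates the variables and the development toggle, and the function names are assumptions:

```python
import os


def load_settings() -> dict:
    """Read the key environment variables; None means 'not configured'."""
    return {
        "database_url": os.environ.get("DATABASE_URL"),
        "jwt_public_key": os.environ.get("JWT_PUBLIC_KEY"),
        "jwt_private_key": os.environ.get("JWT_PRIVATE_KEY"),
        "supabase_url": os.environ.get("SUPABASE_URL"),
        "supabase_anon_key": os.environ.get("SUPABASE_ANON_KEY"),
        "environment": os.environ.get("ENVIRONMENT", "production"),
    }


def auto_reload_enabled(settings: dict) -> bool:
    """Auto-reload only when ENVIRONMENT is explicitly 'development'."""
    return settings["environment"] == "development"
```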
- Don't commit without migrations - Pre-commit hooks will catch this
- Don't store raw JWT tokens - Always hash API keys
- Don't mix authentication systems - Backend uses Supabase, Servers use custom JWT
- Don't forget user scoping - All queries must filter by user_id
- Don't skip type hints - Pyright will complain
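The user-scoping rule above can be sketched with SQLite; the table and column names other than `user_id` are illustrative, not the project's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agent_instances (id INTEGER PRIMARY KEY, user_id TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO agent_instances (user_id, name) VALUES (?, ?)",
    [("user-a", "claude-code"), ("user-b", "other-agent")],
)


def list_instances(user_id: str):
    """Every query filters by user_id; never return another tenant's rows."""
    return conn.execute(
        "SELECT name FROM agent_instances WHERE user_id = ?", (user_id,)
    ).fetchall()
```

A query without the `WHERE user_id = ?` clause would leak data across tenants, which is why the scoping rule applies to every query, not just the obvious ones.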
- Database issues: Check migrations are up to date
- Auth failures: Verify JWT keys are properly formatted (with newlines)
- Import errors: Ensure you're using absolute imports
- Type errors: Run `make typecheck` to catch issues early
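For the auth-failure item above: PEM keys stored in a single-line env var often contain literal `\n` sequences that must be converted back into real newlines before use. The helper name and the sample key below are assumptions for illustration:

```python
def normalize_pem(raw: str) -> str:
    """Convert escaped '\\n' sequences from an env var into real newlines."""
    return raw.replace("\\n", "\n")


# Hypothetical single-line value as it might appear in a .env file
# ("MIIB..." is a truncated placeholder, not a real key).
escaped = "-----BEGIN PUBLIC KEY-----\\nMIIB...\\n-----END PUBLIC KEY-----"
pem = normalize_pem(escaped)
```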
- Check existing code for patterns
- Read test files for usage examples
- Error messages usually indicate what's wrong
- The codebase is well-structured - similar things are grouped together
As Claude Code, you're particularly good at:
- Understanding the full codebase quickly
- Maintaining consistency across files
- Catching potential security issues
- Writing comprehensive tests
- Suggesting architectural improvements
Remember: This is an open-source project that helps AI agents communicate with humans. Your work here directly improves the AI-human collaboration experience!