Thank you for your interest in contributing to ChatML! This guide will help you get set up and understand our development workflow.
## Prerequisites
- Node.js v20+
- Go 1.22+
- Rust (latest stable)
- Tauri CLI v2
- macOS 10.15+ (ChatML is currently macOS-only)
## Getting Started

1. **Clone the repository**

   ```bash
   git clone https://github.com/chatml/chatml.git
   cd chatml
   ```

2. **Copy environment config**

   ```bash
   cp .env.example .env
   ```

   Edit `.env` with your API keys and OAuth credentials (see OAuth Setup below).

3. **Start development**

   ```bash
   make dev
   ```

   This installs dependencies, builds the Go backend and agent-runner, and starts the Tauri dev server.
## Development Commands

| Command | Description |
|---|---|
| `make dev` | Start all services (backend + frontend + Tauri) |
| `make backend` | Build Go backend only |
| `make agent-runner` | Build agent-runner only |
| `make build` | Production build |
| `make build-debug` | Debug build (no code signing required) |
| `make test` | Run Go backend tests |
| `make clean` | Remove all build artifacts |
## Pre-PR Checks

Before submitting a PR, run:

```bash
# Go backend
make test                   # Run tests with race detection
cd backend && go vet ./...  # Static analysis

# Frontend
npm run lint   # ESLint
npm run build  # TypeScript type checking + build
```

## OAuth Setup

ChatML integrates with GitHub and Linear via OAuth. For development, you need your own OAuth apps.
### GitHub

1. Go to GitHub Developer Settings
2. Click "New OAuth App"
3. Set Authorization callback URL to `chatml://oauth/callback`
4. Copy the Client ID and Client Secret to your `.env` file
### Linear

1. Go to Linear API Applications
2. Create a new application
3. Set Callback URL to `chatml://oauth/callback`
4. Copy the Client ID to your `.env` file
GitHub and Linear OAuth are optional. ChatML works without them — you just won't have PR creation or Linear issue tracking features. The Anthropic API key is required for AI functionality.
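For reference, a hypothetical `.env` layout is sketched below. The variable names are illustrative assumptions, not the project's actual keys; `.env.example` is the authoritative source for the real names.

```bash
# Hypothetical .env sketch — check .env.example for the actual variable names.
ANTHROPIC_API_KEY=        # required for AI functionality
GITHUB_CLIENT_ID=         # optional: enables PR creation
GITHUB_CLIENT_SECRET=
LINEAR_CLIENT_ID=         # optional: enables Linear issue tracking
```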
## Auto-Updater

The auto-updater in `src-tauri/tauri.conf.json` is configured for the official ChatML distribution. If you're building a fork:

- Update the `endpoints` URL to point to your own releases
- Or set `"createUpdaterArtifacts": false` in the bundle config to disable it
- Or simply ignore updater errors during development (`make build-debug` handles this)
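As a rough sketch, the relevant fields in `src-tauri/tauri.conf.json` look like the fragment below, assuming the standard Tauri v2 updater plugin layout. The endpoint URL is a placeholder for your own releases feed; the surrounding keys in the real file may differ.

```json
{
  "bundle": {
    "createUpdaterArtifacts": false
  },
  "plugins": {
    "updater": {
      "endpoints": ["https://example.com/your-fork/releases/latest.json"],
      "pubkey": "YOUR_UPDATER_PUBLIC_KEY"
    }
  }
}
```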
## Architecture

ChatML is a polyglot application with four layers:

```
Tauri (Rust) → Next.js Frontend (React) → Go Backend → Agent Runner (Node.js)
```

See ARCHITECTURE.md for details.
## Adding AI Providers

ChatML currently supports Claude via the Anthropic Claude Agent SDK. The architecture is designed for community-contributed providers:

- **Go backend**: The `ai.Provider` interface in `backend/ai/provider.go` defines the contract for lightweight AI tasks (PR generation, summarization). Implement this interface for your provider.
- **Agent runner**: The agent runner communicates with the Go backend via a stdin/stdout JSON protocol documented in `docs/agent-runner-protocol.md`. To add a new provider, implement an agent runner that speaks this protocol.
- **Frontend**: Provider capabilities are exposed via `GET /api/provider/capabilities`. The UI conditionally shows features based on what the provider supports.
## Pull Request Workflow

1. Create a feature branch from `main`: `git checkout -b feature/description`
2. Make your changes
3. Run the full test/lint suite (see above)
4. Push and open a PR against `main`
5. Describe what you changed and why in the PR description
## Code Style

- **Go**: Standard `gofmt` formatting, no special linter config
- **TypeScript/React**: ESLint config in the repo (run `npm run lint`)
- **Rust**: Standard `rustfmt` formatting
- Keep changes focused — one logical change per PR