Named after Batman's sidekick, Robin AI is an open-source GitHub Action and GitLab CI job that automatically reviews pull requests and merge requests using AI models from OpenAI (GPT) or Anthropic (Claude). It analyzes your code changes and provides:
- A quality score (0-100)
- Actionable improvement suggestions
- Sample code snippets for better implementation
- Fast, automated feedback (average runtime: 14 seconds)

To use Robin AI, you need:

- A GitHub repository with pull request workflows or a GitLab project with merge request pipelines
- An API key for your chosen AI provider:
  - OpenAI: Get an API key here
  - Anthropic (Claude): Get an API key here
- In your GitHub repository, navigate to the "Actions" tab
- Click "New workflow"
- Choose "Set up a workflow yourself"
- Create a new file (e.g., `robin.yml`) with one of these configurations:
For OpenAI (GPT):

```yaml
name: Robin AI Reviewer
on:
  pull_request:
    branches: [main]
    types:
      - opened
      - reopened
      - synchronize
      - ready_for_review

permissions:
  contents: read
  issues: write
  pull-requests: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Robin AI Reviewer
        uses: Integral-Healthcare/robin-ai-reviewer@v[INSERT_LATEST_RELEASE]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AI_PROVIDER: openai
          AI_API_KEY: ${{ secrets.OPEN_AI_API_KEY }}
          AI_MODEL: gpt-5-mini
          files_to_ignore: 'README.md assets/* package-lock.json'
```

Note: `files_to_ignore` must be written as a single-line, whitespace-separated string (as shown above, e.g. `'README.md assets/* package-lock.json'`). Do not use YAML block scalars (`|` or `>`); embedded newlines break CLI argument parsing.
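As an illustration of why the value must stay on one line: the pattern list is split on whitespace before files are matched. The helper below is a hypothetical sketch (not the action's actual implementation), assuming shell-style glob semantics via Python's `fnmatch`:

```python
from fnmatch import fnmatch

def filter_ignored(changed_files, files_to_ignore):
    """Drop any changed file that matches one of the ignore patterns.

    `files_to_ignore` is the single-line, whitespace-separated string from
    the workflow input. Splitting on whitespace is why embedded newlines
    (from YAML block scalars) would corrupt the pattern list.
    """
    patterns = files_to_ignore.split()
    return [f for f in changed_files
            if not any(fnmatch(f, p) for p in patterns)]

changed = ["README.md", "assets/logo.png", "src/app.py", "package-lock.json"]
print(filter_ignored(changed, "README.md assets/* package-lock.json"))
# → ['src/app.py']
```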
For Claude (Anthropic):

```yaml
name: Robin AI Reviewer
on:
  pull_request:
    branches: [main]
    types:
      - opened
      - reopened
      - synchronize
      - ready_for_review

permissions:
  contents: read
  issues: write
  pull-requests: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Robin AI Reviewer
        uses: Integral-Healthcare/robin-ai-reviewer@v[INSERT_LATEST_RELEASE]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AI_PROVIDER: claude
          AI_API_KEY: ${{ secrets.CLAUDE_API_KEY }}
          AI_MODEL: claude-sonnet-4-5
          files_to_ignore: 'README.md assets/* package-lock.json'
```

Using the legacy inputs (deprecated; see the deprecated inputs table below):

```yaml
name: Robin AI Reviewer
on:
  pull_request:
    branches: [main]
    types:
      - opened
      - reopened
      - synchronize
      - ready_for_review

permissions:
  contents: read
  issues: write
  pull-requests: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Robin AI Reviewer
        uses: Integral-Healthcare/robin-ai-reviewer@v[INSERT_LATEST_RELEASE]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPEN_AI_API_KEY: ${{ secrets.OPEN_AI_API_KEY }}
          gpt_model_name: gpt-5-mini
          files_to_ignore: 'README.md assets/* package-lock.json'
```

- Add your API key as a repository secret:
  - Go to repository Settings → Secrets and Variables → Actions
  - For OpenAI: create a secret named `OPEN_AI_API_KEY`
  - For Claude: create a secret named `CLAUDE_API_KEY`
  - Paste your API key as the value
Robin AI runs in GitLab CI from the published Docker image. Add a project or personal access token with the `api` scope as a CI/CD variable named `GITLAB_TOKEN`, plus the API key for your selected AI provider.
```yaml
robin_ai_review:
  image:
    name: ghcr.io/integral-healthcare/robin-ai-reviewer:latest
    entrypoint: [""]
  stage: test
  variables:
    # Pass secrets via env so they don't show up in argv / `ps` / job logs.
    AI_API_KEY_INPUT: $OPEN_AI_API_KEY
  script:
    - >
      /entrypoint.sh
      --git_provider=gitlab
      --git_token="${GITLAB_TOKEN}"
      --ai_provider=openai
      --ai_model=gpt-5-mini
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```

For Claude, set `--ai_provider=claude`, pass your Claude API key to `--ai_api_key`, and set `--ai_model` to the Claude model you want to use.
| Name | Required | Default | Description |
|---|---|---|---|
| `GITHUB_TOKEN` | Yes | Auto-supplied | GitHub token for API access |
| `AI_PROVIDER` | No | `openai` | AI provider to use (`openai` or `claude`) |
| `AI_API_KEY` | Yes | N/A | API key for the selected AI provider |
| `AI_MODEL` | No | Provider-specific | AI model to use (see supported models below) |
| `ai_max_tokens` | No | `8192` | Maximum tokens the model may generate in its review |
| `max_diff_bytes` | No | `200000` | Soft cap on diff size in bytes; larger diffs are truncated before being sent to the model |
| `chunk_threshold_bytes` | No | `100000` | When the diff exceeds this many bytes, split per file and review each chunk individually. Set to `0` to disable. Ignored in `review_mode: review`. |
| `prompt_override` | No | (empty) | Inline replacement for the system prompt. Takes precedence over `prompt_file`. |
| `prompt_file` | No | (empty) | Path to a file in the workspace containing the system prompt to use instead of the bundled default. |
| `review_mode` | No | `comment` | `comment` for one summary PR comment; `review` for inline line-anchored comments via the GitHub Reviews API. |
| `github_api_url` | No | `https://api.github.com` | GitHub API URL (for enterprise) |
| `files_to_ignore` | No | (empty) | Single-line, whitespace-separated list of files to exclude from review (e.g. `'README.md assets/*'`). Do not use YAML block scalars (`\|` or `>`); embedded newlines break CLI argument parsing. |
| Name | Required | Default | Description |
|---|---|---|---|
| `git_provider` | No | `github` | Git provider to review (`github` or `gitlab`) |
| `git_token` | Yes for GitLab | N/A | GitLab token with the `api` scope |
| `ai_provider` | No | `openai` | AI provider to use (`openai` or `claude`) |
| `ai_api_key` | Yes | N/A | API key for the selected AI provider |
| `ai_model` | No | Provider-specific | AI model to use |
| `ai_max_tokens` | No | `8192` | Maximum tokens the model may generate in its review |
| `max_diff_bytes` | No | `200000` | Soft cap on diff size in bytes; larger diffs are truncated |
| `prompt_override` | No | (empty) | Inline replacement for the system prompt |
| `prompt_file` | No | (empty) | Path to a file containing a custom system prompt |
| `files_to_ignore` | No | (empty) | Single-line, whitespace-separated list of files to exclude from review (e.g. `'README.md assets/*'`). Do not use YAML block scalars (`\|` or `>`); embedded newlines break CLI argument parsing. |
| Name | Required | Default | Description |
|---|---|---|---|
| `OPEN_AI_API_KEY` | No | N/A | [DEPRECATED] Use `AI_API_KEY` instead. Will be removed in v2.0 (target 2026-Q3). |
| `gpt_model_name` | No | N/A | [DEPRECATED] Use `AI_MODEL` instead. Will be removed in v2.0 (target 2026-Q3). |
Robin AI passes the value of `AI_MODEL` straight through to the upstream provider, so any model your account has access to should work. Defaults are kept current with the latest GA release of each provider.

OpenAI (default: `gpt-5-mini`):

- `gpt-5`, `gpt-5-mini`, `gpt-5-nano`
- `gpt-4.1`, `gpt-4.1-mini`
- Reasoning models such as `o3` and `o4-mini` are also accepted.

Anthropic Claude (default: `claude-sonnet-4-5`):

- `claude-opus-4-5`
- `claude-sonnet-4-5`
- `claude-haiku-4-5`

If you need a specific model snapshot (e.g., `claude-sonnet-4-5-20250929`), pass the dated alias directly via `AI_MODEL`.
By default, when the combined diff exceeds `chunk_threshold_bytes` (100 KB), Robin splits the diff per file and reviews each file with its own AI call. The aggregated feedback is posted as a single comment with one section per file. This avoids the "diff truncated" path on big PRs and keeps reviews actionable on a per-file basis.

Set `chunk_threshold_bytes: 0` to disable chunking entirely, or raise the threshold for models with larger context windows. Chunked review is currently bypassed when `review_mode: review` is set; that combination is a future enhancement.
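The chunking decision described above can be sketched roughly as follows. This is an illustrative helper, not the action's actual code, and it assumes unified-diff input where each file's changes start with a `diff --git` header:

```python
def split_diff_per_file(diff: str, chunk_threshold: int = 100_000):
    """Return the diff as one chunk if it is small enough (or chunking is
    disabled via threshold 0); otherwise split it into one chunk per file."""
    if chunk_threshold == 0 or len(diff.encode()) <= chunk_threshold:
        return [diff]  # small enough: review in a single AI call
    chunks, current = [], []
    for line in diff.splitlines(keepends=True):
        # A new "diff --git" header marks the start of the next file's diff.
        if line.startswith("diff --git") and current:
            chunks.append("".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each returned chunk would then be reviewed in its own call, and the per-file feedback aggregated into a single comment.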
Set `review_mode: review` to have Robin post line-anchored comments via the GitHub Reviews API instead of one summary PR comment. In this mode the prompt asks the model for structured JSON, the response is validated against the diff, and any hallucinated line references are dropped before the review is posted. If the model returns invalid JSON, Robin falls back to the standard summary comment.

```yaml
- uses: Integral-Healthcare/robin-ai-reviewer@v[INSERT_LATEST_RELEASE]
  with:
    AI_API_KEY: ${{ secrets.OPEN_AI_API_KEY }}
    review_mode: review
```

Inline review currently requires GitHub. On GitLab the value is silently downgraded to `comment`.
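For illustration, the validation step can be sketched like this. The helper and data shapes are hypothetical (Robin's internal representation is not documented here); the point is that unparseable JSON triggers the summary-comment fallback, and comments anchored to lines not present in the diff are dropped:

```python
import json

def validate_review_comments(raw_model_output: str, commentable_lines: dict):
    """Parse the model's JSON review and keep only comments whose
    (path, line) pair actually appears in the diff.

    Returns None on invalid JSON so the caller can fall back to
    posting a standard summary comment instead.
    """
    try:
        review = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return None  # invalid JSON: fall back to the summary comment
    return [
        c for c in review.get("comments", [])
        # Drop hallucinated references: line must exist in that file's diff.
        if c.get("line") in commentable_lines.get(c.get("path"), set())
    ]
```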
You can replace the bundled review prompt without forking the action:

- `prompt_override`: pass a prompt string directly in your workflow. Useful for short, repo-specific guidance.
- `prompt_file`: point at a file inside your repo (e.g., `.github/robin-prompt.md`). The file's contents become the system prompt.

`prompt_override` takes precedence when both are set. If neither is set, Robin uses its built-in prompt.
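The precedence rule amounts to a simple three-way fallback. A minimal sketch (illustrative names, not the action's actual code):

```python
from pathlib import Path

def resolve_system_prompt(prompt_override: str,
                          prompt_file: str,
                          default_prompt: str) -> str:
    """Pick the system prompt: inline override > file contents > default."""
    if prompt_override:
        return prompt_override
    if prompt_file:
        return Path(prompt_file).read_text()
    return default_prompt
```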
```yaml
- uses: Integral-Healthcare/robin-ai-reviewer@v[INSERT_LATEST_RELEASE]
  with:
    AI_API_KEY: ${{ secrets.OPEN_AI_API_KEY }}
    prompt_file: .github/robin-prompt.md
```

When Robin AI runs, it posts a comment on the pull request or merge request with a score out of 100, suggested improvements, and sample code. You can use this feedback to improve the quality of your code and make your pull requests more likely to be accepted.
When Robin AI reviews your pull request, you'll see a comment like this:

```
Score: 85/100

Improvements:
- Consider adding input validation for the user parameters
- The error handling could be more specific
- Variable naming could be more descriptive
```

```python
# Before
def process(x):
    return x * 2

# After
def process_user_input(value: int) -> int:
    if not isinstance(value, int):
        raise ValueError("Input must be an integer")
    return value * 2
```

- Docker Image Size: 15.6 MB
- Average Runtime: 14 seconds
- Memory Usage: Minimal (<100 MB)
See Robin AI in action: View Demo
We welcome contributions! Here's how you can help:
- Submit bug reports or feature requests through Issues
- Submit pull requests for bug fixes or new features
- Improve documentation
- Share feedback on Twitter
Robin AI is MIT licensed. See LICENSE for details.
