Conversation

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Updates to devcontainer tooling and CI, finer-grained Bash CLI permissions, a new workflow validating tagged Copier template versions, a Copier template bump, dynamic safe.directory handling in setup scripts, and documentation expansions for agent/testing/tooling guidance.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant PR as "Pull Request"
    participant CI as "GitHub CI (ci.yaml)"
    participant Checker as "check-skip-duplicate job"
    participant Confirm as "confirm-on-tagged-copier-template"
    participant Jobs as "lint/test jobs"
    participant Summary as "workflow-summary"
    PR->>CI: trigger (push / pull_request / merge_group)
    CI->>Checker: run check-skip-duplicate
    Checker-->>CI: outputs.should-run (true/false)
    alt should-run == true
        CI->>Confirm: invoke confirm-on-tagged-copier-template (workflow_call)
        Confirm-->>CI: success | failure
        CI->>Jobs: run lint & test (with skip-install flags)
        Jobs-->>CI: job results
    else should-run == false
        CI-->>Jobs: skip lint & test
    end
    CI->>Summary: aggregate results (includes check & confirm)
    Summary->>PR: post commit status (context="required-check", state=success)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related PRs

🚥 Pre-merge checks | ❌ 3 failed checks (1 warning, 2 inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Comment
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/settings/permissions/bash.jsonc:
- Around line 96-105: The deny list for PR mutation commands is missing the
reopen command; add an entry like "Bash(gh pr reopen *)" to the permissions
array alongside the existing "Bash(gh pr create *)", "Bash(gh pr edit *)",
"Bash(gh pr merge *)", etc., so Claude cannot reopen closed PRs; update the
.claude/settings/permissions/bash.jsonc list to include the new "Bash(gh pr
reopen *)" deny pattern.
In @.devcontainer/devcontainer.json:
- Around line 41-56: Replace the hardcoded path
"/workspaces/ephemeral-pulumi-deploy/ruff-test.toml" referenced next to the TODO
comment by making it configurable and portable: either change the
devcontainer.json entry to use a workspace-relative path (e.g., reference the
file name only or use ${workspaceFolder}) or read the path from an environment
variable and document that in the TODO; update any references where that exact
string appears so the devcontainer no longer assumes
/workspaces/ephemeral-pulumi-deploy.
In @.devcontainer/manual-setup-deps.py:
- Around line 136-146: The devcontainer build step calls the missing script
".devcontainer/install-pulumi-cli.sh" (invoked from the block that checks
generate_lock_file_only, args.skip_installing_pulumi_cli, platform.system() ==
"Linux", and '"pulumi"' in env.lock_file.read_text()) which will raise when
absent; fix by either adding the install-pulumi-cli.sh script to the repository
(ensuring it is executable and safe to run from REPO_ROOT_DIR) or guard the
subprocess.run call to skip invocation when the script file does not exist
(check for (REPO_ROOT_DIR / ".devcontainer" / "install-pulumi-cli.sh").exists()
before calling subprocess.run), keeping the existing argument usage of
str(env.lock_file) and retaining check=True behavior when the script is present.
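The existence guard this prompt describes can be illustrated in shell (a hypothetical analogue of the suggested Python `exists()` check — the real fix belongs in `manual-setup-deps.py`; the temp-dir path here is purely illustrative):

```shell
# Hypothetical analogue of the suggested guard: only invoke the installer
# script when it actually exists on disk, instead of failing on a missing file.
workdir="$(mktemp -d)"
installer="$workdir/.devcontainer/install-pulumi-cli.sh"
if [ -f "$installer" ]; then
    sh "$installer"
else
    echo "skipping: installer not found"
fi
```

With a fresh temp directory this prints the skip message, mirroring the "guard the subprocess.run call" option rather than the "add the script" option.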
In @.devcontainer/on-create-command.sh:
- Around line 6-8: The devcontainer uses dynamic path resolution via the script
variables script_dir and repo_root but docker-compose.yml and devcontainer.json
still contain hardcoded /workspaces/ephemeral-pulumi-deploy paths; update those
references so they use an environment variable (e.g., export REPO_ROOT or
BEADS_DIR from the on-create script or an .env file) and then substitute that
variable into docker-compose volume mounts and the BEADS_DIR env var and into
devcontainer.json's ruff.configuration path (search for BEADS_DIR,
ruff.configuration, and the hardcoded /workspaces/ephemeral-pulumi-deploy
strings) so the container config becomes portable across workspace names.
In @.github/workflows/ci.yaml:
- Around line 142-151: The step "Mark the required-check as succeeded so the PR
can be merged" currently posts only to github.event.pull_request.statuses_url
and fails for merge_group runs; split this into two steps conditioned by event
type: one step with if: github.event_name == 'pull_request' that posts to ${{
github.event.pull_request.statuses_url }}, and a second step with if:
github.event_name == 'merge_group' that POSTs to /repos/${{ github.repository
}}/statuses/${{ github.sha }} (keeping the same GH_TOKEN env and payload fields
state/context/description/target_url) so merge-queue runs report status to the
merge_group head SHA.
In @.github/workflows/confirm-on-tagged-copier-template.yaml:
- Around line 29-33: The current COMMIT_LINE check is brittle; instead extract
the value after the ":" from COMMIT_LINE, strip any inline comments (text after
"#") and surrounding whitespace, ensure the resulting commit_value is non-empty,
then validate it against the regex ^v[0-9]+\.[0-9]+\.[0-9]+$; replace the simple
hyphen check with this logic in the block that sets COMMIT_LINE (reading from
ANSWERS_FILE) and on failure echo a clear error message and exit 1.
In `@AGENTS.md`:
- Around line 59-60: Update the two bullets that instruct using a separate cd
call (the "When running terminal commands..." and the "Never use `pnpm --prefix
<path>`..." bullets) to also recommend preferring absolute paths as an
alternative to issuing `cd` (so you can avoid an extra tool call entirely); add
a concise sentence stating that using absolute/full paths to target files or
binaries is often simpler and preserves the permission allow-list matcher, and
give a short note that this is preferred when feasible over separate `cd` calls.
- Line 14: Replace the phrase "very clear single-argument function" in the
guideline sentence that reads "Prefer keyword-only parameters (unless a very
clear single-argument function): use `*` in Python signatures and destructured
options objects in TypeScript." with a less colloquial alternative such as "an
unambiguously single-argument function" or "an obviously single-argument
function" so the wording is more precise and avoids the intensifier "very
clear."
- Around line 1-63: Add a new "Autonomous Agents" section to AGENTS.md that
enumerates every agent (e.g., CodeRabbit, Dependabot, custom CI bots), gives a
one-line responsibility for each (what they own/produce), lists their triggers
and outputs, and documents interaction patterns and handoff rules (e.g., PR
Workflow: Dependabot → CI → CodeRabbit → human approve → merge) so readers can
see coordination and conflict-avoidance; place the section near the top-level
guidelines and use headings like "Active Agents", "Responsibility", "Triggers",
"Outputs", and "Interactions" to match the suggested structure.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1840ea7a-fce4-40f4-be47-06bc27758670
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
- .claude/settings/permissions/bash.jsonc
- .coderabbit.yaml
- .copier-answers.yml
- .devcontainer/Dockerfile
- .devcontainer/devcontainer.json
- .devcontainer/install-ci-tooling.py
- .devcontainer/manual-setup-deps.py
- .devcontainer/on-create-command.sh
- .devcontainer/post-start-command.sh
- .github/actions/install_deps/action.yml
- .github/workflows/ci.yaml
- .github/workflows/confirm-on-tagged-copier-template.yaml
- .github/workflows/pre-commit.yaml
- AGENTS.md
- pyproject.toml
```jsonc
// Github
// Claude should not ever interfere with the PR process, that is how we gate AI's work
"Bash(gh pr create *)",
"Bash(gh pr edit *)",
"Bash(gh pr ready *)",
"Bash(gh pr review *)",
"Bash(gh pr merge *)",
"Bash(gh pr close *)",
"Bash(gh pr comment *)",
"Bash(gh pr update-branch *)",
```
🧹 Nitpick | 🔵 Trivial
Good deny list for PR mutations, consider adding gh pr reopen.
Comprehensive blocking of PR workflow commands. Consider also adding gh pr reopen * to the deny list to prevent reopening closed PRs.
Proposed addition
"Bash(gh pr comment *)",
"Bash(gh pr update-branch *)",
+ "Bash(gh pr reopen *)",🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.claude/settings/permissions/bash.jsonc around lines 96 - 105, The deny list
for PR mutation commands is missing the reopen command; add an entry like
"Bash(gh pr reopen *)" to the permissions array alongside the existing "Bash(gh
pr create *)", "Bash(gh pr edit *)", "Bash(gh pr merge *)", etc., so Claude
cannot reopen closed PRs; update the .claude/settings/permissions/bash.jsonc
list to include the new "Bash(gh pr reopen *)" deny pattern.
| "github.copilot-chat@0.42.2026032602", | ||
| "anthropic.claude-code@2.1.84", | ||
|
|
||
| // Python | ||
| "ms-python.python@2026.2.2026021801", | ||
| "ms-python.vscode-pylance@2026.1.1", | ||
| "ms-python.python@2026.5.2026032701", | ||
| "ms-python.vscode-pylance@2026.1.102", | ||
| "ms-vscode-remote.remote-containers@0.414.0", | ||
| "charliermarsh.ruff@2026.36.0", | ||
| "charliermarsh.ruff@2026.38.0", | ||
|
|
||
| // Misc file formats | ||
| "bierner.markdown-mermaid@1.29.0", | ||
| "samuelcolvin.jinjahtml@0.20.0", | ||
| "tamasfe.even-better-toml@0.19.2", | ||
| "emilast.LogFileHighlighter@3.3.3", | ||
| "esbenp.prettier-vscode@12.3.0" | ||
| "esbenp.prettier-vscode@12.4.0" | ||
| ], |
🧹 Nitpick | 🔵 Trivial
Extension version updates look good.
Note: Line 69 (unchanged) still contains a hardcoded path /workspaces/ephemeral-pulumi-deploy/ruff-test.toml. If the PR goal is to remove hardcoded paths for portability, this should also be addressed. The TODO comment on line 68 acknowledges this limitation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/devcontainer.json around lines 41 - 56, Replace the hardcoded
path "/workspaces/ephemeral-pulumi-deploy/ruff-test.toml" referenced next to the
TODO comment by making it configurable and portable: either change the
devcontainer.json entry to use a workspace-relative path (e.g., reference the
file name only or use ${workspaceFolder}) or read the path from an environment
variable and document that in the TODO; update any references where that exact
string appears so the devcontainer no longer assumes
/workspaces/ephemeral-pulumi-deploy.
```shell
script_dir="$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)"
repo_root="$(CDPATH= cd -- "$script_dir/.." && pwd)"
git config --global --add safe.directory "$repo_root"
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check for remaining hardcoded workspace paths in devcontainer config
rg -n "ephemeral-pulumi-deploy" .devcontainer/
```

Repository: LabAutomationAndScreening/ephemeral-pulumi-deploy
Length of output: 692
Good refactor to dynamic path resolution.
The CDPATH= cd pattern is intentional (clears CDPATH to ensure predictable cd behavior) — the Shellcheck SC1007 warning is a false positive for this idiom.
However, the dynamic path resolution is incomplete. The devcontainer config still contains hardcoded /workspaces/ephemeral-pulumi-deploy paths in:
- docker-compose.yml: volume mounts (lines 11, 13) and `BEADS_DIR` env var (line 21)
- devcontainer.json: `ruff.configuration` path (line 69)
For full portability, consider parameterizing these references using environment variables or shell variable substitution.
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 6-6: Remove space after = if trying to assign a value (for empty string, use var='' ... ).
(SC1007)
[warning] 7-7: Remove space after = if trying to assign a value (for empty string, use var='' ... ).
(SC1007)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.devcontainer/on-create-command.sh around lines 6 - 8, The devcontainer uses
dynamic path resolution via the script variables script_dir and repo_root but
docker-compose.yml and devcontainer.json still contain hardcoded
/workspaces/ephemeral-pulumi-deploy paths; update those references so they use
an environment variable (e.g., export REPO_ROOT or BEADS_DIR from the on-create
script or an .env file) and then substitute that variable into docker-compose
volume mounts and the BEADS_DIR env var and into devcontainer.json's
ruff.configuration path (search for BEADS_DIR, ruff.configuration, and the
hardcoded /workspaces/ephemeral-pulumi-deploy strings) so the container config
becomes portable across workspace names.
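The dynamic resolution this prompt asks to extend can be sketched as follows (a hypothetical illustration — the `REPO_ROOT` variable name and the idea of emitting it for other configs are assumptions, not the project's actual keys):

```shell
# Hypothetical sketch: resolve the repo root relative to this script and
# emit it so other configs (compose volume mounts, env vars) could consume it.
script_dir="$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)"
repo_root="$(CDPATH= cd -- "$script_dir/.." && pwd)"
echo "REPO_ROOT=$repo_root"
```

Docker Compose can then substitute `${REPO_ROOT}` from an `.env` file, removing the hardcoded workspace name.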
```yaml
- name: Mark the required-check as succeeded so the PR can be merged
  if: ${{ github.event_name == 'pull_request' || github.event_name == 'merge_group' }}
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    gh api \
      -X POST -H "Accept: application/vnd.github.v3+json" \
      "${{ github.event.pull_request.statuses_url }}" \
      -f state=success -f context="required-check" -f description="✅ All required checks passed in the job triggered by pull_request" \
      -f target_url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd .github/workflows && head -160 ci.yaml | tail -25
```

Repository: LabAutomationAndScreening/ephemeral-pulumi-deploy
Length of output: 1400
🌐 Web query:
GitHub Actions merge_group event payload structure statuses_url
💡 Result:
The GitHub Actions merge_group event payload structure, particularly regarding statuses_url, is as follows based on available evidence.

The top-level payload includes:
- action: string (e.g., "checks_requested")
- merge_group: object containing:
  - base_ref: string (e.g., "refs/heads/main")
  - base_sha: string
  - head_commit: object with author, committer, id, message, timestamp, tree_id
  - head_ref: string (e.g., "refs/heads/gh-readonly-queue/main/pr-123-abc123def456")
  - head_sha: string (the commit SHA for status checks)
- organization: object
- repository: standard GitHub repository object

The statuses_url is not present at the top level or directly in merge_group. However, since repository follows the standard GitHub repository schema (as seen in common webhook payloads and schemas), it includes statuses_url: "https://api.github.com/repos/{owner}/{repo}/statuses/{sha}", where {sha} is typically the head commit SHA. Status checks for merge groups are reported against the merge_group.head_sha using this URL, as confirmed by GitHub documentation on merge queues requiring status checks on the merge group head commit.

Official GitHub Docs do not provide a complete merge_group payload example, but schemas from octokit/webhooks and API examples confirm the repository object inclusion with standard fields like statuses_url. Sample payloads from third-party sources (e.g., MagicBell) show the repository object with expected fields like html_url, implying standard properties including statuses_url. Confidence is high for the structure from multiple confirming sources, medium for exact statuses_url presence due to reliance on standard repository schema rather than a direct merge_group example.
Citations:
- 1: https://docs.github.com/en/webhooks/webhook-events-and-payloads
- 2: https://www.magicbell.com/workflows/github/merge-group-checks-requested
- 3: https://docs.github.com/actions/using-workflows/events-that-trigger-workflows
- 4: https://github.blog/changelog/2022-08-18-merge-group-webhook-event-and-github-actions-workflow-trigger
- 5: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-a-merge-queue
- 6: https://github.apidog.io/schema-699783
- 7: https://github.com/octokit/webhooks
- 8: https://docs.github.com/en/rest/pulls/pulls
Split status API calls by event type to support merge-queue runs.
The step at lines 142–151 runs on both pull_request and merge_group events but posts status only to github.event.pull_request.statuses_url. The merge_group event payload does not contain pull_request context—its status must be reported to /repos/{owner}/{repo}/statuses/{merge_group_head_sha} instead. On merge-group triggers, the current code will fail this step, blocking the merge queue.
Split into two separate steps: one for pull_request using github.event.pull_request.statuses_url, and one for merge_group using /repos/${{ github.repository }}/statuses/${{ github.sha }}.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yaml around lines 142 - 151, The step "Mark the
required-check as succeeded so the PR can be merged" currently posts only to
github.event.pull_request.statuses_url and fails for merge_group runs; split
this into two steps conditioned by event type: one step with if:
github.event_name == 'pull_request' that posts to ${{
github.event.pull_request.statuses_url }}, and a second step with if:
github.event_name == 'merge_group' that POSTs to /repos/${{ github.repository
}}/statuses/${{ github.sha }} (keeping the same GH_TOKEN env and payload fields
state/context/description/target_url) so merge-queue runs report status to the
merge_group head SHA.
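The event-conditional targeting described above can be sketched in shell (a hypothetical illustration — the values are placeholders standing in for the `github.*` context, and no API call is actually made):

```shell
# Hypothetical sketch: pick the status-API target by event type, mirroring
# the suggested two-step split (placeholder values, not real run data).
event_name="merge_group"
repository="owner/repo"
sha="abc123"
pr_statuses_url="https://api.github.com/repos/owner/repo/statuses/def456"
if [ "$event_name" = "pull_request" ]; then
    target="$pr_statuses_url"
else
    target="/repos/$repository/statuses/$sha"
fi
echo "$target"  # prints /repos/owner/repo/statuses/abc123
```

In the workflow itself this would be two separate steps with `if:` conditions rather than a runtime branch, but the target selection is the same.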
```shell
COMMIT_LINE=$(grep "^_commit:" "$ANSWERS_FILE")
if echo "$COMMIT_LINE" | grep -q "-"; then
  echo "Error: $COMMIT_LINE"
  echo "_commit must be a clean release tag (e.g. v0.0.111), not a dev commit (e.g. v0.0.106-14-g7847d7b)"
  exit 1
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
check() {
  local label="$1"
  local content="$2"
  COMMIT_LINE=$(printf '%s\n' "$content" | grep '^_commit:' || true)
  if echo "$COMMIT_LINE" | grep -q '-'; then
    result=fail
  else
    result=pass
  fi
  printf '%-15s -> %s | %q\n' "$label" "$result" "$COMMIT_LINE"
}
check missing 'foo: bar'
check nonsensical '_commit: main'
check inline_comment '_commit: v0.0.111 # stable-release'
```

Repository: LabAutomationAndScreening/ephemeral-pulumi-deploy
Length of output: 226
Parse and validate the _commit value against the semantic version format.
The current validation checks whether the raw line contains a hyphen, which creates three issues:

- A missing `_commit` passes the check (an empty line has no hyphen)
- Invalid values like `_commit: main` pass
- Valid tags with inline comments like `_commit: v0.0.111 # stable-release` fail

Extract the value portion, strip comments and whitespace, then validate it matches the `v[0-9]+\.[0-9]+\.[0-9]+` format.
Proposed fix

```diff
 - name: Check _commit is a clean release tag
   run: |
     ANSWERS_FILE="${{ inputs.answers_file }}"
     if [ ! -f "$ANSWERS_FILE" ]; then
       echo "Error: $ANSWERS_FILE not found"
       exit 1
     fi
-    COMMIT_LINE=$(grep "^_commit:" "$ANSWERS_FILE")
-    if echo "$COMMIT_LINE" | grep -q "-"; then
-      echo "Error: $COMMIT_LINE"
+    COMMIT_VALUE=$(grep -m1 "^_commit:" "$ANSWERS_FILE" | cut -d: -f2- | cut -d# -f1 | xargs)
+    if [ -z "$COMMIT_VALUE" ]; then
+      echo "Error: _commit not found in $ANSWERS_FILE"
+      exit 1
+    fi
+    if ! echo "$COMMIT_VALUE" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
+      echo "Error: _commit=$COMMIT_VALUE"
       echo "_commit must be a clean release tag (e.g. v0.0.111), not a dev commit (e.g. v0.0.106-14-g7847d7b)"
       exit 1
     fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
COMMIT_VALUE=$(grep -m1 "^_commit:" "$ANSWERS_FILE" | cut -d: -f2- | cut -d# -f1 | xargs)
if [ -z "$COMMIT_VALUE" ]; then
  echo "Error: _commit not found in $ANSWERS_FILE"
  exit 1
fi
if ! echo "$COMMIT_VALUE" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "Error: _commit=$COMMIT_VALUE"
  echo "_commit must be a clean release tag (e.g. v0.0.111), not a dev commit (e.g. v0.0.106-14-g7847d7b)"
  exit 1
fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/confirm-on-tagged-copier-template.yaml around lines 29 -
33, The current COMMIT_LINE check is brittle; instead extract the value after
the ":" from COMMIT_LINE, strip any inline comments (text after "#") and
surrounding whitespace, ensure the resulting commit_value is non-empty, then
validate it against the regex ^v[0-9]+\.[0-9]+\.[0-9]+$; replace the simple
hyphen check with this logic in the block that sets COMMIT_LINE (reading from
ANSWERS_FILE) and on failure echo a clear error message and exit 1.
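How the suggested regex classifies sample `_commit` values can be demonstrated standalone (a hypothetical exercise — the real check reads the value from the answers file):

```shell
# Hypothetical demonstration of the suggested release-tag regex against sample values.
for v in "v0.0.111" "v0.0.106-14-g7847d7b" "main"; do
    if printf '%s' "$v" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
        echo "accept: $v"
    else
        echo "reject: $v"
    fi
done
```

This prints `accept: v0.0.111` followed by `reject:` lines for the dev commit and the non-tag value, covering the failure modes the comment lists.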
```diff
 # Project Structure

 This project is a Python library.

 # Code Guidelines

 ## Code Style

 - Comments should be used very rarely. Code should generally express its intent.
 - Never write a one-line docstring — either the name is sufficient or the behavior warrants a full explanation.
 - Don't sort or remove imports manually — pre-commit handles it.
 - Always include type hints for pyright in Python
 - Respect the pyright rule reportUnusedCallResult; assign unneeded return values to `_`
 - Prefer keyword-only parameters (unless a very clear single-argument function): use `*` in Python signatures and destructured options objects in TypeScript.
 - When disabling a linting rule with an inline directive, provide a comment at the end of the line (or on the line above for tools that don't allow extra text after an inline directive) describing the reasoning for disabling the rule.
 - Avoid telling the type checker what a type is rather than letting it prove it. This includes type assertions (`as SomeType` in TypeScript, `cast()` in Python) and variable annotations that override inference. Prefer approaches that let the type checker verify the type itself: `isinstance`/`instanceof` narrowing, restructuring code so the correct type flows naturally, or using discriminated unions. When there is genuinely no alternative, add a comment explaining why the workaround is necessary and why it is safe.

 ## Testing

-- Always run tests with an explicit path (e.g. uv run pytest tests/unit) — test runners discover all types by default.
+- Always run tests with an explicit path (e.g. uv run pytest tests/unit) — test runners discover all types (unit, integration, E2E...) by default.
+- When iterating on a single test, run that test in isolation first and confirm it is in the expected state (red or green) before widening to the full suite. Use the most targeted invocation available: a specific test function for Python (e.g. `uv run pytest path/to/test.py::test_name --no-cov`) or a file path and name filter for TypeScript (e.g. `pnpm test-unit -- path/to/test.spec.ts -t "test name" --no-coverage`). Only run the full suite once the target test behaves as expected.
+- Test coverage requirements are usually at 100%, so when running a subset of tests, always disable test coverage to avoid the test run failing for insufficient coverage.
+- Avoid magic values in comparisons in tests in all languages (like ruff rule PLR2004 specifies)
+- Prefer using random values in tests rather than arbitrary ones (e.g. the faker library, uuids, random.randint) when possible. For enums, pick randomly rather than hardcoding one value.
+- Avoid loops in tests — assert each item explicitly so failures pinpoint the exact element. When verifying a condition across all items in a collection, collect the violations into a list and assert it's empty (e.g., assert [x for x in items if bad_condition(x)] == []).
+- Key `data-testid` selectors off unique IDs (e.g. UUIDs), not human-readable names which may collide or change.
+- When asserting a mock or spy was called with specific arguments, always constrain as tightly as possible. In order of preference: (1) assert called exactly once with those args (`assert_called_once_with` in Python, `toHaveBeenCalledExactlyOnceWith` in Vitest/Jest); (2) if multiple calls are expected, assert the total call count and use a positional or last-call assertion (`nthCalledWith`, `lastCalledWith` / `assert_has_calls` with `call_args_list[n]`); (3) plain "called with at any point" (`toHaveBeenCalledWith`, `assert_called_with`) is a last resort only when neither the call count nor the call order can reasonably be constrained.

 ### Python Testing

 - When using `mocker.spy` on a class-level method (including inherited ones), the spy records the unbound call, so assertions need `ANY` as the first argument to match self: `spy.assert_called_once_with(ANY, expected_arg)`
+- Before writing new mock/spy helpers, check the `tests/unit/` folder for pre-built helpers in files like `fixtures.py` or `*mocks.py`
+- When a test needs a fixture only for its side effects (not its return value), use `@pytest.mark.usefixtures(fixture_name.__name__)` instead of adding an unused parameter with a noqa comment
+- Use `__name__` instead of string literals when referencing functions/methods (e.g., `mocker.patch.object(MyClass, MyClass.method.__name__)`, `pytest.mark.usefixtures(my_fixture.__name__)`). This enables IDE refactoring tools to catch renames.
+- When using the faker library, prefer the pytest fixture (provided by the faker library) over instantiating instances of Faker.
+- **Choosing between cassettes and mocks:** At the layer that directly wraps an external API or service, strongly prefer VCR cassette-recorded interactions (via pytest-recording/vcrpy) — they capture real HTTP traffic and verify the wire format, catching integration issues that mocks would miss. At layers above that (e.g. business logic, route handlers), mock the wrapper layer instead (e.g. `mocker.patch.object(ThresholdsRepository, ...)`) — there is no value in re-testing the HTTP interaction from higher up.
+- **Never hand-write VCR cassette YAML files.** Cassettes must be recorded from real HTTP interactions by running the test once with `--record-mode=once` against a live external service: `uv run pytest --record-mode=once <test path> --no-cov`. The default mode is `none` — a missing cassette will cause an error, which is expected until recorded.
+- **Never hand-edit syrupy snapshot files.** Snapshots are auto-generated — to create or update them, run `uv run pytest --snapshot-update <test path> --no-cov`. A missing snapshot causes the test to fail, which is expected until you run with `--snapshot-update`. When a snapshot mismatch occurs, fix the code if the change was unintentional; run `--snapshot-update` if it was intentional.
+- **Never hand-write or hand-edit pytest-reserial `.jsonl` recording files.** Recordings must be captured from real serial port traffic by running the test with `--record` while the device is connected: `uv run pytest --record <test path> --no-cov`. The default mode replays recordings — a missing recording causes an error, which is expected until recorded against a live device.

+### Frontend Testing
+
+- Key `data-testid` selectors off unique IDs (e.g. UUIDs), not human-readable names which may collide or change.
+- In DOM-based tests, scope queries to the tightest relevant container. Only query `document` or `document.body` directly to find the top-level portal/popup element (e.g. a Reka UI dialog via `[role="dialog"][data-state="open"]`); all further queries should run on that element, not on `document.body` again.

 # Agent Implementations & Configurations

 ## Memory and Rules

 - Before saving any memory or adding any rule, explicitly ask the user whether the concept should be: (1) added to AGENTS.md as a general rule applicable across all projects, (2) added to AGENTS.md as a rule specific to this project, or (3) stored as a temporary local memory only relevant to the current active work. The devcontainer environment is ephemeral, so local memory files are rarely the right choice.

 ## Tooling

-- Always use `uv run python` instead of `python3` or `python` when running Python commands.
-- Prefer dedicated shell tools over `python3`/`python` for simple one-off tasks: use `jq` for JSON parsing, standard shell builtins for string manipulation, etc. Only reach for `python3` when no simpler tool covers the need.
+- ❌ Never use `python3` or `python` directly. ✅ Always use `uv run python` for Python commands.
+- ❌ Never use `python3`/`python` for one-off data tasks. ✅ Use `jq` for JSON parsing, standard shell builtins for string manipulation. Only reach for `uv run python` when no dedicated tool covers the need.
+- Check .devcontainer/devcontainer.json for tooling versions (Python, Node, etc.) when reasoning about version-specific stdlib or tooling behavior.
-- For frontend work, run commands via `pnpm` scripts from `frontend/package.json` — never invoke tools directly (not pnpm exec <tool>, npx <tool>, etc.). ✅ pnpm test-unit ❌ pnpm vitest ... or npx vitest ...
-- When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Use absolute paths instead of `cd` to avoid needing to chain. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
+- For frontend tests, run commands via `pnpm` scripts from `frontend/package.json` — never invoke tools directly (not pnpm exec <tool>, npx <tool>, etc.). ✅ pnpm test-unit ❌ pnpm vitest ... or npx vitest ...
+- For linting and type-checking, prefer `pre-commit run <hook-id>` over invoking tools directly — this matches the permission allow-list and mirrors what CI runs. Key hook IDs: `typescript-check`, `eslint`, `pyright`, `ruff`, `ruff-format`.
+- Never rely on IDE diagnostics for ruff warnings — the IDE may not respect the project's ruff.toml config. Run `pre-commit run ruff -a` to get accurate results.
+- When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Use `cd` to change to the directory you want before running the command, avoiding the need to chain. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
+- Never use `pnpm --prefix <path>` or `uv --directory <path>` to target a different directory — these flags break the permission allow-list matcher the same way chained `cd &&` commands do. Instead, rely on the working directory already being correct (the cwd persists between Bash tool calls), or issue a plain `cd <path>` as a separate prior tool call to reposition before running the command.
+- Never use backslash line continuations in shell commands — always write the full command on a single line. Backslashes break the permission allow-list matcher.
+- **Never manually edit files in any `generated/` folder.** These files are produced by codegen tooling (typically Kiota) and any manual changes will be overwritten. If a generated file needs to change, update the source (e.g. the OpenAPI schema) and re-run the generator.
```
Missing documentation of autonomous agents, their responsibilities, and interaction patterns.
The file provides comprehensive guidelines FOR agents but doesn't document the agents themselves. As per coding guidelines, AGENTS.md should include:
- A list or registry of all autonomous agents operating in this repository (e.g., CodeRabbit, Dependabot, custom CI bots, etc.)
- Clear responsibility statements for each agent (what each one owns/does)
- Interaction patterns between agents (e.g., how agents coordinate, hand off work, or avoid conflicts)
Currently, the file only contains rules that agents should follow, not documentation of which agents exist and how they work together.
📋 Suggested structure to add
Consider adding a section like:
# Autonomous Agents
## Active Agents
### CodeRabbit
**Responsibility**: Automated code review on pull requests
**Triggers**: New PR creation, PR updates
**Outputs**: Review comments, approval/request changes
**Interactions**:
- Waits for CI checks before reviewing
- Does not auto-merge; requires human approval
### Dependabot
**Responsibility**: Dependency updates
**Triggers**: Weekly schedule, security advisories
**Outputs**: Pull requests with dependency updates
**Interactions**:
- PRs are reviewed by CodeRabbit
- Auto-merge enabled for patch updates if tests pass
[... other agents ...]
## Interaction Patterns
- **PR Workflow**: Dependabot creates PR → CI runs → CodeRabbit reviews → Human approves → Merge
- **Issue Creation**: [Define which agents can create issues and how they coordinate]

Based on coding guidelines requiring documentation of all autonomous agents, their responsibilities, and interaction patterns.
🧰 Tools
🪛 LanguageTool
[style] ~14-~14: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ... Prefer keyword-only parameters (unless a very clear single-argument function): use * in P...
(EN_WEAK_ADJECTIVE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` around lines 1 - 63, Add a new "Autonomous Agents" section to
AGENTS.md that enumerates every agent (e.g., CodeRabbit, Dependabot, custom CI
bots), gives a one-line responsibility for each (what they own/produce), lists
their triggers and outputs, and documents interaction patterns and handoff rules
(e.g., PR Workflow: Dependabot → CI → CodeRabbit → human approve → merge) so
readers can see coordination and conflict-avoidance; place the section near the
top-level guidelines and use headings like "Active Agents", "Responsibility",
"Triggers", "Outputs", and "Interactions" to match the suggested structure.
- Don't sort or remove imports manually — pre-commit handles it.
- Always include type hints for pyright in Python
- Respect the pyright rule reportUnusedCallResult; assign unneeded return values to `_`
- Prefer keyword-only parameters (unless a very clear single-argument function): use `*` in Python signatures and destructured options objects in TypeScript.
🧹 Nitpick | 🔵 Trivial
Optional: Consider alternative phrasing for "very clear".
Static analysis suggests "very clear" is an over-used intensifier. Alternatives like "obviously single-argument" or "unambiguously single-argument" might be slightly stronger, but the current phrasing is perfectly understandable.
🧰 Tools
🪛 LanguageTool
[style] ~14-~14: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ... Prefer keyword-only parameters (unless a very clear single-argument function): use * in P...
(EN_WEAK_ADJECTIVE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` at line 14, Replace the phrase "very clear single-argument
function" in the guideline sentence that reads "Prefer keyword-only parameters
(unless a very clear single-argument function): use `*` in Python signatures and
destructured options objects in TypeScript." with a less colloquial alternative
such as "an unambiguously single-argument function" or "an obviously
single-argument function" so the wording is more precise and avoids the
intensifier "very clear."
- When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Use `cd` to change to the directory you want before running the command, avoiding the need to chain. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
- Never use `pnpm --prefix <path>` or `uv --directory <path>` to target a different directory — these flags break the permission allow-list matcher the same way chained `cd &&` commands do. Instead, rely on the working directory already being correct (the cwd persists between Bash tool calls), or issue a plain `cd <path>` as a separate prior tool call to reposition before running the command.
🧹 Nitpick | 🔵 Trivial
Consider mentioning absolute paths as the preferred alternative to cd.
The guidance recommends using cd as a separate tool call to avoid chaining, but doesn't mention that using absolute paths is often simpler and avoids the need for cd entirely. Based on learnings, absolute paths are preferred over cd commands when possible.
📝 Suggested clarification
```diff
-When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Use `cd` to change to the directory you want before running the command, avoiding the need to chain. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
+When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Prefer using absolute paths in commands to avoid needing `cd` at all; if that's not feasible, use `cd` to change to the directory you want as a separate prior tool call. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
```

Based on learnings stating: "Use absolute paths instead of cd to avoid needing to chain."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- When running terminal commands, execute exactly one command per tool call. Do not chain commands with &&, ||, ;, or & — this prohibition has no exceptions, even for `cd && ...` patterns. Prefer using absolute paths in commands to avoid needing `cd` at all; if that's not feasible, use `cd` to change to the directory you want as a separate prior tool call. Pipes (|) are allowed for output transformation (e.g., head, tail, grep). If two sequential commands are needed, run them in separate tool calls. Chained commands break the permission allow-list matcher and cause unnecessary permission prompts
- Never use `pnpm --prefix <path>` or `uv --directory <path>` to target a different directory — these flags break the permission allow-list matcher the same way chained `cd &&` commands do. Instead, rely on the working directory already being correct (the cwd persists between Bash tool calls), or issue a plain `cd <path>` as a separate prior tool call to reposition before running the command.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` around lines 59 - 60, Update the two bullets that instruct using a
separate cd call (the "When running terminal commands..." and the "Never use
`pnpm --prefix <path>`..." bullets) to also recommend preferring absolute paths
as an alternative to issuing `cd` (so you can avoid an extra tool call
entirely); add a concise sentence stating that using absolute/full paths to
target files or binaries is often simpler and preserves the permission
allow-list matcher, and give a short note that this is preferred when feasible
over separate `cd` calls.
Actionable comments posted: 1
♻️ Duplicate comments (2)
.github/workflows/ci.yaml (1)
143-152: ⚠️ Potential issue | 🔴 Critical

Split the status update by event type.
This step still uses `github.event.pull_request.statuses_url` for both `pull_request` and `merge_group`. GitHub documents `merge_group` as a separate event, with `GITHUB_SHA` set to the merge-group SHA, and says checks for that event should report back on the head SHA via commit statuses. The commit-status API itself is `POST /repos/{owner}/{repo}/statuses/{sha}`, so merge-queue runs need a repository/SHA endpoint instead of the PR payload URL. (docs.github.com)

🔧 Suggested fix
```diff
-      - name: Mark the required-check as succeeded so the PR can be merged
-        if: ${{ github.event_name == 'pull_request' || github.event_name == 'merge_group' }}
+      - name: Mark the required-check as succeeded on pull_request
+        if: ${{ github.event_name == 'pull_request' }}
         env:
           GH_TOKEN: ${{ github.token }}
         run: |
           gh api \
             -X POST -H "Accept: application/vnd.github.v3+json" \
             "${{ github.event.pull_request.statuses_url }}" \
             -f state=success -f context="required-check" -f description="✅ All required checks passed in the job triggered by pull_request" \
             -f target_url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
+
+      - name: Mark the required-check as succeeded on merge_group
+        if: ${{ github.event_name == 'merge_group' }}
+        env:
+          GH_TOKEN: ${{ github.token }}
+        run: |
+          gh api \
+            -X POST -H "Accept: application/vnd.github.v3+json" \
+            "/repos/${{ github.repository }}/statuses/${{ github.sha }}" \
+            -f state=success -f context="required-check" -f description="✅ All required checks passed in the job triggered by merge_group" \
+            -f target_url="${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
```

```bash
#!/bin/bash
set -euo pipefail
sed -n '143,152p' .github/workflows/ci.yaml
curl -fsSL https://docs.github.com/en/webhooks/webhook-events-and-payloads | grep -n 'head SHA and report status back using check runs or commit statuses'
curl -fsSL https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows | grep -n 'SHA of the merge group'
curl -fsSL https://docs.github.com/en/rest/commits/statuses | grep -n 'post/repos/{owner}/{repo}/statuses/{sha}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yaml around lines 143 - 152, The status update step currently posts to github.event.pull_request.statuses_url for both pull_request and merge_group events; update the step to branch on github.event_name (or the existing if condition) and, when github.event_name == "merge_group", call the commit-status endpoint for the merge-group head SHA (use GITHUB_SHA or github.sha and POST to /repos/{owner}/{repo}/statuses/{sha}) instead of using github.event.pull_request.statuses_url; keep the existing success payload (state, context, description, target_url built from github.server_url, github.repository, github.run_id) but construct the API URL from owner/repo/sha for merge_group so the status reports to the commit rather than the PR payload URL.

AGENTS.md (1)
46-50: ⚠️ Potential issue | 🟠 Major

Document the actual autonomous agents and their handoffs.
This update adds more agent rules, but AGENTS.md still doesn't identify which autonomous agents operate in this repo, what each one owns, or how they interact. Please add an explicit registry/interaction section here instead of only behavioral guidance.
As per coding guidelines, "Create an AGENTS.md file at the repository root to document all autonomous agents, their responsibilities, and interaction patterns," "Document agent responsibilities clearly in AGENTS.md to establish ownership and accountability for autonomous processes," and "Define interaction patterns between agents in AGENTS.md to ensure predictable and maintainable autonomous behavior."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 46 - 50, Add an explicit "Agent Registry" and "Interaction Patterns" section to AGENTS.md (in addition to the existing "Memory and Rules" section) that enumerates each autonomous agent by name, ownership/responsibility, inputs/outputs, and lifecycle (e.g., "agent_name: responsibilities, owned_by, triggers, outputs"), and then document handoffs and communication patterns between agents under "Interaction Patterns" (include synchronous vs asynchronous handoffs, expected message formats, error/fallback behavior, and where memory decisions are made). Ensure each agent entry references the exact agent name used elsewhere in the repo and link to the "Memory and Rules" decision flow so readers can see where agent-specific rules vs local memory are applied.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.copier-answers.yml:
- Line 2: The _commit entry in .copier-answers.yml is set to a non-tagged value
("v0.0.63-1-g3d9cd7a"), which fails the new Copier template check that requires
a clean release tag; update the _commit key to a proper tagged revision (e.g. a
tag like v0.0.63) or rerun the copier generation against a tagged template
release so _commit contains a tag without hyphenated build metadata, ensuring
the new workflow check passes.
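A minimal sketch of the kind of check this prompt describes — detecting `git describe`-style build metadata in `_commit`. The file path, field parsing, and regex here are illustrative assumptions, not the actual workflow implementation:

```bash
#!/usr/bin/env bash
# Write a sample answers file so the sketch is self-contained;
# a real check would read the repo's own .copier-answers.yml.
cat > /tmp/copier-answers-sample.yml <<'EOF'
_commit: v0.0.63-1-g3d9cd7a
_src_path: gh:example-org/example-template
EOF

commit="$(awk '/^_commit:/ {print $2}' /tmp/copier-answers-sample.yml)"

# A clean release tag looks like v0.0.63; a "-<N>-g<sha>" suffix means the
# template was rendered from commits past the tag.
if [[ "$commit" =~ -[0-9]+-g[0-9a-f]+$ ]]; then
  echo "FAIL: _commit '$commit' is not a tagged template release"
else
  echo "OK: _commit '$commit' is a clean tag"
fi
```

Run against the sample above, this prints the FAIL branch; re-running copier against a tagged release (so `_commit` becomes e.g. `v0.0.63`) would make it pass.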
---
Duplicate comments:
In @.github/workflows/ci.yaml:
- Around line 143-152: The status update step currently posts to
github.event.pull_request.statuses_url for both pull_request and merge_group
events; update the step to branch on github.event_name (or the existing if
condition) and, when github.event_name == "merge_group", call the commit-status
endpoint for the merge-group head SHA (use GITHUB_SHA or github.sha and POST to
/repos/{owner}/{repo}/statuses/{sha}) instead of using
github.event.pull_request.statuses_url; keep the existing success payload
(state, context, description, target_url built from github.server_url,
github.repository, github.run_id) but construct the API URL from owner/repo/sha
for merge_group so the status reports to the commit rather than the PR payload
URL.
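The endpoint construction the prompt calls for can be sketched in isolation. The repository and SHA values below are placeholders; in the workflow they would come from `github.repository` and `github.sha`:

```bash
#!/usr/bin/env bash
# Placeholder values standing in for the real GitHub Actions context.
GITHUB_REPOSITORY="octo-org/example-repo"
GITHUB_SHA="3d9cd7a00000000000000000000000000000000a"

# merge_group payloads have no pull_request.statuses_url, so the
# commit-status endpoint is built from repository + head SHA instead.
endpoint="/repos/${GITHUB_REPOSITORY}/statuses/${GITHUB_SHA}"
echo "${endpoint}"
# → /repos/octo-org/example-repo/statuses/3d9cd7a00000000000000000000000000000000a

# In the workflow, this endpoint would then be passed to:
#   gh api -X POST "${endpoint}" -f state=success -f context="required-check"
```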
In `@AGENTS.md`:
- Around line 46-50: Add an explicit "Agent Registry" and "Interaction Patterns"
section to AGENTS.md (in addition to the existing "Memory and Rules" section)
that enumerates each autonomous agent by name, ownership/responsibility,
inputs/outputs, and lifecycle (e.g., "agent_name: responsibilities, owned_by,
triggers, outputs"), and then document handoffs and communication patterns
between agents under "Interaction Patterns" (include synchronous vs asynchronous
handoffs, expected message formats, error/fallback behavior, and where memory
decisions are made). Ensure each agent entry references the exact agent name
used elsewhere in the repo and link to the "Memory and Rules" decision flow so
readers can see where agent-specific rules vs local memory are applied.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e7b6a119-bebb-47ea-8902-af3fb5a9f0b1
📒 Files selected for processing (6)
- .claude/commands/red.md
- .copier-answers.yml
- .devcontainer/devcontainer.json
- .devcontainer/manual-setup-deps.py
- .github/workflows/ci.yaml
- AGENTS.md
Pull in upstream template changes
Summary by CodeRabbit
New Features
Improvements
Documentation
Dependencies