feat(cli): add GitHub repository package installation #238

khaliqgant wants to merge 10 commits into main from
Conversation
Adds the ability to install packages directly from GitHub repositories without requiring a `prpm.json` manifest. Auto-discovers AI tool configurations by scanning for known file patterns.

Supported syntax:
- `prpm install owner/repo`
- `prpm install github:owner/repo`
- `prpm install owner/repo@tag`
- `prpm install owner/repo:path/to/file`

Features:
- Downloads via the GitHub tarball API (no git required)
- Auto-discovers `.cursor/rules`, `.claude/skills`, `CLAUDE.md`, etc.
- Tracks the commit SHA in the lockfile for reproducibility
- Supports `GITHUB_TOKEN` for private repos
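The spec forms listed above can be handled by a small parser. The sketch below is illustrative only, not the CLI's actual implementation; the `GitHubSpec` shape and field names are assumptions.

```typescript
// Illustrative sketch of parsing the supported spec forms; the real
// CLI parser may differ. `GitHubSpec` and its fields are assumptions.
interface GitHubSpec {
  owner: string;
  repo: string;
  ref?: string;      // from owner/repo@tag
  filePath?: string; // from owner/repo:path/to/file
}

function parseGitHubSpec(input: string): GitHubSpec | null {
  // Strip an explicit prefix if present
  const stripped = input.replace(/^(github:|gh:)/, '');
  // owner/repo[@ref][:path]
  const match = stripped.match(/^([\w.-]+)\/([\w.-]+?)(?:@([^:]+))?(?::(.+))?$/);
  if (!match) return null;
  const [, owner, repo, ref, filePath] = match;
  return { owner, repo, ref, filePath };
}
```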
- Remove auto-detection of the bare `owner/repo` pattern to avoid ambiguity
- Only `github:`, `gh:`, and `https://github.com` URLs trigger a GitHub install
- Add a package count summary grouped by format
- Prompt the user to confirm when multiple packages are found
- Show how to install specific files if the user declines
- Cache downloaded tarballs in ~/.prpm/cache/github/{owner}/{repo}/{sha}.tar.gz
- Check lockfile for previously installed commit SHA
- Skip reinstall if commit SHA unchanged (use -y to force)
- Show "(cached)" indicator when serving from cache
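The caching behavior above boils down to two small decisions: where a tarball lives on disk, and whether a reinstall can be skipped. A minimal sketch, with illustrative names (`lockedSha`, `force`) that are not the CLI's actual identifiers:

```typescript
import * as path from 'node:path';
import * as os from 'node:os';

// Cache location described above: ~/.prpm/cache/github/{owner}/{repo}/{sha}.tar.gz
function tarballCachePath(owner: string, repo: string, sha: string): string {
  return path.join(os.homedir(), '.prpm', 'cache', 'github', owner, repo, `${sha}.tar.gz`);
}

// Skip the reinstall only when the lockfile already records the same
// commit SHA and the user did not force (-y).
function shouldSkipReinstall(lockedSha: string | undefined, remoteSha: string, force: boolean): boolean {
  return !force && lockedSha !== undefined && lockedSha === remoteSha;
}
```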
- Add a `github-cache` route in the registry for shared tarball caching
- Implement a 3-tier cache hierarchy: local -> registry -> GitHub
- Add a `cacheSource` field to show where the tarball was loaded from
- Auto-register discovered GitHub packages to the PRPM registry
- Display the cache source (local/registry/GitHub) in install output
The shared tarball cache already provides caching benefits for all users. Auto-publishing to registry would incorrectly attribute packages to the downloading user instead of the GitHub repository owner.
- Add a `/register` endpoint for GitHub package registration
- Packages are attributed to the GitHub owner, not the downloading user
- Deduplication by content hash prevents duplicate storage
- Extract license info from the repository
- Fire-and-forget registration doesn't block installs
Unit tests (CLI):
- `generateRegistryPackageId`: lowercase package ID generation
- `computeIntegrity`: SHA-256 hash computation for tarballs

Integration tests (Registry):
- `GET /github-cache/:owner/:repo/:sha`: cache hit/miss, LRU tracking
- `POST /github-cache/:owner/:repo/:sha`: store, skip if exists
- `GET /github-cache/stats`: cache statistics
- `POST /github-cache/register`: new package, duplicate detection, metadata storage
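The two unit-tested helpers might look roughly like this; these are sketches consistent with the test names above (the real output format, e.g. hex vs. base64 digest, may differ):

```typescript
import { createHash } from 'node:crypto';

// Sketch of a SHA-256 integrity helper as described by the unit tests;
// the actual digest encoding in the CLI is an assumption here.
function computeIntegrity(tarball: Buffer): string {
  return `sha256-${createHash('sha256').update(tarball).digest('hex')}`;
}

// Sketch of lowercase package ID generation; the real ID scheme may
// include other components.
function generateRegistryPackageId(owner: string, repo: string, name: string): string {
  return `${owner}/${repo}/${name}`.toLowerCase();
}
```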
3 issues found across 7 files
<file name="packages/cli/src/commands/install.ts">
<violation number="1" location="packages/cli/src/commands/install.ts:518">
P1: Path traversal vulnerability: `file.path` from `additionalFiles` is not validated before being used in `path.join`. A malicious repository could use paths like `../../../.bashrc` to write files outside the intended directory. Consider reusing the existing `hasUnsafePathPatterns` and `isPathSafe` functions that are already used for tarball extraction.</violation>
</file>
<file name="packages/cli/src/core/github.ts">
<violation number="1" location="packages/cli/src/core/github.ts:109">
P2: Duplicate pattern for `AGENTS.md` - the codex format pattern on this line will never match because an identical pattern for 'agents.md' format appears earlier in the array (line 94), and the code breaks after the first match. Consider using a different pattern like `/^codex\.md$/i` or removing one of the duplicates.</violation>
</file>
<file name="packages/registry/src/routes/github-cache.ts">
<violation number="1" location="packages/registry/src/routes/github-cache.ts:325">
P1: Missing TTL on stored package data. Unlike the tarball cache which uses `setex` with TTL and LRU eviction, the hash, metadata, and content keys in `/register` are stored with `set` and never expire. This could lead to unbounded Redis memory growth. Consider using `setex` with a TTL or implementing LRU eviction similar to the tarball cache.</violation>
</file>
```ts
if (pkg.additionalFiles) {
  const pkgDir = path.dirname(destPath);
  for (const file of pkg.additionalFiles) {
    const fileDest = path.join(pkgDir, file.path);
```
P1: Path traversal vulnerability: file.path from additionalFiles is not validated before being used in path.join. A malicious repository could use paths like ../../../.bashrc to write files outside the intended directory. Consider reusing the existing hasUnsafePathPatterns and isPathSafe functions that are already used for tarball extraction.
Suggested change:

```ts
if (hasUnsafePathPatterns(file.path) || !isPathSafe(pkgDir, file.path)) {
  console.log(`  ${chalk.yellow('⚠️ Skipped unsafe path:')} ${file.path}`);
  continue;
}
const fileDest = path.join(pkgDir, file.path);
```
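For illustration, helpers like the ones named in the review comment could be implemented roughly as below. These are hypothetical sketches, not the CLI's actual `hasUnsafePathPatterns`/`isPathSafe`:

```typescript
import * as path from 'node:path';

// Hypothetical sketch of the safety helpers referenced in the review;
// the CLI's real implementations may differ.
function hasUnsafePathPatterns(p: string): boolean {
  // Reject parent-directory segments, absolute paths, and NUL bytes.
  return p.split(/[\\/]/).includes('..') || path.isAbsolute(p) || p.includes('\0');
}

function isPathSafe(baseDir: string, p: string): boolean {
  // A path is safe only if it resolves inside baseDir.
  const resolved = path.resolve(baseDir, p);
  const rel = path.relative(path.resolve(baseDir), resolved);
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}
```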
```ts
// Store content hash -> package ID mapping for deduplication
await server.redis.set(hashKey, packageId);
```
P1: Missing TTL on stored package data. Unlike the tarball cache which uses setex with TTL and LRU eviction, the hash, metadata, and content keys in /register are stored with set and never expire. This could lead to unbounded Redis memory growth. Consider using setex with a TTL or implementing LRU eviction similar to the tarball cache.
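The `set` vs. `setex` distinction the comment turns on can be illustrated with a tiny in-memory stand-in for Redis (purely demonstrative; the registry presumably talks to a real Redis client, whose `setex(key, seconds, value)` attaches a TTL while plain `set` never expires):

```typescript
// In-memory illustration of `set` vs `setex` semantics. Not a real
// Redis client; it only models TTL bookkeeping.
class FakeRedis {
  private store = new Map<string, { value: string; expiresAt?: number }>();

  set(key: string, value: string): void {
    this.store.set(key, { value }); // no TTL: lives forever
  }

  setex(key: string, ttlSeconds: number, value: string): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  get(key: string, now = Date.now()): string | null {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== undefined && now >= entry.expiresAt) {
      this.store.delete(key); // expired
      return null;
    }
    return entry.value;
  }
}
```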
```ts
try {
  const content = await readFile(join(rootDir, spec.filePath), 'utf-8');
```
Suggestion: The spec.filePath value is joined directly with the extracted repo root and passed to readFile without any validation, so a user (or malicious command) can supply absolute or ..-containing paths (e.g. github:owner/repo:../../etc/passwd or github:owner/repo:/etc/passwd) that cause the CLI to read arbitrary local files outside the cloned repository and potentially leak their contents (e.g. via later registry upload); you should validate that the resolved path stays within the extracted repository directory before reading it. [security]
Severity Level: Critical 🚨
Suggested change:

```ts
// Resolve and validate that the requested path stays within the extracted repo
const requestedPath = join(rootDir, spec.filePath);
const relativeToRoot = relative(rootDir, requestedPath);
if (relativeToRoot.startsWith('..')) {
  throw new Error(`Invalid file path (must be inside repository): ${spec.filePath}`);
}
try {
  const content = await readFile(requestedPath, 'utf-8');
```
Why it matters? ⭐
The suggestion points out a real directory-traversal/absolute-path risk: join(rootDir, spec.filePath) will happily resolve "../something" or an absolute path and read files outside the extracted repo. The proposed guard using relative(rootDir, requestedPath) and rejecting paths that start with ".." (and implicitly handling absolute paths which produce a leading '..' when relative) fixes the vulnerability with minimal change. This is a security fix, not a cosmetic one, and directly relates to the code in the PR.
**Path:** `packages/cli/src/core/github.ts` · **Lines:** 400-401
```ts
import { FastifyInstance, FastifyRequest, FastifyReply } from 'fastify';
import { config } from '../config.js';
```
Suggestion: The imported config value from ../config.js is never used in this module, which is dead code and may cause linting or build failures in setups that treat unused imports as errors; it should be removed. [code quality]
Severity Level: Minor
Suggested change (remove the line):

```ts
import { config } from '../config.js';
```
Why it matters? ⭐
The import of config is unused in this module (the file never references config). Removing the import is a small, correct cleanup that avoids linter warnings or build errors in strict configurations. This is a safe code-quality improvement.
**Path:** `packages/registry/src/routes/github-cache.ts` · **Line:** 7

```ts
  description?: string;
};

try {
```
Suggestion: The /register endpoint accepts arbitrary content strings and writes them directly to Redis without any size limit, which allows a client to send a single extremely large payload and consume excessive memory or Redis storage, enabling a denial-of-service; you should enforce a reasonable maximum content length and reject bodies that exceed it. [security]
Severity Level: Critical 🚨
Suggested change:

```ts
try {
  // Protect Redis and the server from overly large package bodies
  const MAX_PACKAGE_CONTENT_SIZE = 1024 * 1024; // 1MB
  if (body.content.length > MAX_PACKAGE_CONTENT_SIZE) {
    return reply.status(400).send({
      error: 'Package content too large',
    });
  }
```
Why it matters? ⭐
This is a valid security/resource-protection concern: the register endpoint accepts arbitrary content and writes it into Redis with no size checks. A malicious client could send very large content and exhaust memory or Redis storage. Adding an explicit maximum content size (and validating before any Redis writes) is a real hardening change, not merely stylistic.
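One subtlety worth noting when adding such a cap: `content.length` counts UTF-16 code units, not bytes, so multi-byte text can be up to twice (or, for astral characters, about the same factor again) larger on the wire than its `length` suggests. `Buffer.byteLength` measures what Redis would actually store. A small sketch (the constant mirrors the suggestion; `isContentTooLarge` is a hypothetical helper):

```typescript
// `string.length` counts UTF-16 code units; Buffer.byteLength counts
// the bytes Redis would actually store. For a size cap, bytes are the
// safer measure.
const MAX_PACKAGE_CONTENT_SIZE = 1024 * 1024; // 1 MB, as in the suggestion

function isContentTooLarge(content: string): boolean {
  return Buffer.byteLength(content, 'utf-8') > MAX_PACKAGE_CONTENT_SIZE;
}

// 300k emoji: 600k UTF-16 code units but 1.2M UTF-8 bytes.
const emoji = '😀'.repeat(300_000);
```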
**Path:** `packages/registry/src/routes/github-cache.ts` · **Line:** 303

```ts
// Store content hash -> package ID mapping for deduplication
await server.redis.set(hashKey, packageId);

// Store package metadata
const metadataKey = `github:package:${packageId}`;
const metadata = {
  owner: body.owner,
  repo: body.repo,
  commitSha: body.commitSha,
  name: body.name,
  format: body.format,
  subtype: body.subtype,
  sourcePath: body.sourcePath,
  contentHash: body.contentHash,
  license: body.license,
  description: body.description,
  createdAt: new Date().toISOString(),
};

await server.redis.set(metadataKey, JSON.stringify(metadata));

// Store content (for future retrieval if needed)
const contentKey = `github:content:${packageId}`;
await server.redis.set(contentKey, body.content);
```
Suggestion: The deduplication mapping from content hash to package ID is written to Redis before the package metadata and content; if either of those later writes fails, you end up with a hash that points to a package ID for which no metadata/content exists, and future registrations with the same hash will be treated as duplicates but remain unusable. The mapping should only be stored after the metadata and content are successfully written so the system cannot enter this inconsistent state. [logic error]
Severity Level: Minor
Suggested change (write metadata and content first, then the dedup mapping):

```ts
// Store package metadata
const metadataKey = `github:package:${packageId}`;
const metadata = {
  owner: body.owner,
  repo: body.repo,
  commitSha: body.commitSha,
  name: body.name,
  format: body.format,
  subtype: body.subtype,
  sourcePath: body.sourcePath,
  contentHash: body.contentHash,
  license: body.license,
  description: body.description,
  createdAt: new Date().toISOString(),
};

await server.redis.set(metadataKey, JSON.stringify(metadata));

// Store content (for future retrieval if needed)
const contentKey = `github:content:${packageId}`;
await server.redis.set(contentKey, body.content);

// Store content hash -> package ID mapping for deduplication (only after successful writes)
await server.redis.set(hashKey, packageId);
```
Why it matters? ⭐
The suggestion points out a real logic flaw in the current PR: the content-hash -> packageId mapping is written before metadata/content, which can leave Redis in an inconsistent state if subsequent writes fail. Reordering the writes (or using a Redis transaction/multi/exec) to persist metadata and content first, then set the hash mapping, prevents creating unreachable "duplicates". This is a correctness fix, not a cosmetic change.
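The write-ordering argument can be demonstrated in miniature with an in-memory store and an injected write failure. This is purely illustrative (the `registerSafely` name and the injected-writer parameter are inventions for the demo); the key names mirror the review:

```typescript
// Demonstrates why the dedup mapping must be written last: if a later
// write throws, the hash key must not point at a package with no
// metadata/content. Illustrative only.
type Store = Map<string, string>;

function registerSafely(
  store: Store,
  hashKey: string,
  packageId: string,
  metadata: string,
  content: string,
  writeContent: (store: Store, key: string, value: string) => void,
): void {
  store.set(`github:package:${packageId}`, metadata);
  writeContent(store, `github:content:${packageId}`, content);
  // Only publish the dedup mapping once metadata and content exist.
  store.set(hashKey, packageId);
}
```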
**Path:** `packages/registry/src/routes/github-cache.ts` · **Lines:** 324-348

```ts
if (cachedCommitSha && cachedCommitSha === result.commitSha && !options.force) {
  console.log(`\n${chalk.green('✓')} Already up to date at commit ${chalk.dim(result.commitSha.slice(0, 7))}`);
  console.log(`  ${chalk.dim('Use -y or --force to reinstall')}`);
  return;
}
```
Suggestion: When a repository is already up to date, the message instructs the user to "Use -y or --force to reinstall", but the CLI only defines -y, --yes (no --force flag), so following the guidance will cause an unknown-option error instead of working as described; the message should reference the actual --yes flag. [logic error]
Severity Level: Minor
Suggested change:

```ts
console.log(`  ${chalk.dim('Use -y or --yes to reinstall')}`);
```
Why it matters? ⭐
The CLI defines '-y, --yes' for auto-confirming prompts and the action later uses options.yes to set force on installs; there is no global '--force' CLI option declared in this command. Instructing users to use '--force' is incorrect and will produce an unknown-option error. Updating the message to reference '-y' or '--yes' aligns documentation with actual flags.
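The flag mismatch is easy to reproduce with Node's built-in `util.parseArgs`, assuming (as the review states) that only `-y/--yes` is registered — an unrecognized `--force` then throws an unknown-option error:

```typescript
import { parseArgs } from 'node:util';

// Assuming the command registers only `-y, --yes` (per the review),
// `--force` is rejected as an unknown option in strict parsing.
function parseInstallFlags(argv: string[]) {
  return parseArgs({
    args: argv,
    options: { yes: { type: 'boolean', short: 'y' } },
    allowPositionals: true,
  });
}
```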
**Path:** `packages/cli/src/commands/install.ts` · **Lines:** 407-411

```ts
console.log(`\n${chalk.yellow('⚠️ No AI tool configurations found in this repository.')}`);
console.log(`\n  Looking for:`);
console.log(`  • ${chalk.dim('.cursor/rules/*.mdc')}`);
console.log(`  • ${chalk.dim('.claude/skills/**/*.md')}`);
console.log(`  • ${chalk.dim('CLAUDE.md, AGENTS.md')}`);
console.log(`  • ${chalk.dim('.cursorrules, .windsurfrules')}`);
console.log(`  • ${chalk.dim('and more...')}`);
console.log(`\n  ${chalk.dim('Specify a file path:')} prpm install ${repoName}:path/to/file.md`);
return;
```
Suggestion: The help text shown when no GitHub packages are discovered suggests running prpm install owner/repo:path/to/file.md, but looksLikeGitHubSpec only recognizes inputs with prefixes like github: or https://github.com/, so this example will be parsed as a registry package and fail; the example should include an explicit GitHub prefix so it is actually handled by the GitHub installer. [logic error]
Severity Level: Minor
Suggested change:

```ts
console.log(`\n  ${chalk.dim('Specify a file path:')} prpm install github:${repoName}:path/to/file.md`);
```
Why it matters? ⭐
The message suggests "prpm install owner/repo:path" but the code only detects GitHub specs when they match the patterns parsed by looksLikeGitHubSpec (the code imports looksLikeGitHubSpec and the actual detection earlier expects prefixes like 'github:' or full URLs). The example as printed will be parsed as a registry package and not handled by the GitHub installer, so the help text is misleading. Changing the printed example to include the explicit 'github:' prefix fixes user confusion.
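A sketch consistent with the detection behavior described (only explicit prefixes are treated as GitHub specs; bare `owner/repo` falls through to the registry). This is a hypothetical reconstruction, not the CLI's actual `looksLikeGitHubSpec`:

```typescript
// Hypothetical sketch of prefix-only GitHub spec detection, matching
// the behavior described in the review; the real function may differ.
function looksLikeGitHubSpec(spec: string): boolean {
  return (
    spec.startsWith('github:') ||
    spec.startsWith('gh:') ||
    spec.startsWith('https://github.com/')
  );
}
```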
**Path:** `packages/cli/src/commands/install.ts` · **Lines:** 414-422
CodeAnt AI finished reviewing your PR.

CodeAnt AI is running an incremental review.
2 issues found across 8 files (changes from recent commits).
<file name="public-documentation/cli/commands.mdx">
<violation number="1" location="public-documentation/cli/commands.mdx:413">
P2: Example comment is misleading: says 'Skip prompts when installing multiple packages' but the command only installs one repo. Either fix the comment or show an actual multi-package scenario.</violation>
<violation number="2" location="public-documentation/cli/commands.mdx:431">
P2: Documentation incorrectly lists Codex format discovering 'AGENTS.md'. According to the implementation, Codex only discovers 'codex.md' - AGENTS.md is handled by the 'Agents.md' format.</violation>
</file>
```shell
# Convert to a specific format
prpm install github:anthropics/skills --as cursor

# Skip prompts when installing multiple packages
```
P2: Example comment is misleading: says 'Skip prompts when installing multiple packages' but the command only installs one repo. Either fix the comment or show an actual multi-package scenario.
Suggested change:

```shell
# Skip prompts during installation
```
| Kiro | `.kiro/steering/*.md`, `.kiro/agents/*.json` |
| Agents.md | `AGENTS.md` |
| Aider | `.aider.conf.yml`, `CONVENTIONS.md` |
| Codex | `AGENTS.md`, `codex.md` |
P2: Documentation incorrectly lists Codex format discovering 'AGENTS.md'. According to the implementation, Codex only discovers 'codex.md' - AGENTS.md is handled by the 'Agents.md' format.
Suggested change:

| Codex | `codex.md` |
CodeAnt AI is running an incremental review.
```ts
      eager: options.eager,
    });
  }
}
```
Suggestion: GitHub-installed packages recorded in the lockfile cannot be faithfully reinstalled via installFromLockfile, because handleInstall only treats explicit github:/gh:/URL specs as GitHub sources and ignores lockfile entries with githubSource, so lockfile-driven reinstalls fall back to registry resolution using the derived package name and will either fail or install the wrong package instead of re-downloading from GitHub with the stored commit SHA. [logic error]
Severity Level: Critical 🚨
- ❌ Reinstalling GitHub-sourced packages from `prpm.lock` fails
- ⚠️ Lockfile reproducibility guarantees broken for GitHub installs
- ⚠️ CLI install-from-lockfile path uses registry resolution instead

Suggested change:

```ts
}
} else {
  // If not an explicit GitHub spec, check lockfile for a GitHub-sourced entry
  const lockfileForGitHub = await readLockfile();
  if (lockfileForGitHub) {
    for (const [key, pkg] of Object.entries(lockfileForGitHub.packages)) {
      const { packageId: lockedPackageId } = parseLockfileKey(key);
      if (lockedPackageId === packageSpec && pkg.githubSource) {
        const githubSpecFromLock: GitHubSpec = {
          owner: pkg.githubSource.owner,
          repo: pkg.githubSource.repo,
          ref: pkg.githubSource.ref,
          filePath: pkg.githubSource.sourcePath,
        };
        return await handleGitHubInstall(githubSpecFromLock, {
          as: options.as,
          subtype: options.subtype,
          force: options.force,
          location: options.location,
          noAppend: options.noAppend,
          manifestFile: options.manifestFile,
          eager: options.eager,
        });
      }
    }
  }
}
```
Steps of Reproduction ✅

1. Start with a project that previously installed a GitHub-sourced package using `handleGitHubInstall` (see `packages/cli/src/commands/install.ts:399-586`) so `prpm.lock` contains an entry with `githubSource` (verify by inspecting the `prpm.lock` produced by a GitHub install).
2. Run the CLI to reinstall from the lockfile path: call `handleInstall(packageSpec, ...)` with `packageSpec` equal to the `packageId` recorded in `prpm.lock` (entry lookup occurs at `packages/cli/src/commands/install.ts:723-731`; the lockfile read at 726-727).
3. Execution reaches the GitHub-spec check at `packages/cli/src/commands/install.ts:687-701` (the existing snippet). Because the `packageSpec` string is the `packageId` (not a `github:...` spec or URL), `looksLikeGitHubSpec(packageSpec)` returns false and the code continues without checking the lockfile `githubSource`.
4. Later, `handleInstall` treats the package as a registry package and calls `getRegistryClient(...).getPackage(packageId)` (`packages/cli/src/commands/install.ts:834-856`). The registry lookup will either fail or return a different package, so the CLI will attempt a registry-based install instead of re-downloading the original GitHub repo at the recorded commit SHA.
5. Observe the incorrect installation: the installed content differs from the original GitHub-sourced package, or installation fails because the registry package doesn't exist. This reproduces the logic bug described.

**Path:** `packages/cli/src/commands/install.ts` · **Line:** 701
CodeAnt AI incremental review completed.
14 issues found across 95 files (changes from recent commits).
Note: This PR contains a large number of files. cubic only reviews up to 75 files per PR, so some files may not have been reviewed.
<file name=".claude/rules/types.md">
<violation number="1">
P3: The `Format` union example omits `"windsurf"`, yet the subsequent `Record<Format, …>` example includes a `windsurf` key; as written, the sample code will not compile. Update the union so every key used in later examples is part of `Format`.</violation>
</file>
<file name=".github/workflows/deploy.yml">
<violation number="1">
P2: Writing newline-separated `invalidation_paths` with plain `echo` truncates the GitHub output to the first path, so CloudFront invalidation skips most changed files.</violation>
</file>
<file name=".claude/skills/creating-claude-commands/SKILL.md">
<violation number="1">
P2: The Quick Commit example repeats the type in the commit message and cannot produce the documented `type(scope): message` format because it commits `$1: $ARGUMENTS` while only accepting `<type> <message>`. Use just the message argument instead of `$ARGUMENTS` (or update the arguments to include a scope) so the generated commit follows the conventional commit spec.</violation>
</file>
<file name="packages/converters/schemas/amp-command.schema.json">
<violation number="1">
P1: `frontmatter.additionalProperties` is set to false while only `description` is defined, so valid Amp command fields like `name` and `argument-hint` will be rejected.</violation>
</file>
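A sketch of the corrected `frontmatter` object: keep `additionalProperties: false` but declare the fields the review names as valid (`name`, `argument-hint`). The surrounding schema is abbreviated; the real file has more keys:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "properties": {
    "frontmatter": {
      "type": "object",
      "properties": {
        "description": { "type": "string" },
        "name": { "type": "string" },
        "argument-hint": { "type": "string" }
      },
      "additionalProperties": false
    }
  }
}
```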
<file name="packages/types/src/package.ts">
<violation number="1">
P2: `Subtype` now includes "snippet", but no format in `FORMAT_SUBTYPES` allows that subtype, so every snippet package will fail validation.</violation>
</file>
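A hypothetical sketch of the relationship the review describes: once `Subtype` gains `"snippet"`, at least one entry in `FORMAT_SUBTYPES` must list it, or validation rejects every snippet package. Format names and the validator are illustrative, not the real `packages/types` code:

```typescript
type Subtype = "rule" | "skill" | "command" | "snippet";

// If no format lists "snippet" here, every snippet package fails validation.
const FORMAT_SUBTYPES: Record<string, Subtype[]> = {
  claude: ["rule", "skill", "command", "snippet"],
  cursor: ["rule", "snippet"],
};

function isValidSubtype(format: string, subtype: Subtype): boolean {
  return FORMAT_SUBTYPES[format]?.includes(subtype) ?? false;
}
```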
<file name=".claude/skills/creating-claude-rules/SKILL.md">
<violation number="1">
P3: Quote the single-pattern example to stay consistent with the documented requirement that glob patterns must be quoted.</violation>
</file>
<file name=".claude/rules/converters.md">
<violation number="1">
P3: The `toFormat` example returns `content: result` without ever defining `result`, so the documented pattern is invalid TypeScript and will fail if copied. Provide or reference an actual converted value before returning the `ConversionResult`.</violation>
</file>
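A sketch of what the documented pattern should look like: define `result` before returning it. `ConversionResult` and the converter signature are assumed from the review, not copied from the rules file:

```typescript
interface ConversionResult {
  content: string;
  warnings: string[];
}

function toFormat(pkg: { body: string }): ConversionResult {
  // Actually compute the converted value before returning it.
  const result = `# Converted\n\n${pkg.body}`;
  return { content: result, warnings: [] };
}
```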
<file name="packages/converters/schemas/amp.schema.json">
<violation number="1">
P2: The schema never sets `"additionalProperties": false` at the root, so files with unsupported top-level keys still validate, defeating the purpose of the format validator.</violation>
</file>
<file name="packages/converters/src/taxonomy-utils.ts">
<violation number="1">
P2: `normalizeFormat` now matches any string containing the letters “amp”, so inputs like “example.md” or “sample rules” will be misclassified as the AMP format. Use a token/word-boundary check (or explicit prefixes such as `amp:`/`amp/`) so only true AMP identifiers are matched.</violation>
</file>
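A sketch of the token-boundary check the review suggests, so "amp" inside "example" or "sample" no longer matches. This is a standalone illustration, not the real `normalizeFormat`:

```typescript
// Match "amp" only as a whole token or with an explicit amp:/amp/ prefix,
// never as a substring of another word.
function isAmpFormat(input: string): boolean {
  const s = input.toLowerCase();
  return /(^|[^a-z])amp([^a-z]|$)/.test(s) || s.startsWith("amp:") || s.startsWith("amp/");
}
```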
<file name="packages/cli/src/commands/show.ts">
<violation number="1">
P2: `show` cannot display packages that are delivered as a gzipped single file (no tar) because it always streams into `tar.extract` and aborts on `TAR_BAD_ARCHIVE`, unlike the install path which falls back to treating the payload as a single file. Add the same fallback so single-file packages can be previewed.</violation>
<violation number="2">
P2: Version ranges/dist-tags (e.g. `^1.0.0` or `beta`) are passed straight to `getPackageVersion`, so `prpm show` fails for specs that `prpm install` accepts. Resolve semver ranges/tags to an exact version before downloading, just like the install command does.</violation>
<violation number="3">
P2: `--json` mode still emits the human-friendly console logs before the JSON payload, so the output is never valid JSON. Suppress or redirect the progress logs whenever JSON output is requested.</violation>
</file>
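The third finding has a dependency-free fix pattern: gate progress logs on JSON mode so stdout carries only the payload. A sketch with hypothetical function names (the real `show` command's structure may differ):

```typescript
// Logger that is silenced whenever --json output is requested.
function makeLogger(jsonMode: boolean) {
  return (msg: string) => {
    if (!jsonMode) console.log(msg);
  };
}

function show(pkg: { name: string; version: string }, jsonMode: boolean): string {
  const log = makeLogger(jsonMode);
  log(`Fetching ${pkg.name}...`); // suppressed under --json
  return jsonMode ? JSON.stringify(pkg) : `${pkg.name}@${pkg.version}`;
}
```

An alternative is routing progress messages to stderr, which keeps them visible while leaving stdout machine-parseable.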
<file name="packages/converters/src/cross-converters/mcp-transformer.ts">
<violation number="1">
P2: `%APPDATA%` resolves to AppData\Roaming, not to the home directory. Translating it to `${home}` during the Kiro→Gemini conversion rewrites paths to the wrong location and breaks any commands relying on the AppData subtree.</violation>
</file>
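A sketch of the corrected translation: `%APPDATA%` expands to the Roaming AppData directory, so the placeholder rewrite must preserve that subtree rather than collapsing it to `${home}`. The function name and placeholder convention are assumptions for illustration:

```typescript
// Keep the AppData subtree when translating Windows env vars to ${home}
// placeholders; %APPDATA% is not the home directory.
function translateWindowsVars(p: string): string {
  return p
    .replace(/%APPDATA%/g, "${home}/AppData/Roaming")
    .replace(/%USERPROFILE%/g, "${home}");
}
```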
<file name="packages/cli/src/commands/install.ts">
<violation number="1">
P2: Reinstall detection for snippets ignores their actual target file, so rerunning a snippet that defaults to CLAUDE.md (or any non‑AGENTS.md target) re-appends it instead of reporting it as already installed.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
@@ -78,6 +78,15 @@ import {
  type CanonicalPackage,
P2: Reinstall detection for snippets ignores their actual target file, so rerunning a snippet that defaults to CLAUDE.md (or any non‑AGENTS.md target) re-appends it instead of reporting it as already installed.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/cli/src/commands/install.ts, line 761:
<comment>Reinstall detection for snippets ignores their actual target file, so rerunning a snippet that defaults to CLAUDE.md (or any non‑AGENTS.md target) re-appends it instead of reporting it as already installed.</comment>
<file context>
@@ -750,23 +751,65 @@ export async function handleInstall(
+ let matchedKey: string | undefined;
+
+ // First, check for snippet installations at the requested location (or default AGENTS.md)
+ const snippetLocation = requestedLocation || 'AGENTS.md';
+ const snippetKey = getLockfileKey(packageId, targetFormat, snippetLocation);
+ if (lockfile.packages[snippetKey]) {
</file context>
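The file context above hard-codes `'AGENTS.md'` as the fallback. A sketch of the fix — derive the snippet's default target from the package itself so reinstall detection checks the file the snippet was actually appended to. The helper names mirror the diff; the `SnippetPkg` shape is hypothetical:

```typescript
interface SnippetPkg {
  defaultTarget?: string; // e.g. "CLAUDE.md" for Claude-format snippets
}

function getLockfileKey(id: string, format: string, location: string): string {
  return `${id}|${format}|${location}`;
}

// Prefer the explicitly requested location, then the package's own default
// target, and only then fall back to AGENTS.md.
function snippetKeyFor(id: string, format: string, pkg: SnippetPkg, requested?: string): string {
  const location = requested ?? pkg.defaultTarget ?? "AGENTS.md";
  return getLockfileKey(id, format, location);
}
```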
User description
Adds ability to install packages directly from GitHub repositories
without requiring a prpm.json manifest. Auto-discovers AI tool
configurations by scanning for known file patterns.
Supported syntax:
- prpm install owner/repo
- prpm install github:owner/repo
- prpm install owner/repo@tag
- prpm install owner/repo:path/to/file
Features:
- Downloads via the GitHub tarball API (no git required)
- Auto-discovers .cursor/rules, .claude/skills, CLAUDE.md, etc.
- Tracks the commit SHA in the lockfile for reproducibility
- Supports GITHUB_TOKEN for private repos
CodeAnt-AI Description
Install AI-tool packages directly from GitHub repositories with caching and discovery
What Changed
Impact
✅ Fewer GitHub API calls
✅ Shorter repeated installs from the same commit
✅ Reproducible GitHub-sourced installs (commit SHA recorded)