Merged
230 changes: 230 additions & 0 deletions .claude/skills/ably-codebase-review/SKILL.md
@@ -0,0 +1,230 @@
---
name: ably-codebase-review
description: "Review the entire Ably CLI codebase against the standards and conventions defined in the ably-new-command skill. Use this skill whenever asked to audit, review, check alignment, verify compliance, or find deviations from the command creation patterns — even casually (e.g., 'review the codebase', 'check if commands follow patterns', 'audit the codebase', 'find deviations', 'how well do we follow conventions'). Do NOT use for creating new commands or fixing individual bugs."
---

# Ably Codebase Review

This skill reviews the entire Ably CLI codebase against the conventions defined in the `ably-new-command` skill. It uses a combination of **LSP** for structural analysis (type hierarchies, call chains, symbol definitions) and **grep/read** for text-pattern checks, choosing the right tool for each check.

## Tool selection guide

**Use LSP when you need type-system awareness:**
- `goToDefinition` — resolve what base class a command actually extends (not just the import text)
- `findReferences` — trace all callers of `fail()`, `logJsonResult`, `logJsonEvent` through inheritance
Copilot AI Mar 11, 2026


The LSP tool selection guide mentions tracing all callers of fail(), logJsonResult, and logJsonEvent, but those symbols aren't present in the current codebase/skill standards (current base class exposes handleCommandError(...) and formatJsonOutput(...)). This section should reference the real entry points so the audit instructions remain executable.

Suggested change

```diff
-- `findReferences` — trace all callers of `fail()`, `logJsonResult`, `logJsonEvent` through inheritance
+- `findReferences` — trace all callers of `handleCommandError(...)`, `formatJsonOutput(...)`, and other base-class error/result handlers through inheritance
```

Contributor


@coderabbitai can you check if the above suggestion is correct? Seems we have updated the code to use newer utilities

Contributor Author


likewise: "these skills were based off the json work which wasn't in at the time, hence this comment is incorrect. Now that PR is in, it is correct when compared against main"

- `hover` — reveal resolved types, inherited members, full type hierarchy
- `documentSymbol` — list all methods/properties in a class structurally

**Use grep/read when a text match is sufficient:**
- `this.error()` calls in command files
- Component string casing in `this.fail()` and `logCliEvent()` calls
- `new Error(...)` wrapping in `this.fail()` calls (unnecessary — accepts strings)
- `shouldOutputJson` guards around `this.log()`
- `formatSuccess()` calls ending with `.`
- `formatProgress()` calls with manual `...`
- `chalk.cyan()` where `formatResource()` should be used
- Quoted resource names instead of `formatResource(name)`
- `--json` / `--pretty-json` in `static examples` arrays
- Flag spreads like `...productApiFlags`, `...durationFlag`, `...clientIdFlag`
- `waitUntilInterruptedOrTimeout` usage
- Missing test files (glob for them)
- `--duration` flag in subscribe test files
- Mock helper imports (`getMockAblyRealtime`, `getMockAblyRest`, `getMockConfigManager`)
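
The grep checks above can be prototyped as pure functions over file contents. This is an illustrative sketch only, not part of the Ably CLI codebase: each rule becomes a testable predicate over source text.

```typescript
// Illustrative sketch: two of the grep checks above expressed as pure
// functions over file contents, so each rule is easy to unit test.
// These helpers are NOT part of the Ably CLI codebase.

export interface Finding {
  line: number; // 1-based line number within the scanned source
  text: string; // trimmed offending line
}

function scan(source: string, pattern: RegExp): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, index) => {
    if (pattern.test(text)) {
      findings.push({ line: index + 1, text: text.trim() });
    }
  });
  return findings;
}

// Direct `this.error(` calls, which should funnel through `this.fail()`.
export const findDirectErrorCalls = (source: string): Finding[] =>
  scan(source, /this\.error\(/);

// `this.fail(new Error(...))` wrapping, unnecessary since fail() takes strings.
export const findWrappedFailCalls = (source: string): Finding[] =>
  scan(source, /this\.fail\(\s*new Error\(/);
```

Pure functions like these keep the regexes in one place, so a sweep agent can reuse the same patterns it would otherwise pass to grep.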

## Step 0: Verify LSP prerequisites

Before doing any analysis, check that the LSP tooling is available:

1. **Check for the LSP tool** — verify the `LSP` tool is available in your tool list. If it is not, tell the user:
> The LSP plugin is not installed. Install it by running:
> ```
> claude plugin add anthropic/claude-code-lsp
> ```
> Then restart Claude Code and re-run this skill.
Stop here — do not proceed without the LSP tool.
2. **Check for `typescript-language-server`** — run `which typescript-language-server` or `npx --yes typescript-language-server --version`. If the binary is not found, install it globally for the user:
```
npm install -g typescript-language-server typescript
```
Comment on lines +38 to +49

@coderabbitai coderabbitai bot Mar 11, 2026


⚠️ Potential issue | 🟡 Minor

Add languages to the fenced command blocks.

The prerequisite command snippets should use a language tag such as `sh` so markdown tooling and rendering treat them correctly.


Contributor


I didn't get what you mean here? Like we should say bash at the start of the block here?

Contributor Author


you can ignore this... claude is smart enough to figure this out


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

Confirm installation succeeded before proceeding.
Comment on lines +46 to +50

⚠️ Potential issue | 🟠 Major

Audit skill should not mutate the system — ask user to install missing prerequisite instead.

Step 0 Check 1 correctly halts execution when the LSP tool is missing, asking the user to install it. Step 0 Check 2 should follow the same pattern. A codebase audit should fail fast on missing prerequisites, not run npm install -g on the user's machine.

Suggested change

````diff
-2. **Check for `typescript-language-server`** — run `which typescript-language-server` or `npx --yes typescript-language-server --version`. If the binary is not found, install it globally for the user:
-   ```
-   npm install -g typescript-language-server typescript
-   ```
-   Confirm installation succeeded before proceeding.
+2. **Check for `typescript-language-server`** — run `which typescript-language-server` or `npx --yes typescript-language-server --version`. If the binary is not found, tell the user to install it:
+   ```sh
+   npm install -g typescript-language-server typescript
+   ```
+   Stop here and ask the user to re-run the skill once installation is complete.
````

## Step 1: Read the reference skill
Read these files to understand the canonical patterns:
- `.claude/skills/ably-new-command/SKILL.md` — the checklist and conventions
- `.claude/skills/ably-new-command/references/patterns.md` — implementation templates
- `.claude/skills/ably-new-command/references/testing.md` — test scaffolds
## Step 2: Spawn sweep agents
Launch these agents **in parallel**. Each agent gets a focused mandate and uses the appropriate tool (LSP or grep/read) per check.
### Agent 1: Base Class & Inheritance Sweep
**Goal:** Verify every command extends the correct base class per the skill's rules.
**Method (LSP — needs type resolution):**
1. Use `LSP documentSymbol` on each command file in `src/commands/` to find the class export
2. Use `LSP goToDefinition` on the base class reference (e.g., `AblyBaseCommand`, `ControlBaseCommand`, `ChatBaseCommand`, `SpacesBaseCommand`, `StatsBaseCommand`) to confirm it resolves to the correct file
3. Cross-reference: rooms/* commands should extend `ChatBaseCommand`, spaces/* should extend `SpacesBaseCommand`, stats/* should extend `StatsBaseCommand`, control API commands should extend `ControlBaseCommand`
4. Use `LSP hover` on the class declaration to see the full type hierarchy
**What to flag:** Commands extending the wrong base class, or commands that should use a specialized base class but use `AblyBaseCommand` directly (exception: REST-only commands in rooms/spaces that don't need realtime lifecycle). Also verify that topic index files (`src/commands/*/index.ts`, `src/commands/*/*/index.ts`) extend `BaseTopicCommand` with correct `topicName` and `commandGroup` properties.
### Agent 2: Error Handling Sweep
**Goal:** Verify all commands use `fail()` exclusively for errors, not `this.error()` or `chalk.red()`.
**Method (mixed):**
1. **LSP** `findReferences` on `fail` (from base-command.ts) to get every call site — confirms all commands funnel through `fail()`
2. **Grep** for `this\.error\(` in `src/commands/` to find direct error calls — simple text match is sufficient here
3. For any ambiguous matches, use **LSP** `hover` to confirm they're the oclif `this.error()` and not something else
4. **Grep** for `this\.fail\(` and check component strings are camelCase — single-word lowercase (`"room"`, `"auth"`), multi-word camelCase (`"channelPublish"`, `"roomPresenceSubscribe"`). Flag PascalCase like `"ChannelPublish"` or kebab-case like `"web-cli"`.
5. **Grep** for `this\.fail\(\s*new Error\(` — `this.fail()` accepts plain strings, so `new Error(...)` wrapping is unnecessary. Flag as a simplification opportunity.
**Reasoning guidance:**
- Base class files (`*-base-command.ts`) using `this.error()` are deviations — they should use `this.fail()` instead
- `src/commands/interactive.ts` is exempt (REPL mode)
- `src/commands/help.ts` extending `Command` directly is exempt (no `fail` available)
- `chalk.red("✗")` used as visual indicators (not error handling) is exempt
- Component strings must be camelCase for consistency in verbose logs and JSON envelopes
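
The camelCase component rule can be captured as a small predicate. This is the casing rule as code, for illustration only, not a repo API:

```typescript
// Illustrative predicate for the component-string rule: single-word
// lowercase ("room") or multi-word camelCase ("channelPublish").
// Rejects PascalCase ("ChannelPublish") and kebab-case ("web-cli").
export function isValidComponent(name: string): boolean {
  // Must start lowercase and contain only letters/digits:
  // no hyphens, underscores, or leading capitals.
  return /^[a-z][a-zA-Z0-9]*$/.test(name);
}
```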
Comment on lines +75 to +92

Copilot AI Mar 11, 2026


Agent 2 asserts that base class files using this.error() are deviations and "should throw new Error()" instead, but src/base-command.ts currently calls this.error() in multiple places (consistent with oclif usage) and there is no fail() API in the current standards. This guidance should be updated to reflect the actual error-handling conventions used by the base commands (or the code/standards need to be updated in lockstep).

### Agent 3: Output Formatting Sweep
**Goal:** Verify all human output uses the correct format helpers and is JSON-guarded.
**Method (grep/read — text patterns):**
1. **Grep** for `chalk\.cyan\(` in command files — should use `formatResource()` instead
2. **Grep** for `formatProgress(` and check matches for manual `...` appended
3. **Grep** for `formatSuccess(` and read the lines to check they end with `.`
4. **Grep** for `shouldOutputJson` to find all JSON-aware commands
5. **Read** command files and look for unguarded `this.log()` calls (not inside `if (!this.shouldOutputJson(flags))`)
6. **Grep** for quoted resource names patterns like `"${` or `'${` near `channel`, `name`, `app` variables — should use `formatResource()`
**Method (LSP — for completeness mapping):**
7. Use `LSP findReferences` on `shouldOutputJson` to get the definitive list of all commands that check for JSON mode — cross-reference against the full command list to find commands missing JSON guards
**Reasoning guidance:**
- List commands don't use `formatSuccess()` (no action to confirm) — this is correct, not a deviation
- `chalk.red("✗")` / `chalk.green("✓")` as visual indicators in test/bench output is acceptable
- `chalk.yellow("Warning: ...")` should use `formatWarning()` instead — `formatWarning` adds the `⚠` symbol automatically and "Warning:" prefix is unnecessary
### Agent 4: Flag Architecture Sweep
**Goal:** Verify every command uses the correct flag sets per the skill rules.
**Method (grep — text patterns):**
1. **Grep** for `productApiFlags`, `clientIdFlag`, `durationFlag`, `rewindFlag`, `timeRangeFlags`, `ControlBaseCommand.globalFlags` in command files to map which commands use which flag sets
2. Cross-reference against the rules:
- Product API commands must use `productApiFlags`
- Subscribe/stream commands must have `durationFlag`
- Subscribe commands with replay must have `rewindFlag`
- History/stats commands must have `timeRangeFlags`
- Commands creating realtime connections must have `clientIdFlag`
- Control API commands must use `ControlBaseCommand.globalFlags`
**Method (LSP — for ambiguous cases):**
3. Use `LSP goToDefinition` on flag spread references to confirm they resolve to `src/flags.ts` (not a local redefinition)
**Reasoning guidance:**
- A command that creates a realtime client but doesn't have `clientIdFlag` is a deviation
- A non-subscribe command having `durationFlag` is suspicious but might be valid (e.g., presence enter)
- Control API commands should NOT have `productApiFlags`
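
The flag-spread convention this sweep audits can be sketched as follows. The flag-set names come from the skill; their shapes here are simplified assumptions (the real definitions live in `src/flags.ts`):

```typescript
// Simplified stand-ins for the shared flag sets (assumed shapes,
// for illustration only).
const durationFlag = {
  duration: { description: "Exit after this many seconds" },
};

const clientIdFlag = {
  "client-id": { description: "Client ID to use for the operation" },
};

// A subscribe command spreads the shared sets rather than redefining
// them locally, which is what makes grepping for `...durationFlag`
// a reliable check.
export const subscribeCommandFlags = {
  ...durationFlag,
  ...clientIdFlag,
};
```

A local redefinition would still satisfy the grep, which is why step 3 uses `goToDefinition` to confirm the spread resolves to `src/flags.ts`.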
### Agent 5: JSON Output & Envelope Sweep
**Goal:** Verify all commands produce correct JSON output when `--json` is used.
**Method (mixed):**
1. **LSP** `findReferences` on `logJsonResult` and `logJsonEvent` to get the definitive list of all call sites — every leaf command should use one of these
2. **Grep** for `formatJsonRecord` in command files (not base-command.ts) — direct usage should be flagged as needing migration to `logJsonResult`/`logJsonEvent`
3. **Grep** for `shouldOutputJson` in command files to find all JSON-aware commands
4. Cross-reference: every leaf command should appear in both the `logJsonResult`/`logJsonEvent` list and the `shouldOutputJson` list
5. **Read** streaming commands to verify they use `logJsonEvent`, one-shot commands use `logJsonResult`
**Reasoning guidance:**
- Commands that ONLY have human output (no JSON path) are deviations
- Direct `formatJsonRecord` usage in command files should use `logJsonResult`/`logJsonEvent` instead
- Topic index commands (showing help) don't need JSON output
Comment on lines +135 to +150

Copilot AI Mar 11, 2026


Agent 5's JSON sweep is built around logJsonResult/logJsonEvent and formatJsonRecord, but those helpers don't exist in the current repo (commands use formatJsonOutput(...)). As a result, the suggested LSP/grep workflows won't find the real JSON output code paths; update this section to target the current helpers (or update the codebase + ably-new-command skill to introduce these APIs together).

Contributor


@coderabbitai can you check if above suggestion is correct?

Contributor Author


these skills were based off the json work which wasn't in at the time, hence this comment is incorrect. Now that PR is in, it is correct when compared against main

### Agent 6: Test Pattern Sweep
**Goal:** Verify test files follow the skill's testing conventions.
**Method (grep/glob — text patterns and file matching):**
1. **Glob** for each command in `src/commands/` and check if a corresponding test file exists at `test/unit/commands/`
2. **Grep** for `describe(` in test files to check for required describe blocks: `help`, `argument validation`, `functionality`, `flags`, `error handling`
3. **Grep** for `--duration` in subscribe test files
4. **Grep** for `getMockAblyRealtime`, `getMockAblyRest`, `getMockConfigManager` in test files to verify correct mock usage per command type
5. **Grep** for `--api-key`, `--token`, `--access-token` in unit test files — these should not use CLI auth flags
**Reasoning guidance:**
- Missing test files are deviations but may be documented as known gaps
- Missing describe blocks in existing tests are deviations but may be lower priority
- Subscribe tests that auto-exit via mocked callbacks (without `--duration`) may be acceptable
### Agent 7: Lifecycle & Convention Sweep
**Goal:** Verify commands use the correct lifecycle helpers and include required metadata.
**Method (grep/read — text patterns):**
1. **Grep** for `waitUntilInterruptedOrTimeout` in command files — should use `this.waitAndTrackCleanup()` instead
2. **Grep** for `setupChannelStateLogging` in subscribe commands (rooms/*, spaces/*) — flag those that don't call it
3. **Read** command files and check `static examples` arrays for `--json` or `--pretty-json` examples — flag leaf commands that have examples but no JSON variant
4. **Compare** skill templates (`patterns.md`, `SKILL.md`, `testing.md` — already read in Step 1) against actual codebase method names/imports — flag any outdated patterns
**Method (LSP — for base class verification):**
5. If a subscribe command doesn't call `setupChannelStateLogging` directly, use `LSP goToDefinition` on the base class to check if it's handled there
**Reasoning guidance:**
- `waitUntilInterruptedOrTimeout` in bench commands may be acceptable (different lifecycle)
- Missing `setupChannelStateLogging` in rooms/spaces may be handled by `ChatBaseCommand`/`SpacesBaseCommand` — check the base class
- Topic index commands and `help.ts` don't need `--json` examples
- Skill template accuracy issues are low-effort, high-value fixes
## Step 3: Collect and reason about findings
After all agents complete, compile and **output** findings directly to the user. For each deviation:
1. **State the rule** — which skill checklist item or pattern is violated
2. **State the evidence** — which file, which line, what was found (and which tool was used)
3. **Reason about it** — is this a true deviation, an acceptable exception, or a false positive?
4. **Suggest the fix** — what would the ideal change be to align with the skill
### True deviation criteria
A finding is a **true deviation** if:
- The skill explicitly states a rule and the code violates it
- There is no documented exception for this case
- The fix would not break functionality or require architectural changes beyond the scope
A finding is **NOT a true deviation** if:
- The code has a legitimate reason to differ (e.g., REPL mode, help command without base class access)
- The skill doesn't actually cover this case
- The "violation" is in a base class where the pattern appropriately differs from command-level code
## Output format
Present findings directly to the user in this structure:
### Summary table
| Category | Deviations Found | True Deviations | Exceptions |
|----------|-----------------|-----------------|------------|
| Base Class | N | N | N |
| Error Handling | N | N | N |
| Output Formatting | N | N | N |
| Flag Architecture | N | N | N |
| JSON Output | N | N | N |
| Lifecycle & Conventions | N | N | N |
| Test Patterns | N | N | N |
### Findings by category
For each finding:
- **Rule:** [The skill rule being checked]
- **Evidence:** [File:line, tool used (LSP/grep/read), what was found]
- **Reasoning:** [Why this is/isn't a true deviation]
- **Suggested Fix:** [What to change]
- **Verdict:** TRUE DEVIATION | ACCEPTABLE EXCEPTION | FALSE POSITIVE
19 changes: 16 additions & 3 deletions .claude/skills/ably-new-command/SKILL.md
@@ -113,6 +113,10 @@ static flags = {
};
```

**Control API helper methods:**
- `await this.requireAppId(flags)` — resolves and validates the app ID, returns `Promise<string>` (non-nullable). Calls `this.fail()` internally if no app found — no manual null check needed.
- `await this.runControlCommand(flags, api => api.method(appId))` — creates the Control API client, executes the call, and handles errors in one step. Returns `Promise<T>` (non-nullable). Useful for single API calls; for multi-step flows, use `this.createControlApi(flags)` directly.
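
A rough sketch of how these two helpers fit together, using a stubbed Control API. The method names come from the text above; the types and error handling here are assumptions (the real helpers route errors through `this.fail()`):

```typescript
// Illustrative composition of requireAppId / runControlCommand with a
// stubbed Control API. Types and error handling are assumptions, not
// the real base-class implementation.
interface ControlApi {
  listQueues(appId: string): Promise<string[]>;
}

export class ControlCommandSketch {
  constructor(private api: ControlApi, private appId?: string) {}

  // Resolves the app ID or halts; callers never need a null check.
  async requireAppId(): Promise<string> {
    if (!this.appId) {
      // Real helper calls this.fail() here; a throw stands in for it.
      throw new Error('No app specified. Use --app or "ably apps switch".');
    }
    return this.appId;
  }

  // One-step helper for a single Control API call.
  async runControlCommand<T>(
    call: (api: ControlApi, appId: string) => Promise<T>,
  ): Promise<T> {
    const appId = await this.requireAppId();
    return call(this.api, appId);
  }
}
```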

**When to include `clientIdFlag`:** Add `...clientIdFlag` whenever the user might want to control which client identity performs the operation. This includes: presence enter/subscribe, spaces members, typing, cursors, publish, and any mutation where permissions may depend on the client (update, delete, annotate). The reason is that users may want to test auth scenarios — e.g., "can client B update client A's message?" — so they need the ability to set their client ID.

For history commands, also use `timeRangeFlags`:
@@ -295,7 +299,7 @@ if (!appId) {
}
```

**In base class utility methods** (e.g., `createControlApi`, `createAblyRealtimeClient`, `parseJsonFlag`), use `throw new Error()`. These methods return values, so they can't call `fail`. The thrown error is caught by the command's try-catch and routed through `fail`.
**In base class utility methods** (e.g., `createControlApi`, `createAblyRealtimeClient`, `parseJsonFlag`), use `this.fail()` for fatal errors — the same pattern as in command `run()` methods. The `fail()` method returns `never`, so TypeScript knows execution stops and won't require a return value after the call.

**Do NOT use `this.error()` directly** — it is an internal implementation detail of `fail`. Calling `this.error()` directly skips event logging and doesn't respect `--json` mode.
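
A minimal sketch of why the `never` return type matters for callers (a simplified stand-in, not the real base-class signature):

```typescript
// Simplified stand-in for the base class showing how fail(): never
// lets TypeScript treat the error branch as terminal.
class BaseCommandSketch {
  protected fail(message: string): never {
    // The real fail() also logs a CLI event and honors --json;
    // throwing is enough here to give TypeScript the `never` type.
    throw new Error(message);
  }

  resolveAppName(name: string | undefined): string {
    if (name === undefined) {
      this.fail("No app name provided.");
      // No return needed after fail(): `never` marks this branch
      // unreachable, so `name` is narrowed to `string` below.
    }
    return name;
  }
}
```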

@@ -364,6 +368,15 @@ pnpm exec eslint . # Lint (must be 0 errors)
pnpm test:unit # Run tests
```

## Maintaining this skill

This skill is the **source of truth** for command conventions. The review skills (`ably-review`, `ably-codebase-review`) and `CLAUDE.md` reference the same patterns. If you change conventions here — or if the underlying source code changes (base classes, helpers, flags, error handling) — update all three locations:
1. **This file** (`SKILL.md`) and its `references/` templates
2. **`ably-review/SKILL.md`** and **`ably-codebase-review/SKILL.md`** — review checks must match current patterns
3. **`.claude/CLAUDE.md`** — project-level docs that all agents see

See the "Keeping Skills Up to Date" section in `CLAUDE.md` for the full list of triggers.

## Checklist

- [ ] Command file at correct path under `src/commands/`
@@ -372,8 +385,8 @@ pnpm test:unit # Run tests
- [ ] `clientIdFlag` only if command needs client identity
- [ ] All human output wrapped in `if (!this.shouldOutputJson(flags))`
- [ ] Output helpers used correctly (`formatProgress`, `formatSuccess`, `formatWarning`, `formatListening`, `formatResource`, `formatTimestamp`, `formatClientId`, `formatEventType`, `formatLabel`, `formatHeading`, `formatIndex`)
- [ ] `success()` messages end with `.` (period)
- [ ] Resource names use `resource(name)`, never quoted
- [ ] `formatSuccess()` messages end with `.` (period)
- [ ] Resource names use `formatResource(name)`, never quoted
- [ ] JSON output uses `logJsonResult()` (one-shot) or `logJsonEvent()` (streaming), not direct `formatJsonRecord()`
- [ ] Subscribe/enter commands use `this.waitAndTrackCleanup(flags, component, flags.duration)` (not `waitUntilInterruptedOrTimeout`)
- [ ] Error handling uses `this.fail()` exclusively, not `this.error()` or `this.log(chalk.red(...))`
60 changes: 24 additions & 36 deletions .claude/skills/ably-new-command/references/patterns.md
@@ -141,26 +141,30 @@ For single-shot publish, REST is preferred (simpler, no connection overhead). Se
async run(): Promise<void> {
const { args, flags } = await this.parse(MyHistoryCommand);

const rest = await this.createAblyRestClient(flags);
if (!rest) return;
try {
const rest = await this.createAblyRestClient(flags);
if (!rest) return;

const channel = rest.channels.get(args.channel);
const channel = rest.channels.get(args.channel);

const historyParams = {
direction: flags.direction,
limit: flags.limit,
...(flags.start && { start: parseTimestamp(flags.start) }),
...(flags.end && { end: parseTimestamp(flags.end) }),
};

const history = await channel.history(historyParams);
const messages = history.items;

if (this.shouldOutputJson(flags)) {
this.logJsonResult({ messages }, flags);
} else {
this.log(formatSuccess(`Found ${messages.length} messages.`));
// Display each message
const historyParams = {
direction: flags.direction,
limit: flags.limit,
...(flags.start && { start: parseTimestamp(flags.start) }),
...(flags.end && { end: parseTimestamp(flags.end) }),
};

const history = await channel.history(historyParams);
const messages = history.items;

if (this.shouldOutputJson(flags)) {
this.logJsonResult({ messages }, flags);
} else {
this.log(formatSuccess(`Found ${messages.length} messages.`));
// Display each message
}
} catch (error) {
this.fail(error, flags, "history", { channel: args.channel });
}
}
```
@@ -306,15 +310,7 @@ Full Control API list command template:
async run(): Promise<void> {
const { flags } = await this.parse(MyListCommand);

const appId = await this.resolveAppId(flags);

if (!appId) {
this.fail(
'No app specified. Use --app flag or select an app with "ably apps switch"',
flags,
"listItems",
);
}
const appId = await this.requireAppId(flags);

try {
const controlApi = this.createControlApi(flags);
@@ -352,15 +348,7 @@ Key conventions for list output:
async run(): Promise<void> {
const { args, flags } = await this.parse(MyControlCommand);

const appId = await this.resolveAppId(flags);

if (!appId) {
this.fail(
'No app specified. Use --app flag or select an app with "ably apps switch"',
flags,
"createResource",
);
}
const appId = await this.requireAppId(flags);

try {
const controlApi = this.createControlApi(flags);