From 0427bca5bc5b68f18a19b5da148d60c0890e27f6 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Sun, 11 Jan 2026 21:51:04 +0000 Subject: [PATCH 1/6] Add CLAUDE.md for Claude Code guidance --- CLAUDE.md | 214 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 214 insertions(+) create mode 100644 CLAUDE.md diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000000..bf106b238a --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,214 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Build and Development Commands + +This is a pnpm 10.23.0 monorepo using Turborepo. Run commands from root with `pnpm run`. + +### Essential Commands + +```bash +# Start Docker services (PostgreSQL, Redis, Electric) +pnpm run docker + +# Run database migrations +pnpm run db:migrate + +# Seed the database (required for reference projects) +pnpm run db:seed + +# Build packages (required before running) +pnpm run build --filter webapp && pnpm run build --filter trigger.dev && pnpm run build --filter @trigger.dev/sdk + +# Run webapp in development mode (http://localhost:3030) +pnpm run dev --filter webapp + +# Build and watch for changes (CLI and packages) +pnpm run dev --filter trigger.dev --filter "@trigger.dev/*" +``` + +### Testing + +We use vitest exclusively. **Never mock anything** - use testcontainers instead. + +```bash +# Run all tests for a package +pnpm run test --filter webapp + +# Run a single test file (preferred - cd into directory first) +cd internal-packages/run-engine +pnpm run test ./src/engine/tests/ttl.test.ts --run + +# May need to build dependencies first +pnpm run build --filter @internal/run-engine +``` + +Test files go next to source files (e.g., `MyService.ts` → `MyService.test.ts`). + +#### Testcontainers for Redis/PostgreSQL + +```typescript +import { redisTest, postgresTest, containerTest } from "@internal/testcontainers"; + +// Redis only +redisTest("should use redis", async ({ redisOptions }) => { + /* ... */ +}); + +// PostgreSQL only +postgresTest("should use postgres", async ({ prisma }) => { + /* ... */ +}); + +// Both Redis and PostgreSQL +containerTest("should use both", async ({ prisma, redisOptions }) => { + /* ... */ +}); +``` + +### Changesets + +When modifying any public package (`packages/*` or `integrations/*`), add a changeset: + +```bash +pnpm run changeset:add +``` + +- Default to **patch** for bug fixes and minor changes +- Confirm with maintainers before selecting **minor** (new features) +- **Never** select major (breaking changes) without explicit approval + +## Architecture Overview + +### Apps + +- **apps/webapp**: Remix 2.1.0 app - main API, dashboard, and Docker image. Uses Express server. +- **apps/supervisor**: Node.js app handling task execution, interfacing with Docker/Kubernetes. + +### Public Packages + +- **packages/trigger-sdk** (`@trigger.dev/sdk`): Main SDK +- **packages/cli-v3** (`trigger.dev`): CLI package +- **packages/core** (`@trigger.dev/core`): Shared code between SDK and webapp. Import subpaths only (never root). 
+- **packages/build**: Build extensions and types +- **packages/react-hooks**: React hooks for realtime and triggering +- **packages/redis-worker** (`@trigger.dev/redis-worker`): Custom Redis-based background job system + +### Internal Packages + +- **internal-packages/database** (`@trigger.dev/database`): Prisma 6.14.0 client and schema +- **internal-packages/clickhouse** (`@internal/clickhouse`): ClickHouse client and schema migrations +- **internal-packages/run-engine** (`@internal/run-engine`): "Run Engine 2.0" - run lifecycle management +- **internal-packages/redis** (`@internal/redis`): Redis client creation utilities +- **internal-packages/testcontainers** (`@internal/testcontainers`): Test helpers for Redis/PostgreSQL containers +- **internal-packages/zodworker** (`@internal/zodworker`): Graphile-worker wrapper (being replaced by redis-worker) + +### Reference Projects + +The `references/` directory contains test workspaces for developing and testing new SDK and platform features. Use these projects (e.g., `references/hello-world`) to manually test changes to the CLI, SDK, core packages, and webapp before submitting PRs. + +## Webapp Development + +### Key Locations + +- Trigger API: `apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts` +- Batch trigger: `apps/webapp/app/routes/api.v1.tasks.batch.ts` +- Prisma setup: `apps/webapp/app/db.server.ts` +- Run engine config: `apps/webapp/app/v3/runEngine.server.ts` +- Services: `apps/webapp/app/v3/services/**/*.server.ts` +- Presenters: `apps/webapp/app/v3/presenters/**/*.server.ts` +- OTEL endpoints: `apps/webapp/app/routes/otel.v1.logs.ts`, `otel.v1.traces.ts` + +### Environment Variables + +Access via `env` export from `apps/webapp/app/env.server.ts`, never `process.env` directly. + +For testable code, **never import env.server.ts** in test files. Pass configuration as options instead. Example pattern: + +- `realtimeClient.server.ts` (testable service) +- `realtimeClientGlobal.server.ts` (configuration) + +### Legacy vs Run Engine 2.0 + +The codebase is transitioning from the "legacy run engine" (spread across codebase) to "Run Engine 2.0" (`@internal/run-engine`). Focus on Run Engine 2.0 for new work. + +## Database Migrations (PostgreSQL) + +1. Edit `internal-packages/database/prisma/schema.prisma` +2. Create migration: + ```bash + cd internal-packages/database + pnpm run db:migrate:dev:create --name "add_new_column" + ``` +3. **Important**: Generated migration includes extraneous changes. Remove lines related to: + - `_BackgroundWorkerToBackgroundWorkerFile` + - `_BackgroundWorkerToTaskQueue` + - `_TaskRunToTaskRunTag` + - `_WaitpointRunConnections` + - `_completedWaitpoints` + - `SecretStore_key_idx` + - Various `TaskRun` indexes unless you added them +4. Apply migration: + ```bash + pnpm run db:migrate:deploy && pnpm run generate + ``` + +### Index Migration Rules + +- Indexes **must use CONCURRENTLY** to avoid table locks +- **CONCURRENTLY indexes must be in their own separate migration file** - they cannot be combined with other schema changes + +## ClickHouse Migrations + +ClickHouse migrations use Goose format and live in `internal-packages/clickhouse/schema/`. + +1. Create a new numbered SQL file (e.g., `010_add_new_column.sql`) +2. 
Use Goose markers: + + ```sql + -- +goose Up + ALTER TABLE trigger_dev.your_table + ADD COLUMN new_column String DEFAULT ''; + + -- +goose Down + ALTER TABLE trigger_dev.your_table + DROP COLUMN new_column; + ``` + +Follow naming conventions in `internal-packages/clickhouse/README.md`: + +- `raw_` prefix for input tables +- `_v1`, `_v2` suffixes for versioning +- `_mv_v1` suffix for materialized views + +## Writing Trigger.dev Tasks + +Always import from `@trigger.dev/sdk`. Never use `@trigger.dev/sdk/v3` or deprecated `client.defineJob` pattern. + +```typescript +import { task } from "@trigger.dev/sdk"; + +// Every task must be exported +export const myTask = task({ + id: "my-task", // Unique ID + run: async (payload: { message: string }) => { + // Task logic - no timeouts + }, +}); +``` + +## Testing with hello-world Reference Project + +First-time setup: + +1. Run `pnpm run db:seed` to seed the database (creates the hello-world project) +2. Build CLI: `pnpm run build --filter trigger.dev && pnpm i` +3. Authorize CLI: `cd references/hello-world && pnpm exec trigger login -a http://localhost:3030` + +Running: + +```bash +cd references/hello-world +pnpm exec trigger dev # or with --log-level debug +``` From dd51a197b0b6ddf7a36f2b5bed98de26f4fd560f Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Sun, 11 Jan 2026 22:03:43 +0000 Subject: [PATCH 2/6] Add trigger-dev-tasks skill for writing Trigger.dev tasks --- .claude/skills/trigger-dev-tasks/SKILL.md | 155 ++++++++ .../trigger-dev-tasks/advanced-tasks.md | 344 +++++++++++++++++ .../skills/trigger-dev-tasks/basic-tasks.md | 165 +++++++++ .claude/skills/trigger-dev-tasks/config.md | 346 ++++++++++++++++++ .claude/skills/trigger-dev-tasks/realtime.md | 244 ++++++++++++ .../trigger-dev-tasks/scheduled-tasks.md | 113 ++++++ .gitignore | 2 +- 7 files changed, 1368 insertions(+), 1 deletion(-) create mode 100644 .claude/skills/trigger-dev-tasks/SKILL.md create mode 100644 .claude/skills/trigger-dev-tasks/advanced-tasks.md create mode 100644 .claude/skills/trigger-dev-tasks/basic-tasks.md create mode 100644 .claude/skills/trigger-dev-tasks/config.md create mode 100644 .claude/skills/trigger-dev-tasks/realtime.md create mode 100644 .claude/skills/trigger-dev-tasks/scheduled-tasks.md diff --git a/.claude/skills/trigger-dev-tasks/SKILL.md b/.claude/skills/trigger-dev-tasks/SKILL.md new file mode 100644 index 0000000000..c6221ba287 --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/SKILL.md @@ -0,0 +1,155 @@ +--- +name: trigger-dev-tasks +description: Use this skill when writing, designing, or optimizing Trigger.dev background tasks and workflows. This includes creating reliable async tasks, implementing AI workflows, setting up scheduled jobs, structuring complex task hierarchies with subtasks, configuring build extensions for tools like ffmpeg or Puppeteer/Playwright, and handling task schemas with Zod validation. +allowed-tools: Read, Write, Edit, Glob, Grep, Bash +--- + +# Trigger.dev Task Expert + +You are an expert Trigger.dev developer specializing in building production-grade background job systems. Tasks deployed to Trigger.dev run in Node.js 21+ and use the `@trigger.dev/sdk` package. + +## Critical Rules + +1. **Always use `@trigger.dev/sdk`** - Never use `@trigger.dev/sdk/v3` or deprecated `client.defineJob` pattern +2. **Never use `node-fetch`** - Use the built-in `fetch` function +3. **Export all tasks** - Every task must be exported, including subtasks +4. 
**Never wrap wait/trigger calls in Promise.all** - `triggerAndWait`, `batchTriggerAndWait`, and `wait.*` calls cannot be wrapped in `Promise.all` or `Promise.allSettled` + +## Basic Task Pattern + +```ts +import { task } from "@trigger.dev/sdk"; + +export const processData = task({ + id: "process-data", + retry: { + maxAttempts: 10, + factor: 1.8, + minTimeoutInMs: 500, + maxTimeoutInMs: 30_000, + }, + run: async (payload: { userId: string; data: any[] }) => { + console.log(`Processing ${payload.data.length} items`); + return { processed: payload.data.length }; + }, +}); +``` + +## Schema Task (with validation) + +```ts +import { schemaTask } from "@trigger.dev/sdk"; +import { z } from "zod"; + +export const validatedTask = schemaTask({ + id: "validated-task", + schema: z.object({ + name: z.string(), + email: z.string().email(), + }), + run: async (payload) => { + // Payload is automatically validated and typed + return { message: `Hello ${payload.name}` }; + }, +}); +``` + +## Triggering Tasks + +### From Backend Code (type-only import to prevent dependency leakage) + +```ts +import { tasks } from "@trigger.dev/sdk"; +import type { processData } from "./trigger/tasks"; + +const handle = await tasks.trigger("process-data", { + userId: "123", + data: [{ id: 1 }], +}); +``` + +### From Inside Tasks + +```ts +export const parentTask = task({ + id: "parent-task", + run: async (payload) => { + // Trigger and wait - returns Result object, NOT direct output + const result = await childTask.triggerAndWait({ data: "value" }); + if (result.ok) { + console.log("Output:", result.output); + } else { + console.error("Failed:", result.error); + } + + // Or unwrap directly (throws on error) + const output = await childTask.triggerAndWait({ data: "value" }).unwrap(); + }, +}); +``` + +## Idempotency (Critical for Retries) + +Always use idempotency keys when triggering tasks from inside other tasks: + +```ts +import { idempotencyKeys } from "@trigger.dev/sdk"; + +export const paymentTask = task({ + id: "process-payment", + run: async (payload: { orderId: string }) => { + // Scoped to current run - survives retries + const key = await idempotencyKeys.create(`payment-${payload.orderId}`); + + await chargeCustomer.trigger(payload, { + idempotencyKey: key, + idempotencyKeyTTL: "24h", + }); + }, +}); +``` + +## Trigger Options + +```ts +await myTask.trigger(payload, { + delay: "1h", // Delay execution + ttl: "10m", // Cancel if not started within TTL + idempotencyKey: key, + queue: "my-queue", + machine: "large-1x", // micro, small-1x, small-2x, medium-1x, medium-2x, large-1x, large-2x + maxAttempts: 3, + tags: ["user_123"], // Max 10 tags +}); +``` + +## Machine Presets + +| Preset | vCPU | Memory | +|-------------|------|--------| +| micro | 0.25 | 0.25GB | +| small-1x | 0.5 | 0.5GB | +| small-2x | 1 | 1GB | +| medium-1x | 1 | 2GB | +| medium-2x | 2 | 4GB | +| large-1x | 4 | 8GB | +| large-2x | 8 | 16GB | + +## Design Principles + +1. **Break complex workflows into subtasks** that can be independently retried and made idempotent +2. **Don't over-complicate** - Sometimes `Promise.allSettled` inside a single task is better than many subtasks (each task has dedicated process and is charged by millisecond) +3. **Always configure retries** - Set appropriate `maxAttempts` based on the operation +4. **Use idempotency keys** - Especially for payment/critical operations +5. **Group related subtasks** - Keep subtasks only used by one parent in the same file, don't export them +6. 
**Use logger** - Log at key execution points with `logger.info()`, `logger.error()`, etc. + +## Reference Documentation + +For detailed documentation on specific topics, read these files: + +- `basic-tasks.md` - Task basics, triggering, waits +- `advanced-tasks.md` - Tags, queues, concurrency, metadata, error handling +- `scheduled-tasks.md` - Cron schedules, declarative and imperative +- `realtime.md` - Real-time subscriptions, streams, React hooks +- `config.md` - trigger.config.ts, build extensions (Prisma, Playwright, FFmpeg, etc.) diff --git a/.claude/skills/trigger-dev-tasks/advanced-tasks.md b/.claude/skills/trigger-dev-tasks/advanced-tasks.md new file mode 100644 index 0000000000..5024563196 --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/advanced-tasks.md @@ -0,0 +1,344 @@ +# Trigger.dev Advanced Tasks + +**Advanced patterns and features for writing tasks** + +## Tags & Organization + +```ts +import { task, tags } from "@trigger.dev/sdk"; + +export const processUser = task({ + id: "process-user", + run: async (payload: { userId: string; orgId: string }, { ctx }) => { + // Add tags during execution + await tags.add(`user_${payload.userId}`); + await tags.add(`org_${payload.orgId}`); + + return { processed: true }; + }, +}); + +// Trigger with tags +await processUser.trigger( + { userId: "123", orgId: "abc" }, + { tags: ["priority", "user_123", "org_abc"] } // Max 10 tags per run +); + +// Subscribe to tagged runs +for await (const run of runs.subscribeToRunsWithTag("user_123")) { + console.log(`User task ${run.id}: ${run.status}`); +} +``` + +**Tag Best Practices:** + +- Use prefixes: `user_123`, `org_abc`, `video:456` +- Max 10 tags per run, 1-64 characters each +- Tags don't propagate to child tasks automatically + +## Concurrency & Queues + +```ts +import { task, queue } from "@trigger.dev/sdk"; + +// Shared queue for related tasks +const emailQueue = queue({ + name: "email-processing", + concurrencyLimit: 5, // Max 5 emails processing simultaneously +}); + +// Task-level concurrency +export const oneAtATime = task({ + id: "sequential-task", + queue: { concurrencyLimit: 1 }, // Process one at a time + run: async (payload) => { + // Critical section - only one instance runs + }, +}); + +// Per-user concurrency +export const processUserData = task({ + id: "process-user-data", + run: async (payload: { userId: string }) => { + // Override queue with user-specific concurrency + await childTask.trigger(payload, { + queue: { + name: `user-${payload.userId}`, + concurrencyLimit: 2, + }, + }); + }, +}); + +export const emailTask = task({ + id: "send-email", + queue: emailQueue, // Use shared queue + run: async (payload: { to: string }) => { + // Send email logic + }, +}); +``` + +## Error Handling & Retries + +```ts +import { task, retry, AbortTaskRunError } from "@trigger.dev/sdk"; + +export const resilientTask = task({ + id: "resilient-task", + retry: { + maxAttempts: 10, + factor: 1.8, // Exponential backoff multiplier + minTimeoutInMs: 500, + maxTimeoutInMs: 30_000, + randomize: false, + }, + catchError: async ({ error, ctx }) => { + // Custom error handling + if (error.code === "FATAL_ERROR") { + throw new AbortTaskRunError("Cannot retry this error"); + } + + // Log error details + console.error(`Task ${ctx.task.id} failed:`, error); + + // Allow retry by returning nothing + return { retryAt: new Date(Date.now() + 60000) }; // Retry in 1 minute + }, + run: async (payload) => { + // Retry specific operations + const result = await retry.onThrow( + async () => { + return await 
unstableApiCall(payload); + }, + { maxAttempts: 3 } + ); + + // Conditional HTTP retries + const response = await retry.fetch("https://api.example.com", { + retry: { + maxAttempts: 5, + condition: (response, error) => { + return response?.status === 429 || response?.status >= 500; + }, + }, + }); + + return result; + }, +}); +``` + +## Machines & Performance + +```ts +export const heavyTask = task({ + id: "heavy-computation", + machine: { preset: "large-2x" }, // 8 vCPU, 16 GB RAM + maxDuration: 1800, // 30 minutes timeout + run: async (payload, { ctx }) => { + // Resource-intensive computation + if (ctx.machine.preset === "large-2x") { + // Use all available cores + return await parallelProcessing(payload); + } + + return await standardProcessing(payload); + }, +}); + +// Override machine when triggering +await heavyTask.trigger(payload, { + machine: { preset: "medium-1x" }, // Override for this run +}); +``` + +**Machine Presets:** + +- `micro`: 0.25 vCPU, 0.25 GB RAM +- `small-1x`: 0.5 vCPU, 0.5 GB RAM (default) +- `small-2x`: 1 vCPU, 1 GB RAM +- `medium-1x`: 1 vCPU, 2 GB RAM +- `medium-2x`: 2 vCPU, 4 GB RAM +- `large-1x`: 4 vCPU, 8 GB RAM +- `large-2x`: 8 vCPU, 16 GB RAM + +## Idempotency + +```ts +import { task, idempotencyKeys } from "@trigger.dev/sdk"; + +export const paymentTask = task({ + id: "process-payment", + retry: { + maxAttempts: 3, + }, + run: async (payload: { orderId: string; amount: number }) => { + // Automatically scoped to this task run, so if the task is retried, the idempotency key will be the same + const idempotencyKey = await idempotencyKeys.create(`payment-${payload.orderId}`); + + // Ensure payment is processed only once + await chargeCustomer.trigger(payload, { + idempotencyKey, + idempotencyKeyTTL: "24h", // Key expires in 24 hours + }); + }, +}); + +// Payload-based idempotency +import { createHash } from "node:crypto"; + +function createPayloadHash(payload: any): string { + const hash = createHash("sha256"); + hash.update(JSON.stringify(payload)); + return hash.digest("hex"); +} + +export const deduplicatedTask = task({ + id: "deduplicated-task", + run: async (payload) => { + const payloadHash = createPayloadHash(payload); + const idempotencyKey = await idempotencyKeys.create(payloadHash); + + await processData.trigger(payload, { idempotencyKey }); + }, +}); +``` + +## Metadata & Progress Tracking + +```ts +import { task, metadata } from "@trigger.dev/sdk"; + +export const batchProcessor = task({ + id: "batch-processor", + run: async (payload: { items: any[] }, { ctx }) => { + const totalItems = payload.items.length; + + // Initialize progress metadata + metadata + .set("progress", 0) + .set("totalItems", totalItems) + .set("processedItems", 0) + .set("status", "starting"); + + const results = []; + + for (let i = 0; i < payload.items.length; i++) { + const item = payload.items[i]; + + // Process item + const result = await processItem(item); + results.push(result); + + // Update progress + const progress = ((i + 1) / totalItems) * 100; + metadata + .set("progress", progress) + .increment("processedItems", 1) + .append("logs", `Processed item ${i + 1}/${totalItems}`) + .set("currentItem", item.id); + } + + // Final status + metadata.set("status", "completed"); + + return { results, totalProcessed: results.length }; + }, +}); + +// Update parent metadata from child task +export const childTask = task({ + id: "child-task", + run: async (payload, { ctx }) => { + // Update parent task metadata + metadata.parent.set("childStatus", "processing"); + 
metadata.root.increment("childrenCompleted", 1); + + return { processed: true }; + }, +}); +``` + +## Logging & Tracing + +```ts +import { task, logger } from "@trigger.dev/sdk"; + +export const tracedTask = task({ + id: "traced-task", + run: async (payload, { ctx }) => { + logger.info("Task started", { userId: payload.userId }); + + // Custom trace with attributes + const user = await logger.trace( + "fetch-user", + async (span) => { + span.setAttribute("user.id", payload.userId); + span.setAttribute("operation", "database-fetch"); + + const userData = await database.findUser(payload.userId); + span.setAttribute("user.found", !!userData); + + return userData; + }, + { userId: payload.userId } + ); + + logger.debug("User fetched", { user: user.id }); + + try { + const result = await processUser(user); + logger.info("Processing completed", { result }); + return result; + } catch (error) { + logger.error("Processing failed", { + error: error.message, + userId: payload.userId, + }); + throw error; + } + }, +}); +``` + +## Hidden Tasks + +```ts +// Hidden task - not exported, only used internally +const internalProcessor = task({ + id: "internal-processor", + run: async (payload: { data: string }) => { + return { processed: payload.data.toUpperCase() }; + }, +}); + +// Public task that uses hidden task +export const publicWorkflow = task({ + id: "public-workflow", + run: async (payload: { input: string }) => { + // Use hidden task internally + const result = await internalProcessor.triggerAndWait({ + data: payload.input, + }); + + if (result.ok) { + return { output: result.output.processed }; + } + + throw new Error("Internal processing failed"); + }, +}); +``` + +## Best Practices + +- **Concurrency**: Use queues to prevent overwhelming external services +- **Retries**: Configure exponential backoff for transient failures +- **Idempotency**: Always use for payment/critical operations +- **Metadata**: Track progress for long-running tasks +- **Machines**: Match machine size to computational requirements +- **Tags**: Use consistent naming patterns for filtering +- **Error Handling**: Distinguish between retryable and fatal errors + +Design tasks to be stateless, idempotent, and resilient to failures. Use metadata for state tracking and queues for resource management. 
diff --git a/.claude/skills/trigger-dev-tasks/basic-tasks.md b/.claude/skills/trigger-dev-tasks/basic-tasks.md new file mode 100644 index 0000000000..5bf91d60b7 --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/basic-tasks.md @@ -0,0 +1,165 @@ +# Trigger.dev Basic Tasks + +**MUST use `@trigger.dev/sdk`, NEVER `client.defineJob`** + +## Basic Task + +```ts +import { task } from "@trigger.dev/sdk"; + +export const processData = task({ + id: "process-data", + retry: { + maxAttempts: 10, + factor: 1.8, + minTimeoutInMs: 500, + maxTimeoutInMs: 30_000, + randomize: false, + }, + run: async (payload: { userId: string; data: any[] }) => { + // Task logic - runs for long time, no timeouts + console.log(`Processing ${payload.data.length} items for user ${payload.userId}`); + return { processed: payload.data.length }; + }, +}); +``` + +## Schema Task (with validation) + +```ts +import { schemaTask } from "@trigger.dev/sdk"; +import { z } from "zod"; + +export const validatedTask = schemaTask({ + id: "validated-task", + schema: z.object({ + name: z.string(), + age: z.number(), + email: z.string().email(), + }), + run: async (payload) => { + // Payload is automatically validated and typed + return { message: `Hello ${payload.name}, age ${payload.age}` }; + }, +}); +``` + +## Triggering Tasks + +### From Backend Code + +```ts +import { tasks } from "@trigger.dev/sdk"; +import type { processData } from "./trigger/tasks"; + +// Single trigger +const handle = await tasks.trigger("process-data", { + userId: "123", + data: [{ id: 1 }, { id: 2 }], +}); + +// Batch trigger +const batchHandle = await tasks.batchTrigger("process-data", [ + { payload: { userId: "123", data: [{ id: 1 }] } }, + { payload: { userId: "456", data: [{ id: 2 }] } }, +]); +``` + +### From Inside Tasks (with Result handling) + +```ts +export const parentTask = task({ + id: "parent-task", + run: async (payload) => { + // Trigger and continue + const handle = await childTask.trigger({ data: "value" }); + + // Trigger and wait - returns Result object, NOT task output + const result = await childTask.triggerAndWait({ data: "value" }); + if (result.ok) { + console.log("Task output:", result.output); // Actual task return value + } else { + console.error("Task failed:", result.error); + } + + // Quick unwrap (throws on error) + const output = await childTask.triggerAndWait({ data: "value" }).unwrap(); + + // Batch trigger and wait + const results = await childTask.batchTriggerAndWait([ + { payload: { data: "item1" } }, + { payload: { data: "item2" } }, + ]); + + for (const run of results) { + if (run.ok) { + console.log("Success:", run.output); + } else { + console.log("Failed:", run.error); + } + } + }, +}); + +export const childTask = task({ + id: "child-task", + run: async (payload: { data: string }) => { + return { processed: payload.data }; + }, +}); +``` + +> Never wrap triggerAndWait or batchTriggerAndWait calls in a Promise.all or Promise.allSettled as this is not supported in Trigger.dev tasks. 
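+If you need to fan out over many items, the supported alternatives are a plain sequential loop or a single batch call. A minimal sketch (reusing `childTask` from above; `items` is a placeholder array):
+
+```ts
+// Avoid: await Promise.all(items.map((item) => childTask.triggerAndWait({ data: item })));
+
+// Supported: await each wait call in turn...
+for (const item of items) {
+  const result = await childTask.triggerAndWait({ data: item });
+  if (result.ok) console.log(result.output);
+}
+
+// ...or hand the whole set to one batch call
+const results = await childTask.batchTriggerAndWait(
+  items.map((item) => ({ payload: { data: item } }))
+);
+```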
+ +## Waits + +```ts +import { task, wait } from "@trigger.dev/sdk"; + +export const taskWithWaits = task({ + id: "task-with-waits", + run: async (payload) => { + console.log("Starting task"); + + // Wait for specific duration + await wait.for({ seconds: 30 }); + await wait.for({ minutes: 5 }); + await wait.for({ hours: 1 }); + await wait.for({ days: 1 }); + + // Wait until specific date + await wait.until({ date: new Date("2024-12-25") }); + + // Wait for token (from external system) + await wait.forToken({ + token: "user-approval-token", + timeoutInSeconds: 3600, // 1 hour timeout + }); + + console.log("All waits completed"); + return { status: "completed" }; + }, +}); +``` + +> Never wrap wait calls in a Promise.all or Promise.allSettled as this is not supported in Trigger.dev tasks. + +## Key Points + +- **Result vs Output**: `triggerAndWait()` returns a `Result` object with `ok`, `output`, `error` properties - NOT the direct task output +- **Type safety**: Use `import type` for task references when triggering from backend +- **Waits > 5 seconds**: Automatically checkpointed, don't count toward compute usage + +## NEVER Use (v2 deprecated) + +```ts +// BREAKS APPLICATION +client.defineJob({ + id: "job-id", + run: async (payload, io) => { + /* ... */ + }, +}); +``` + +Use SDK (`@trigger.dev/sdk`), check `result.ok` before accessing `result.output` diff --git a/.claude/skills/trigger-dev-tasks/config.md b/.claude/skills/trigger-dev-tasks/config.md new file mode 100644 index 0000000000..f6a4db1c4b --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/config.md @@ -0,0 +1,346 @@ +# Trigger.dev Configuration + +**Complete guide to configuring `trigger.config.ts` with build extensions** + +## Basic Configuration + +```ts +import { defineConfig } from "@trigger.dev/sdk"; + +export default defineConfig({ + project: "", // Required: Your project reference + dirs: ["./trigger"], // Task directories + runtime: "node", // "node", "node-22", or "bun" + logLevel: "info", // "debug", "info", "warn", "error" + + // Default retry settings + retries: { + enabledInDev: false, + default: { + maxAttempts: 3, + minTimeoutInMs: 1000, + maxTimeoutInMs: 10000, + factor: 2, + randomize: true, + }, + }, + + // Build configuration + build: { + autoDetectExternal: true, + keepNames: true, + minify: false, + extensions: [], // Build extensions go here + }, + + // Global lifecycle hooks + onStartAttempt: async ({ payload, ctx }) => { + console.log("Global task start"); + }, + onSuccess: async ({ payload, output, ctx }) => { + console.log("Global task success"); + }, + onFailure: async ({ payload, error, ctx }) => { + console.log("Global task failure"); + }, +}); +``` + +## Build Extensions + +### Database & ORM + +#### Prisma + +```ts +import { prismaExtension } from "@trigger.dev/build/extensions/prisma"; + +extensions: [ + prismaExtension({ + schema: "prisma/schema.prisma", + version: "5.19.0", // Optional: specify version + migrate: true, // Run migrations during build + directUrlEnvVarName: "DIRECT_DATABASE_URL", + typedSql: true, // Enable TypedSQL support + }), +]; +``` + +#### TypeScript Decorators (for TypeORM) + +```ts +import { emitDecoratorMetadata } from "@trigger.dev/build/extensions/typescript"; + +extensions: [ + emitDecoratorMetadata(), // Enables decorator metadata +]; +``` + +### Scripting Languages + +#### Python + +```ts +import { pythonExtension } from "@trigger.dev/build/extensions/python"; + +extensions: [ + pythonExtension({ + scripts: ["./python/**/*.py"], // Copy Python files + 
requirementsFile: "./requirements.txt", // Install packages + devPythonBinaryPath: ".venv/bin/python", // Dev mode binary + }), +]; + +// Usage in tasks +const result = await python.runInline(`print("Hello, world!")`); +const output = await python.runScript("./python/script.py", ["arg1"]); +``` + +### Browser Automation + +#### Playwright + +```ts +import { playwright } from "@trigger.dev/build/extensions/playwright"; + +extensions: [ + playwright({ + browsers: ["chromium", "firefox", "webkit"], // Default: ["chromium"] + headless: true, // Default: true + }), +]; +``` + +#### Puppeteer + +```ts +import { puppeteer } from "@trigger.dev/build/extensions/puppeteer"; + +extensions: [puppeteer()]; + +// Environment variable needed: +// PUPPETEER_EXECUTABLE_PATH: "/usr/bin/google-chrome-stable" +``` + +#### Lightpanda + +```ts +import { lightpanda } from "@trigger.dev/build/extensions/lightpanda"; + +extensions: [ + lightpanda({ + version: "latest", // or "nightly" + disableTelemetry: false, + }), +]; +``` + +### Media Processing + +#### FFmpeg + +```ts +import { ffmpeg } from "@trigger.dev/build/extensions/core"; + +extensions: [ + ffmpeg({ version: "7" }), // Static build, or omit for Debian version +]; + +// Automatically sets FFMPEG_PATH and FFPROBE_PATH +// Add fluent-ffmpeg to external packages if using +``` + +#### Audio Waveform + +```ts +import { audioWaveform } from "@trigger.dev/build/extensions/audioWaveform"; + +extensions: [ + audioWaveform(), // Installs Audio Waveform 1.1.0 +]; +``` + +### System & Package Management + +#### System Packages (apt-get) + +```ts +import { aptGet } from "@trigger.dev/build/extensions/core"; + +extensions: [ + aptGet({ + packages: ["ffmpeg", "imagemagick", "curl=7.68.0-1"], // Can specify versions + }), +]; +``` + +#### Additional NPM Packages + +Only use this for installing CLI tools, NOT packages you import in your code. + +```ts +import { additionalPackages } from "@trigger.dev/build/extensions/core"; + +extensions: [ + additionalPackages({ + packages: ["wrangler"], // CLI tools and specific versions + }), +]; +``` + +#### Additional Files + +```ts +import { additionalFiles } from "@trigger.dev/build/extensions/core"; + +extensions: [ + additionalFiles({ + files: ["wrangler.toml", "./assets/**", "./fonts/**"], // Glob patterns supported + }), +]; +``` + +### Environment & Build Tools + +#### Environment Variable Sync + +```ts +import { syncEnvVars } from "@trigger.dev/build/extensions/core"; + +extensions: [ + syncEnvVars(async (ctx) => { + // ctx contains: environment, projectRef, env + return [ + { name: "SECRET_KEY", value: await getSecret(ctx.environment) }, + { name: "API_URL", value: ctx.environment === "prod" ? 
"api.prod.com" : "api.dev.com" }, + ]; + }), +]; +``` + +#### ESBuild Plugins + +```ts +import { esbuildPlugin } from "@trigger.dev/build/extensions"; +import { sentryEsbuildPlugin } from "@sentry/esbuild-plugin"; + +extensions: [ + esbuildPlugin( + sentryEsbuildPlugin({ + org: process.env.SENTRY_ORG, + project: process.env.SENTRY_PROJECT, + authToken: process.env.SENTRY_AUTH_TOKEN, + }), + { placement: "last", target: "deploy" } // Optional config + ), +]; +``` + +## Custom Build Extensions + +```ts +import { defineConfig } from "@trigger.dev/sdk"; + +const customExtension = { + name: "my-custom-extension", + + externalsForTarget: (target) => { + return ["some-native-module"]; // Add external dependencies + }, + + onBuildStart: async (context) => { + console.log(`Build starting for ${context.target}`); + // Register esbuild plugins, modify build context + }, + + onBuildComplete: async (context, manifest) => { + console.log("Build complete, adding layers"); + // Add build layers, modify deployment + context.addLayer({ + id: "my-layer", + files: [{ source: "./custom-file", destination: "/app/custom" }], + commands: ["chmod +x /app/custom"], + }); + }, +}; + +export default defineConfig({ + project: "my-project", + build: { + extensions: [customExtension], + }, +}); +``` + +## Advanced Configuration + +### Telemetry + +```ts +import { PrismaInstrumentation } from "@prisma/instrumentation"; +import { OpenAIInstrumentation } from "@langfuse/openai"; + +export default defineConfig({ + // ... other config + telemetry: { + instrumentations: [new PrismaInstrumentation(), new OpenAIInstrumentation()], + exporters: [customExporter], // Optional custom exporters + }, +}); +``` + +### Machine & Performance + +```ts +export default defineConfig({ + // ... other config + defaultMachine: "large-1x", // Default machine for all tasks + maxDuration: 300, // Default max duration (seconds) + enableConsoleLogging: true, // Console logging in development +}); +``` + +## Common Extension Combinations + +### Full-Stack Web App + +```ts +extensions: [ + prismaExtension({ schema: "prisma/schema.prisma", migrate: true }), + additionalFiles({ files: ["./public/**", "./assets/**"] }), + syncEnvVars(async (ctx) => [...envVars]), +]; +``` + +### AI/ML Processing + +```ts +extensions: [ + pythonExtension({ + scripts: ["./ai/**/*.py"], + requirementsFile: "./requirements.txt", + }), + ffmpeg({ version: "7" }), + additionalPackages({ packages: ["wrangler"] }), +]; +``` + +### Web Scraping + +```ts +extensions: [ + playwright({ browsers: ["chromium"] }), + puppeteer(), + additionalFiles({ files: ["./selectors.json", "./proxies.txt"] }), +]; +``` + +## Best Practices + +- **Use specific versions**: Pin extension versions for reproducible builds +- **External packages**: Add modules with native addons to the `build.external` array +- **Environment sync**: Use `syncEnvVars` for dynamic secrets +- **File paths**: Use glob patterns for flexible file inclusion +- **Debug builds**: Use `--log-level debug --dry-run` for troubleshooting + +Extensions only affect deployment, not local development. Use `external` array for packages that shouldn't be bundled. 
diff --git a/.claude/skills/trigger-dev-tasks/realtime.md b/.claude/skills/trigger-dev-tasks/realtime.md new file mode 100644 index 0000000000..c1c4c5821a --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/realtime.md @@ -0,0 +1,244 @@ +# Trigger.dev Realtime + +**Real-time monitoring and updates for runs** + +## Core Concepts + +Realtime allows you to: + +- Subscribe to run status changes, metadata updates, and streams +- Build real-time dashboards and UI updates +- Monitor task progress from frontend and backend + +## Authentication + +### Public Access Tokens + +```ts +import { auth } from "@trigger.dev/sdk"; + +// Read-only token for specific runs +const publicToken = await auth.createPublicToken({ + scopes: { + read: { + runs: ["run_123", "run_456"], + tasks: ["my-task-1", "my-task-2"], + }, + }, + expirationTime: "1h", // Default: 15 minutes +}); +``` + +### Trigger Tokens (Frontend only) + +```ts +// Single-use token for triggering tasks +const triggerToken = await auth.createTriggerPublicToken("my-task", { + expirationTime: "30m", +}); +``` + +## Backend Usage + +### Subscribe to Runs + +```ts +import { runs, tasks } from "@trigger.dev/sdk"; + +// Trigger and subscribe +const handle = await tasks.trigger("my-task", { data: "value" }); + +// Subscribe to specific run +for await (const run of runs.subscribeToRun(handle.id)) { + console.log(`Status: ${run.status}, Progress: ${run.metadata?.progress}`); + if (run.status === "COMPLETED") break; +} + +// Subscribe to runs with tag +for await (const run of runs.subscribeToRunsWithTag("user-123")) { + console.log(`Tagged run ${run.id}: ${run.status}`); +} + +// Subscribe to batch +for await (const run of runs.subscribeToBatch(batchId)) { + console.log(`Batch run ${run.id}: ${run.status}`); +} +``` + +### Realtime Streams v2 + +```ts +import { streams, InferStreamType } from "@trigger.dev/sdk"; + +// 1. Define streams (shared location) +export const aiStream = streams.define({ + id: "ai-output", +}); + +export type AIStreamPart = InferStreamType; + +// 2. Pipe from task +export const streamingTask = task({ + id: "streaming-task", + run: async (payload) => { + const completion = await openai.chat.completions.create({ + model: "gpt-4", + messages: [{ role: "user", content: payload.prompt }], + stream: true, + }); + + const { waitUntilComplete } = aiStream.pipe(completion); + await waitUntilComplete(); + }, +}); + +// 3. Read from backend +const stream = await aiStream.read(runId, { + timeoutInSeconds: 300, + startIndex: 0, // Resume from specific chunk +}); + +for await (const chunk of stream) { + console.log("Chunk:", chunk); // Fully typed +} +``` + +## React Frontend Usage + +### Installation + +```bash +npm add @trigger.dev/react-hooks +``` + +### Triggering Tasks + +```tsx +"use client"; +import { useTaskTrigger, useRealtimeTaskTrigger } from "@trigger.dev/react-hooks"; +import type { myTask } from "../trigger/tasks"; + +function TriggerComponent({ accessToken }: { accessToken: string }) { + // Basic trigger + const { submit, handle, isLoading } = useTaskTrigger("my-task", { + accessToken, + }); + + // Trigger with realtime updates + const { + submit: realtimeSubmit, + run, + isLoading: isRealtimeLoading, + } = useRealtimeTaskTrigger("my-task", { accessToken }); + + return ( +
<div>
+      <button onClick={() => submit({ data: "value" })} disabled={isLoading}>
+        Trigger Task
+      </button>
+      <button onClick={() => realtimeSubmit({ data: "value" })} disabled={isRealtimeLoading}>
+        Trigger with Realtime Updates
+      </button>
+      {run && <div>Status: {run.status}</div>}
+    </div>
+ ); +} +``` + +### Subscribing to Runs + +```tsx +"use client"; +import { useRealtimeRun, useRealtimeRunsWithTag } from "@trigger.dev/react-hooks"; +import type { myTask } from "../trigger/tasks"; + +function SubscribeComponent({ runId, accessToken }: { runId: string; accessToken: string }) { + // Subscribe to specific run + const { run, error } = useRealtimeRun(runId, { + accessToken, + onComplete: (run) => { + console.log("Task completed:", run.output); + }, + }); + + // Subscribe to tagged runs + const { runs } = useRealtimeRunsWithTag("user-123", { accessToken }); + + if (error) return
<div>Error: {error.message}</div>;
+  if (!run) return <div>Loading...</div>;
+
+  return (
+    <div>
+      <div>Status: {run.status}</div>
+      <div>Progress: {run.metadata?.progress || 0}%</div>
+      {run.output && <div>Result: {JSON.stringify(run.output)}</div>}
+
+      <h3>Tagged Runs:</h3>
+      {runs.map((r) => (
+        <div key={r.id}>
+          {r.id}: {r.status}
+        </div>
+      ))}
+    </div>
+ ); +} +``` + +### Realtime Streams with React + +```tsx +"use client"; +import { useRealtimeStream } from "@trigger.dev/react-hooks"; +import { aiStream } from "../trigger/streams"; + +function StreamComponent({ runId, accessToken }: { runId: string; accessToken: string }) { + // Pass defined stream directly for type safety + const { parts, error } = useRealtimeStream(aiStream, runId, { + accessToken, + timeoutInSeconds: 300, + throttleInMs: 50, // Control re-render frequency + }); + + if (error) return
<div>Error: {error.message}</div>;
+  if (!parts) return <div>Loading...</div>;
+
+  const text = parts.join(""); // parts is typed as AIStreamPart[]
+
+  return <div>Streamed Text: {text}</div>
; +} +``` + +### Wait Tokens + +```tsx +"use client"; +import { useWaitToken } from "@trigger.dev/react-hooks"; + +function WaitTokenComponent({ tokenId, accessToken }: { tokenId: string; accessToken: string }) { + const { complete } = useWaitToken(tokenId, { accessToken }); + + return ; +} +``` + +## Run Object Properties + +Key properties available in run subscriptions: + +- `id`: Unique run identifier +- `status`: `QUEUED`, `EXECUTING`, `COMPLETED`, `FAILED`, `CANCELED`, etc. +- `payload`: Task input data (typed) +- `output`: Task result (typed, when completed) +- `metadata`: Real-time updatable data +- `createdAt`, `updatedAt`: Timestamps +- `costInCents`: Execution cost + +## Best Practices + +- **Use Realtime over SWR**: Recommended for most use cases due to rate limits +- **Scope tokens properly**: Only grant necessary read/trigger permissions +- **Handle errors**: Always check for errors in hooks and subscriptions +- **Type safety**: Use task types for proper payload/output typing +- **Cleanup subscriptions**: Backend subscriptions auto-complete, frontend hooks auto-cleanup diff --git a/.claude/skills/trigger-dev-tasks/scheduled-tasks.md b/.claude/skills/trigger-dev-tasks/scheduled-tasks.md new file mode 100644 index 0000000000..b314753497 --- /dev/null +++ b/.claude/skills/trigger-dev-tasks/scheduled-tasks.md @@ -0,0 +1,113 @@ +# Scheduled Tasks (Cron) + +Recurring tasks using cron. For one-off future runs, use the **delay** option. + +## Define a Scheduled Task + +```ts +import { schedules } from "@trigger.dev/sdk"; + +export const task = schedules.task({ + id: "first-scheduled-task", + run: async (payload) => { + payload.timestamp; // Date (scheduled time, UTC) + payload.lastTimestamp; // Date | undefined + payload.timezone; // IANA, e.g. "America/New_York" (default "UTC") + payload.scheduleId; // string + payload.externalId; // string | undefined + payload.upcoming; // Date[] + + payload.timestamp.toLocaleString("en-US", { timeZone: payload.timezone }); + }, +}); +``` + +> Scheduled tasks need at least one schedule attached to run. 
+ +## Attach Schedules + +**Declarative (sync on dev/deploy):** + +```ts +schedules.task({ + id: "every-2h", + cron: "0 */2 * * *", // UTC + run: async () => {}, +}); + +schedules.task({ + id: "tokyo-5am", + cron: { pattern: "0 5 * * *", timezone: "Asia/Tokyo", environments: ["PRODUCTION", "STAGING"] }, + run: async () => {}, +}); +``` + +**Imperative (SDK or dashboard):** + +```ts +await schedules.create({ + task: task.id, + cron: "0 0 * * *", + timezone: "America/New_York", // DST-aware + externalId: "user_123", + deduplicationKey: "user_123-daily", // updates if reused +}); +``` + +### Dynamic / Multi-tenant Example + +```ts +// /trigger/reminder.ts +export const reminderTask = schedules.task({ + id: "todo-reminder", + run: async (p) => { + if (!p.externalId) throw new Error("externalId is required"); + const user = await db.getUser(p.externalId); + await sendReminderEmail(user); + }, +}); +``` + +```ts +// app/reminders/route.ts +export async function POST(req: Request) { + const data = await req.json(); + return Response.json( + await schedules.create({ + task: reminderTask.id, + cron: "0 8 * * *", + timezone: data.timezone, + externalId: data.userId, + deduplicationKey: `${data.userId}-reminder`, + }) + ); +} +``` + +## Cron Syntax (no seconds) + +``` +* * * * * +| | | | └ day of week (0–7 or 1L–7L; 0/7=Sun; L=last) +| | | └── month (1–12) +| | └──── day of month (1–31 or L) +| └────── hour (0–23) +└──────── minute (0–59) +``` + +## When Schedules Won't Trigger + +- **Dev:** only when the dev CLI is running. +- **Staging/Production:** only for tasks in the **latest deployment**. + +## SDK Management + +```ts +await schedules.retrieve(id); +await schedules.list(); +await schedules.update(id, { cron: "0 0 1 * *", externalId: "ext", deduplicationKey: "key" }); +await schedules.deactivate(id); +await schedules.activate(id); +await schedules.del(id); +await schedules.timezones(); // list of IANA timezones +``` diff --git a/.gitignore b/.gitignore index 8267c9fbab..46ea0e45ea 100644 --- a/.gitignore +++ b/.gitignore @@ -61,6 +61,6 @@ apps/**/public/build /packages/core/src/package.json /packages/trigger-sdk/src/package.json /packages/python/src/package.json -.claude +**/.claude/settings.local.json .mcp.log .cursor/debug.log \ No newline at end of file From 36c16f93fb224a7b162cf7ff42e3e3ab210b2b03 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Sun, 11 Jan 2026 22:08:19 +0000 Subject: [PATCH 3/6] Add 4.3.0 rules with batch trigger v2 and debouncing features --- .claude/skills/trigger-dev-tasks/SKILL.md | 45 ++ .../trigger-dev-tasks/advanced-tasks.md | 143 +++++- .../skills/trigger-dev-tasks/basic-tasks.md | 38 +- rules/4.3.0/advanced-tasks.md | 485 ++++++++++++++++++ rules/4.3.0/basic-tasks.md | 199 +++++++ rules/manifest.json | 51 +- 6 files changed, 957 insertions(+), 4 deletions(-) create mode 100644 rules/4.3.0/advanced-tasks.md create mode 100644 rules/4.3.0/basic-tasks.md diff --git a/.claude/skills/trigger-dev-tasks/SKILL.md b/.claude/skills/trigger-dev-tasks/SKILL.md index c6221ba287..791c22c27e 100644 --- a/.claude/skills/trigger-dev-tasks/SKILL.md +++ b/.claude/skills/trigger-dev-tasks/SKILL.md @@ -120,9 +120,54 @@ await myTask.trigger(payload, { machine: "large-1x", // micro, small-1x, small-2x, medium-1x, medium-2x, large-1x, large-2x maxAttempts: 3, tags: ["user_123"], // Max 10 tags + debounce: { // Consolidate rapid triggers + key: "unique-key", + delay: "5s", + mode: "trailing", // "leading" (default) or "trailing" + }, }); ``` +## Debouncing + +Consolidate multiple triggers 
into a single execution: + +```ts +// Rapid triggers with same key = single execution +await myTask.trigger({ userId: "123" }, { + debounce: { + key: "user-123-update", + delay: "5s", + }, +}); + +// Trailing mode: use payload from LAST trigger +await myTask.trigger({ data: "latest" }, { + debounce: { + key: "my-key", + delay: "10s", + mode: "trailing", + }, +}); +``` + +Use cases: user activity updates, webhook deduplication, search indexing, notification batching. + +## Batch Triggering + +Up to 1,000 items per batch, 3MB per payload: + +```ts +const results = await myTask.batchTriggerAndWait([ + { payload: { userId: "1" } }, + { payload: { userId: "2" } }, +]); + +for (const result of results) { + if (result.ok) console.log(result.output); +} +``` + ## Machine Presets | Preset | vCPU | Memory | diff --git a/.claude/skills/trigger-dev-tasks/advanced-tasks.md b/.claude/skills/trigger-dev-tasks/advanced-tasks.md index 5024563196..32a00337f8 100644 --- a/.claude/skills/trigger-dev-tasks/advanced-tasks.md +++ b/.claude/skills/trigger-dev-tasks/advanced-tasks.md @@ -1,4 +1,4 @@ -# Trigger.dev Advanced Tasks +# Trigger.dev Advanced Tasks (v4) **Advanced patterns and features for writing tasks** @@ -36,6 +36,145 @@ for await (const run of runs.subscribeToRunsWithTag("user_123")) { - Max 10 tags per run, 1-64 characters each - Tags don't propagate to child tasks automatically +## Batch Triggering v2 + +Enhanced batch triggering with larger payloads and streaming ingestion. + +### Limits + +- **Maximum batch size**: 1,000 items (increased from 500) +- **Payload per item**: 3MB each (increased from 1MB combined) +- Payloads > 512KB automatically offload to object storage + +### Rate Limiting (per environment) + +| Tier | Bucket Size | Refill Rate | +|------|-------------|-------------| +| Free | 1,200 runs | 100 runs/10 sec | +| Hobby | 5,000 runs | 500 runs/5 sec | +| Pro | 5,000 runs | 500 runs/5 sec | + +### Concurrent Batch Processing + +| Tier | Concurrent Batches | +|------|-------------------| +| Free | 1 | +| Hobby | 10 | +| Pro | 10 | + +### Usage + +```ts +import { myTask } from "./trigger/myTask"; + +// Basic batch trigger (up to 1,000 items) +const runs = await myTask.batchTrigger([ + { payload: { userId: "user-1" } }, + { payload: { userId: "user-2" } }, + { payload: { userId: "user-3" } }, +]); + +// Batch trigger with wait +const results = await myTask.batchTriggerAndWait([ + { payload: { userId: "user-1" } }, + { payload: { userId: "user-2" } }, +]); + +for (const result of results) { + if (result.ok) { + console.log("Result:", result.output); + } +} + +// With per-item options +const batchHandle = await myTask.batchTrigger([ + { + payload: { userId: "123" }, + options: { + idempotencyKey: "user-123-batch", + tags: ["priority"], + }, + }, + { + payload: { userId: "456" }, + options: { + idempotencyKey: "user-456-batch", + }, + }, +]); +``` + +## Debouncing + +Consolidate multiple triggers into a single execution by debouncing task runs with a unique key and delay window. 
+ +### Use Cases + +- **User activity updates**: Batch rapid user actions into a single run +- **Webhook deduplication**: Handle webhook bursts without redundant processing +- **Search indexing**: Combine document updates instead of processing individually +- **Notification batching**: Group notifications to prevent user spam + +### Basic Usage + +```ts +await myTask.trigger( + { userId: "123" }, + { + debounce: { + key: "user-123-update", // Unique identifier for debounce group + delay: "5s", // Wait duration ("5s", "1m", or milliseconds) + }, + } +); +``` + +### Execution Modes + +**Leading Mode** (default): Uses payload/options from the first trigger; subsequent triggers only reschedule execution time. + +```ts +// First trigger sets the payload +await myTask.trigger({ action: "first" }, { + debounce: { key: "my-key", delay: "10s" } +}); + +// Second trigger only reschedules - payload remains "first" +await myTask.trigger({ action: "second" }, { + debounce: { key: "my-key", delay: "10s" } +}); +// Task executes with { action: "first" } +``` + +**Trailing Mode**: Uses payload/options from the most recent trigger. + +```ts +await myTask.trigger( + { data: "latest-value" }, + { + debounce: { + key: "trailing-example", + delay: "10s", + mode: "trailing", + }, + } +); +``` + +In trailing mode, these options update with each trigger: +- `payload` — task input data +- `metadata` — run metadata +- `tags` — run tags (replaces existing) +- `maxAttempts` — retry attempts +- `maxDuration` — maximum compute time +- `machine` — machine preset + +### Important Notes + +- Idempotency keys take precedence over debounce settings +- Compatible with `triggerAndWait()` — parent runs block correctly on debounced execution +- Debounce key is scoped to the task + ## Concurrency & Queues ```ts @@ -339,6 +478,8 @@ export const publicWorkflow = task({ - **Metadata**: Track progress for long-running tasks - **Machines**: Match machine size to computational requirements - **Tags**: Use consistent naming patterns for filtering +- **Debouncing**: Use for user activity, webhooks, and notification batching +- **Batch triggering**: Use for bulk operations up to 1,000 items - **Error Handling**: Distinguish between retryable and fatal errors Design tasks to be stateless, idempotent, and resilient to failures. Use metadata for state tracking and queues for resource management. 
diff --git a/.claude/skills/trigger-dev-tasks/basic-tasks.md b/.claude/skills/trigger-dev-tasks/basic-tasks.md index 5bf91d60b7..56bff34076 100644 --- a/.claude/skills/trigger-dev-tasks/basic-tasks.md +++ b/.claude/skills/trigger-dev-tasks/basic-tasks.md @@ -1,4 +1,4 @@ -# Trigger.dev Basic Tasks +# Trigger.dev Basic Tasks (v4) **MUST use `@trigger.dev/sdk`, NEVER `client.defineJob`** @@ -58,13 +58,46 @@ const handle = await tasks.trigger("process-data", { data: [{ id: 1 }, { id: 2 }], }); -// Batch trigger +// Batch trigger (up to 1,000 items, 3MB per payload) const batchHandle = await tasks.batchTrigger("process-data", [ { payload: { userId: "123", data: [{ id: 1 }] } }, { payload: { userId: "456", data: [{ id: 2 }] } }, ]); ``` +### Debounced Triggering + +Consolidate multiple triggers into a single execution: + +```ts +// Multiple rapid triggers with same key = single execution +await myTask.trigger( + { userId: "123" }, + { + debounce: { + key: "user-123-update", // Unique key for debounce group + delay: "5s", // Wait before executing + }, + } +); + +// Trailing mode: use payload from LAST trigger +await myTask.trigger( + { data: "latest-value" }, + { + debounce: { + key: "trailing-example", + delay: "10s", + mode: "trailing", // Default is "leading" (first payload) + }, + } +); +``` + +**Debounce modes:** +- `leading` (default): Uses payload from first trigger, subsequent triggers only reschedule +- `trailing`: Uses payload from most recent trigger + ### From Inside Tasks (with Result handling) ```ts @@ -149,6 +182,7 @@ export const taskWithWaits = task({ - **Result vs Output**: `triggerAndWait()` returns a `Result` object with `ok`, `output`, `error` properties - NOT the direct task output - **Type safety**: Use `import type` for task references when triggering from backend - **Waits > 5 seconds**: Automatically checkpointed, don't count toward compute usage +- **Debounce + idempotency**: Idempotency keys take precedence over debounce settings ## NEVER Use (v2 deprecated) diff --git a/rules/4.3.0/advanced-tasks.md b/rules/4.3.0/advanced-tasks.md new file mode 100644 index 0000000000..32a00337f8 --- /dev/null +++ b/rules/4.3.0/advanced-tasks.md @@ -0,0 +1,485 @@ +# Trigger.dev Advanced Tasks (v4) + +**Advanced patterns and features for writing tasks** + +## Tags & Organization + +```ts +import { task, tags } from "@trigger.dev/sdk"; + +export const processUser = task({ + id: "process-user", + run: async (payload: { userId: string; orgId: string }, { ctx }) => { + // Add tags during execution + await tags.add(`user_${payload.userId}`); + await tags.add(`org_${payload.orgId}`); + + return { processed: true }; + }, +}); + +// Trigger with tags +await processUser.trigger( + { userId: "123", orgId: "abc" }, + { tags: ["priority", "user_123", "org_abc"] } // Max 10 tags per run +); + +// Subscribe to tagged runs +for await (const run of runs.subscribeToRunsWithTag("user_123")) { + console.log(`User task ${run.id}: ${run.status}`); +} +``` + +**Tag Best Practices:** + +- Use prefixes: `user_123`, `org_abc`, `video:456` +- Max 10 tags per run, 1-64 characters each +- Tags don't propagate to child tasks automatically + +## Batch Triggering v2 + +Enhanced batch triggering with larger payloads and streaming ingestion. 
+ +### Limits + +- **Maximum batch size**: 1,000 items (increased from 500) +- **Payload per item**: 3MB each (increased from 1MB combined) +- Payloads > 512KB automatically offload to object storage + +### Rate Limiting (per environment) + +| Tier | Bucket Size | Refill Rate | +|------|-------------|-------------| +| Free | 1,200 runs | 100 runs/10 sec | +| Hobby | 5,000 runs | 500 runs/5 sec | +| Pro | 5,000 runs | 500 runs/5 sec | + +### Concurrent Batch Processing + +| Tier | Concurrent Batches | +|------|-------------------| +| Free | 1 | +| Hobby | 10 | +| Pro | 10 | + +### Usage + +```ts +import { myTask } from "./trigger/myTask"; + +// Basic batch trigger (up to 1,000 items) +const runs = await myTask.batchTrigger([ + { payload: { userId: "user-1" } }, + { payload: { userId: "user-2" } }, + { payload: { userId: "user-3" } }, +]); + +// Batch trigger with wait +const results = await myTask.batchTriggerAndWait([ + { payload: { userId: "user-1" } }, + { payload: { userId: "user-2" } }, +]); + +for (const result of results) { + if (result.ok) { + console.log("Result:", result.output); + } +} + +// With per-item options +const batchHandle = await myTask.batchTrigger([ + { + payload: { userId: "123" }, + options: { + idempotencyKey: "user-123-batch", + tags: ["priority"], + }, + }, + { + payload: { userId: "456" }, + options: { + idempotencyKey: "user-456-batch", + }, + }, +]); +``` + +## Debouncing + +Consolidate multiple triggers into a single execution by debouncing task runs with a unique key and delay window. + +### Use Cases + +- **User activity updates**: Batch rapid user actions into a single run +- **Webhook deduplication**: Handle webhook bursts without redundant processing +- **Search indexing**: Combine document updates instead of processing individually +- **Notification batching**: Group notifications to prevent user spam + +### Basic Usage + +```ts +await myTask.trigger( + { userId: "123" }, + { + debounce: { + key: "user-123-update", // Unique identifier for debounce group + delay: "5s", // Wait duration ("5s", "1m", or milliseconds) + }, + } +); +``` + +### Execution Modes + +**Leading Mode** (default): Uses payload/options from the first trigger; subsequent triggers only reschedule execution time. + +```ts +// First trigger sets the payload +await myTask.trigger({ action: "first" }, { + debounce: { key: "my-key", delay: "10s" } +}); + +// Second trigger only reschedules - payload remains "first" +await myTask.trigger({ action: "second" }, { + debounce: { key: "my-key", delay: "10s" } +}); +// Task executes with { action: "first" } +``` + +**Trailing Mode**: Uses payload/options from the most recent trigger. 
+ +```ts +await myTask.trigger( + { data: "latest-value" }, + { + debounce: { + key: "trailing-example", + delay: "10s", + mode: "trailing", + }, + } +); +``` + +In trailing mode, these options update with each trigger: +- `payload` — task input data +- `metadata` — run metadata +- `tags` — run tags (replaces existing) +- `maxAttempts` — retry attempts +- `maxDuration` — maximum compute time +- `machine` — machine preset + +### Important Notes + +- Idempotency keys take precedence over debounce settings +- Compatible with `triggerAndWait()` — parent runs block correctly on debounced execution +- Debounce key is scoped to the task + +## Concurrency & Queues + +```ts +import { task, queue } from "@trigger.dev/sdk"; + +// Shared queue for related tasks +const emailQueue = queue({ + name: "email-processing", + concurrencyLimit: 5, // Max 5 emails processing simultaneously +}); + +// Task-level concurrency +export const oneAtATime = task({ + id: "sequential-task", + queue: { concurrencyLimit: 1 }, // Process one at a time + run: async (payload) => { + // Critical section - only one instance runs + }, +}); + +// Per-user concurrency +export const processUserData = task({ + id: "process-user-data", + run: async (payload: { userId: string }) => { + // Override queue with user-specific concurrency + await childTask.trigger(payload, { + queue: { + name: `user-${payload.userId}`, + concurrencyLimit: 2, + }, + }); + }, +}); + +export const emailTask = task({ + id: "send-email", + queue: emailQueue, // Use shared queue + run: async (payload: { to: string }) => { + // Send email logic + }, +}); +``` + +## Error Handling & Retries + +```ts +import { task, retry, AbortTaskRunError } from "@trigger.dev/sdk"; + +export const resilientTask = task({ + id: "resilient-task", + retry: { + maxAttempts: 10, + factor: 1.8, // Exponential backoff multiplier + minTimeoutInMs: 500, + maxTimeoutInMs: 30_000, + randomize: false, + }, + catchError: async ({ error, ctx }) => { + // Custom error handling + if (error.code === "FATAL_ERROR") { + throw new AbortTaskRunError("Cannot retry this error"); + } + + // Log error details + console.error(`Task ${ctx.task.id} failed:`, error); + + // Allow retry by returning nothing + return { retryAt: new Date(Date.now() + 60000) }; // Retry in 1 minute + }, + run: async (payload) => { + // Retry specific operations + const result = await retry.onThrow( + async () => { + return await unstableApiCall(payload); + }, + { maxAttempts: 3 } + ); + + // Conditional HTTP retries + const response = await retry.fetch("https://api.example.com", { + retry: { + maxAttempts: 5, + condition: (response, error) => { + return response?.status === 429 || response?.status >= 500; + }, + }, + }); + + return result; + }, +}); +``` + +## Machines & Performance + +```ts +export const heavyTask = task({ + id: "heavy-computation", + machine: { preset: "large-2x" }, // 8 vCPU, 16 GB RAM + maxDuration: 1800, // 30 minutes timeout + run: async (payload, { ctx }) => { + // Resource-intensive computation + if (ctx.machine.preset === "large-2x") { + // Use all available cores + return await parallelProcessing(payload); + } + + return await standardProcessing(payload); + }, +}); + +// Override machine when triggering +await heavyTask.trigger(payload, { + machine: { preset: "medium-1x" }, // Override for this run +}); +``` + +**Machine Presets:** + +- `micro`: 0.25 vCPU, 0.25 GB RAM +- `small-1x`: 0.5 vCPU, 0.5 GB RAM (default) +- `small-2x`: 1 vCPU, 1 GB RAM +- `medium-1x`: 1 vCPU, 2 GB RAM +- `medium-2x`: 2 vCPU, 4 
GB RAM +- `large-1x`: 4 vCPU, 8 GB RAM +- `large-2x`: 8 vCPU, 16 GB RAM + +## Idempotency + +```ts +import { task, idempotencyKeys } from "@trigger.dev/sdk"; + +export const paymentTask = task({ + id: "process-payment", + retry: { + maxAttempts: 3, + }, + run: async (payload: { orderId: string; amount: number }) => { + // Automatically scoped to this task run, so if the task is retried, the idempotency key will be the same + const idempotencyKey = await idempotencyKeys.create(`payment-${payload.orderId}`); + + // Ensure payment is processed only once + await chargeCustomer.trigger(payload, { + idempotencyKey, + idempotencyKeyTTL: "24h", // Key expires in 24 hours + }); + }, +}); + +// Payload-based idempotency +import { createHash } from "node:crypto"; + +function createPayloadHash(payload: any): string { + const hash = createHash("sha256"); + hash.update(JSON.stringify(payload)); + return hash.digest("hex"); +} + +export const deduplicatedTask = task({ + id: "deduplicated-task", + run: async (payload) => { + const payloadHash = createPayloadHash(payload); + const idempotencyKey = await idempotencyKeys.create(payloadHash); + + await processData.trigger(payload, { idempotencyKey }); + }, +}); +``` + +## Metadata & Progress Tracking + +```ts +import { task, metadata } from "@trigger.dev/sdk"; + +export const batchProcessor = task({ + id: "batch-processor", + run: async (payload: { items: any[] }, { ctx }) => { + const totalItems = payload.items.length; + + // Initialize progress metadata + metadata + .set("progress", 0) + .set("totalItems", totalItems) + .set("processedItems", 0) + .set("status", "starting"); + + const results = []; + + for (let i = 0; i < payload.items.length; i++) { + const item = payload.items[i]; + + // Process item + const result = await processItem(item); + results.push(result); + + // Update progress + const progress = ((i + 1) / totalItems) * 100; + metadata + .set("progress", progress) + .increment("processedItems", 1) + .append("logs", `Processed item ${i + 1}/${totalItems}`) + .set("currentItem", item.id); + } + + // Final status + metadata.set("status", "completed"); + + return { results, totalProcessed: results.length }; + }, +}); + +// Update parent metadata from child task +export const childTask = task({ + id: "child-task", + run: async (payload, { ctx }) => { + // Update parent task metadata + metadata.parent.set("childStatus", "processing"); + metadata.root.increment("childrenCompleted", 1); + + return { processed: true }; + }, +}); +``` + +## Logging & Tracing + +```ts +import { task, logger } from "@trigger.dev/sdk"; + +export const tracedTask = task({ + id: "traced-task", + run: async (payload, { ctx }) => { + logger.info("Task started", { userId: payload.userId }); + + // Custom trace with attributes + const user = await logger.trace( + "fetch-user", + async (span) => { + span.setAttribute("user.id", payload.userId); + span.setAttribute("operation", "database-fetch"); + + const userData = await database.findUser(payload.userId); + span.setAttribute("user.found", !!userData); + + return userData; + }, + { userId: payload.userId } + ); + + logger.debug("User fetched", { user: user.id }); + + try { + const result = await processUser(user); + logger.info("Processing completed", { result }); + return result; + } catch (error) { + logger.error("Processing failed", { + error: error.message, + userId: payload.userId, + }); + throw error; + } + }, +}); +``` + +## Hidden Tasks + +```ts +// Hidden task - not exported, only used internally +const internalProcessor 
= task({ + id: "internal-processor", + run: async (payload: { data: string }) => { + return { processed: payload.data.toUpperCase() }; + }, +}); + +// Public task that uses hidden task +export const publicWorkflow = task({ + id: "public-workflow", + run: async (payload: { input: string }) => { + // Use hidden task internally + const result = await internalProcessor.triggerAndWait({ + data: payload.input, + }); + + if (result.ok) { + return { output: result.output.processed }; + } + + throw new Error("Internal processing failed"); + }, +}); +``` + +## Best Practices + +- **Concurrency**: Use queues to prevent overwhelming external services +- **Retries**: Configure exponential backoff for transient failures +- **Idempotency**: Always use for payment/critical operations +- **Metadata**: Track progress for long-running tasks +- **Machines**: Match machine size to computational requirements +- **Tags**: Use consistent naming patterns for filtering +- **Debouncing**: Use for user activity, webhooks, and notification batching +- **Batch triggering**: Use for bulk operations up to 1,000 items +- **Error Handling**: Distinguish between retryable and fatal errors + +Design tasks to be stateless, idempotent, and resilient to failures. Use metadata for state tracking and queues for resource management. diff --git a/rules/4.3.0/basic-tasks.md b/rules/4.3.0/basic-tasks.md new file mode 100644 index 0000000000..56bff34076 --- /dev/null +++ b/rules/4.3.0/basic-tasks.md @@ -0,0 +1,199 @@ +# Trigger.dev Basic Tasks (v4) + +**MUST use `@trigger.dev/sdk`, NEVER `client.defineJob`** + +## Basic Task + +```ts +import { task } from "@trigger.dev/sdk"; + +export const processData = task({ + id: "process-data", + retry: { + maxAttempts: 10, + factor: 1.8, + minTimeoutInMs: 500, + maxTimeoutInMs: 30_000, + randomize: false, + }, + run: async (payload: { userId: string; data: any[] }) => { + // Task logic - runs for long time, no timeouts + console.log(`Processing ${payload.data.length} items for user ${payload.userId}`); + return { processed: payload.data.length }; + }, +}); +``` + +## Schema Task (with validation) + +```ts +import { schemaTask } from "@trigger.dev/sdk"; +import { z } from "zod"; + +export const validatedTask = schemaTask({ + id: "validated-task", + schema: z.object({ + name: z.string(), + age: z.number(), + email: z.string().email(), + }), + run: async (payload) => { + // Payload is automatically validated and typed + return { message: `Hello ${payload.name}, age ${payload.age}` }; + }, +}); +``` + +## Triggering Tasks + +### From Backend Code + +```ts +import { tasks } from "@trigger.dev/sdk"; +import type { processData } from "./trigger/tasks"; + +// Single trigger +const handle = await tasks.trigger("process-data", { + userId: "123", + data: [{ id: 1 }, { id: 2 }], +}); + +// Batch trigger (up to 1,000 items, 3MB per payload) +const batchHandle = await tasks.batchTrigger("process-data", [ + { payload: { userId: "123", data: [{ id: 1 }] } }, + { payload: { userId: "456", data: [{ id: 2 }] } }, +]); +``` + +### Debounced Triggering + +Consolidate multiple triggers into a single execution: + +```ts +// Multiple rapid triggers with same key = single execution +await myTask.trigger( + { userId: "123" }, + { + debounce: { + key: "user-123-update", // Unique key for debounce group + delay: "5s", // Wait before executing + }, + } +); + +// Trailing mode: use payload from LAST trigger +await myTask.trigger( + { data: "latest-value" }, + { + debounce: { + key: "trailing-example", + delay: "10s", + mode: 
"trailing", // Default is "leading" (first payload) + }, + } +); +``` + +**Debounce modes:** +- `leading` (default): Uses payload from first trigger, subsequent triggers only reschedule +- `trailing`: Uses payload from most recent trigger + +### From Inside Tasks (with Result handling) + +```ts +export const parentTask = task({ + id: "parent-task", + run: async (payload) => { + // Trigger and continue + const handle = await childTask.trigger({ data: "value" }); + + // Trigger and wait - returns Result object, NOT task output + const result = await childTask.triggerAndWait({ data: "value" }); + if (result.ok) { + console.log("Task output:", result.output); // Actual task return value + } else { + console.error("Task failed:", result.error); + } + + // Quick unwrap (throws on error) + const output = await childTask.triggerAndWait({ data: "value" }).unwrap(); + + // Batch trigger and wait + const results = await childTask.batchTriggerAndWait([ + { payload: { data: "item1" } }, + { payload: { data: "item2" } }, + ]); + + for (const run of results) { + if (run.ok) { + console.log("Success:", run.output); + } else { + console.log("Failed:", run.error); + } + } + }, +}); + +export const childTask = task({ + id: "child-task", + run: async (payload: { data: string }) => { + return { processed: payload.data }; + }, +}); +``` + +> Never wrap triggerAndWait or batchTriggerAndWait calls in a Promise.all or Promise.allSettled as this is not supported in Trigger.dev tasks. + +## Waits + +```ts +import { task, wait } from "@trigger.dev/sdk"; + +export const taskWithWaits = task({ + id: "task-with-waits", + run: async (payload) => { + console.log("Starting task"); + + // Wait for specific duration + await wait.for({ seconds: 30 }); + await wait.for({ minutes: 5 }); + await wait.for({ hours: 1 }); + await wait.for({ days: 1 }); + + // Wait until specific date + await wait.until({ date: new Date("2024-12-25") }); + + // Wait for token (from external system) + await wait.forToken({ + token: "user-approval-token", + timeoutInSeconds: 3600, // 1 hour timeout + }); + + console.log("All waits completed"); + return { status: "completed" }; + }, +}); +``` + +> Never wrap wait calls in a Promise.all or Promise.allSettled as this is not supported in Trigger.dev tasks. + +## Key Points + +- **Result vs Output**: `triggerAndWait()` returns a `Result` object with `ok`, `output`, `error` properties - NOT the direct task output +- **Type safety**: Use `import type` for task references when triggering from backend +- **Waits > 5 seconds**: Automatically checkpointed, don't count toward compute usage +- **Debounce + idempotency**: Idempotency keys take precedence over debounce settings + +## NEVER Use (v2 deprecated) + +```ts +// BREAKS APPLICATION +client.defineJob({ + id: "job-id", + run: async (payload, io) => { + /* ... 
*/ + }, +}); +``` + +Use SDK (`@trigger.dev/sdk`), check `result.ok` before accessing `result.output` diff --git a/rules/manifest.json b/rules/manifest.json index 1c481d9277..64d9a86139 100644 --- a/rules/manifest.json +++ b/rules/manifest.json @@ -1,7 +1,7 @@ { "name": "trigger.dev", "description": "Trigger.dev coding agent rules", - "currentVersion": "4.1.0", + "currentVersion": "4.3.0", "versions": { "4.0.0": { "options": [ @@ -100,6 +100,55 @@ "installStrategy": "claude-code-subagent" } ] + }, + "4.3.0": { + "options": [ + { + "name": "basic", + "title": "Basic tasks", + "label": "Only the most important rules for writing basic Trigger.dev tasks", + "path": "4.3.0/basic-tasks.md", + "tokens": 1400 + }, + { + "name": "advanced-tasks", + "title": "Advanced tasks", + "label": "Comprehensive rules to help you write advanced Trigger.dev tasks", + "path": "4.3.0/advanced-tasks.md", + "tokens": 3500 + }, + { + "name": "config", + "title": "Configuring Trigger.dev", + "label": "Configure your Trigger.dev project with a trigger.config.ts file", + "path": "4.1.0/config.md", + "tokens": 1900, + "applyTo": "**/trigger.config.ts" + }, + { + "name": "scheduled-tasks", + "title": "Scheduled Tasks", + "label": "How to write and use scheduled Trigger.dev tasks", + "path": "4.0.0/scheduled-tasks.md", + "tokens": 780 + }, + { + "name": "realtime", + "title": "Realtime", + "label": "How to use realtime in your Trigger.dev tasks and your frontend", + "path": "4.1.0/realtime.md", + "tokens": 1700 + }, + { + "name": "claude-code-agent", + "title": "Claude Code Agent", + "label": "An expert Trigger.dev developer as a Claude Code subagent", + "path": "4.0.0/claude-code-agent.md", + "tokens": 2700, + "client": "claude-code", + "installStrategy": "claude-code-subagent" + } + ] } } } \ No newline at end of file From 5f355789232ff99891f325ac1948d6ac4940b1d0 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Sun, 11 Jan 2026 22:10:11 +0000 Subject: [PATCH 4/6] Document SDK rules and Claude Code skill in CLAUDE.md --- CLAUDE.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/CLAUDE.md b/CLAUDE.md index bf106b238a..223e01b6e8 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -198,6 +198,25 @@ export const myTask = task({ }); ``` +### SDK Documentation Rules + +The `rules/` directory contains versioned documentation for writing Trigger.dev tasks, distributed to users via the SDK installer. Current version is defined in `rules/manifest.json`. + +- `rules/4.3.0/` - Latest: batch trigger v2 (1,000 items, 3MB payloads), debouncing +- `rules/4.1.0/` - Realtime streams v2, updated config +- `rules/4.0.0/` - Base v4 SDK documentation + +When adding new SDK features, create a new version directory with only the files that changed from the previous version. Update `manifest.json` to point unchanged files to previous versions. + +### Claude Code Skill + +The `.claude/skills/trigger-dev-tasks/` skill provides Claude Code with Trigger.dev task expertise. It includes: + +- `SKILL.md` - Core instructions and patterns +- Reference files for basic tasks, advanced tasks, scheduled tasks, realtime, and config + +Keep the skill in sync with the latest rules version when SDK features change. 
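+
+One quick way to spot drift between the two copies (the skill currently mirrors the same-named files under `rules/`, e.g. `basic-tasks.md`) is to diff them; no output means they match:
+
+```bash
+# Compare the skill's copy of basic-tasks.md against the latest rules version
+diff -u .claude/skills/trigger-dev-tasks/basic-tasks.md rules/4.3.0/basic-tasks.md
+```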
+ ## Testing with hello-world Reference Project First-time setup: From a245e6a8879824fda85b0467fe020fd8f7284ef6 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Mon, 12 Jan 2026 10:20:15 +0000 Subject: [PATCH 5/6] Add instructions for how to run the webapp and the hello world reference project together and trigger tasks and test them --- CLAUDE.md | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 61 insertions(+) diff --git a/CLAUDE.md b/CLAUDE.md index 223e01b6e8..ccfc26d3fd 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -231,3 +231,64 @@ Running: cd references/hello-world pnpm exec trigger dev # or with --log-level debug ``` + +## Local Task Testing Workflow + +This workflow enables Claude Code to run the webapp and trigger dev simultaneously, trigger tasks, and inspect results for testing code changes. + +### Step 1: Start Webapp in Background + +```bash +# Run from repo root with run_in_background: true +pnpm run dev --filter webapp +``` + +Verify webapp is running: + +```bash +curl -s http://localhost:3030/healthcheck # Should return 200 +``` + +### Step 2: Start Trigger Dev in Background + +```bash +# Run from hello-world directory with run_in_background: true +cd references/hello-world && pnpm exec trigger dev +``` + +The worker will build and register tasks. Check output for "Local worker ready [node]" message. + +### Step 3: Trigger and Monitor Tasks via MCP + +Use the Trigger.dev MCP tools to interact with tasks: + +``` +# Get current worker and registered tasks +mcp__trigger__get_current_worker(projectRef: "proj_rrkpdguyagvsoktglnod", environment: "dev") + +# Trigger a task +mcp__trigger__trigger_task( + projectRef: "proj_rrkpdguyagvsoktglnod", + environment: "dev", + taskId: "hello-world", + payload: {"message": "Hello from Claude"} +) + +# List runs to see status +mcp__trigger__list_runs( + projectRef: "proj_rrkpdguyagvsoktglnod", + environment: "dev", + taskIdentifier: "hello-world", + limit: 5 +) +``` + +### Step 4: Monitor Execution + +- Check trigger dev output file for real-time execution logs +- Successful runs show: `Task | Run ID | Success (Xms)` +- Dashboard available at: http://localhost:3030/orgs/references-9dfd/projects/hello-world-97DT/env/dev/runs + +### Key Project Refs + +- hello-world: `proj_rrkpdguyagvsoktglnod` From de72a1af65a8b2c5b6ae7d70e3a362861ba61137 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Mon, 12 Jan 2026 10:58:02 +0000 Subject: [PATCH 6/6] ignore the local .mcp.json file --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index 46ea0e45ea..d0dfea89c5 100644 --- a/.gitignore +++ b/.gitignore @@ -63,4 +63,5 @@ apps/**/public/build /packages/python/src/package.json **/.claude/settings.local.json .mcp.log +.mcp.json .cursor/debug.log \ No newline at end of file