@janole/ai-sdk-provider-codex-asp

@janole/ai-sdk-provider-codex-asp is a Vercel AI SDK v6 custom provider for the Codex App Server Protocol.

Status: POC feature-complete for language model usage. Currently tested with codex-cli 0.125.0.

  • LanguageModelV3 provider implementation
  • Streaming (streamText) and non-streaming (generateText)
  • Standard AI SDK tool() support via Codex dynamic tools injection
  • Provider-executed tool protocol for Codex command executions
  • stdio and websocket transports
  • Persistent worker pool with thread management
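As a sketch of selecting a transport (the `websocket` sub-options are omitted here because their exact shape is provider-specific; see the API Reference below and src/provider.ts for the accepted fields):

```typescript
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

// Use the websocket transport instead of the default stdio transport.
// Any endpoint/connection sub-options would go under `websocket`; their
// names are not documented in this README, so they are left out here.
const codex = createCodexAppServer({
  transport: { type: 'websocket' },
});
```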

Installation

npm install @janole/ai-sdk-provider-codex-asp ai

Quick Start

1. Non-streaming (generateText)

import { generateText } from 'ai';
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

const codex = createCodexAppServer({
  defaultModel: 'gpt-5.3-codex',
  clientInfo: { name: 'my-app', version: '0.1.0' },
});

const result = await generateText({
  model: codex.languageModel('gpt-5.3-codex'),
  prompt: 'Write a short release note title for websocket support.',
});

console.log(result.text);

2. Streaming (streamText)

import { streamText } from 'ai';
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

const codex = createCodexAppServer({
  defaultModel: 'gpt-5.3-codex',
  clientInfo: { name: 'my-app', version: '0.1.0' },
});

const result = streamText({
  model: codex('gpt-5.3-codex'),
  prompt: 'Explain JSON-RPC in one paragraph.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

Tools

Use standard AI SDK tool() definitions — the provider automatically injects them into Codex as dynamic tools and routes results back. No Codex-specific API needed.

Requires a persistent transport so tool results can be fed back within the same session:

import { stepCountIs, streamText, tool } from 'ai';
import { z } from 'zod';
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

const codex = createCodexAppServer({
  persistent: { scope: 'global', poolSize: 1, idleTimeoutMs: 60_000 },
});

const result = streamText({
  model: codex('gpt-5.3-codex'),
  prompt: 'Can you check ticket 15 and also the weather in Berlin?',
  tools: {
    lookup_ticket: tool({
      description: 'Look up the current status of a support ticket by its ID.',
      inputSchema: z.object({
        id: z.string().describe('The ticket ID, e.g. "TICK-42".'),
      }),
      execute: async ({ id }) => `Ticket ${id} is open and assigned to team Alpha.`,
    }),
    check_weather: tool({
      description: 'Get the current weather for a given location.',
      inputSchema: z.object({
        location: z.string().describe('City name or coordinates.'),
      }),
      execute: async ({ location }) => `Weather in ${location}: 22°C, sunny`,
    }),
  },
  stopWhen: stepCountIs(5),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

await codex.shutdown();

API Reference

const codex = createCodexAppServer({
  defaultModel?: string,
  clientInfo?: { name, version, title? },  // defaults to package.json
  transport?: { type: 'stdio' | 'websocket', stdio?, websocket? },
  persistent?: { poolSize?, idleTimeoutMs?, scope?, key? },
  compaction?: { shouldCompactOnResume?, strict? }, // optional thread/compact/start before resumed turns
  debug?: { logPackets?, logger? },         // packet-level JSON-RPC debug logging
  defaultThreadSettings?: { cwd?, approvalPolicy?, approvalsReviewer?, sandbox? },
  defaultTurnSettings?: { cwd?, approvalPolicy?, approvalsReviewer?, sandboxPolicy?, model?, effort?, summary? },
  approvals?: { onCommandApproval?, onFileChangeApproval? },
  toolTimeoutMs?: number,                  // default: 30000
  interruptTimeoutMs?: number,             // default: 10000
});

codex(modelId)                // returns a language model instance
codex.languageModel(modelId)  // explicit alias
codex.chat(modelId)           // explicit alias
codex.shutdown()              // clean up persistent workers
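As a hedged sketch of the `debug` option (the `logger` callback signature shown is an assumption; see src/provider.ts for the exact type):

```typescript
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

// Enable packet-level JSON-RPC debug logging. The variadic logger
// signature is an assumption, not the documented type.
const codex = createCodexAppServer({
  debug: {
    logPackets: true,
    logger: (...args: unknown[]) => console.error('[codex-asp]', ...args),
  },
});
```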

Approval callback notes:

  • approvals.onCommandApproval(request) receives the raw generated Codex protocol payload: CommandExecutionRequestApprovalParams.
  • approvals.onFileChangeApproval(request) receives the raw generated Codex protocol payload: FileChangeRequestApprovalParams.
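A minimal sketch of wiring these callbacks, assuming the decision is returned as a string: `"decline"` is taken from the reviewer example later in this README, while the approving counterpart value and the payload field names are assumptions, so the request is only logged here:

```typescript
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

const codex = createCodexAppServer({
  approvals: {
    // Receives the raw CommandExecutionRequestApprovalParams payload.
    // "decline" matches the reviewer example below; the field layout of
    // `request` is not documented here, so it is logged as-is.
    onCommandApproval: async (request) => {
      console.log('command approval requested:', request);
      return 'decline';
    },
    // Receives the raw FileChangeRequestApprovalParams payload.
    onFileChangeApproval: async (request) => {
      console.log('file change approval requested:', request);
      return 'decline';
    },
  },
});
```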

Rich sandbox policy example (turn-level):

const codex = createCodexAppServer({
  defaultTurnSettings: {
    approvalPolicy: "on-request",
    sandboxPolicy: {
      type: "externalSandbox",
      networkAccess: "enabled",
    },
  },
});

Approval reviewer example:

approvalsReviewer supports:

  • "user": ask the human user directly
  • "auto_review": use Codex's risk-based automatic reviewer flow
  • "guardian_subagent": route review to Codex's reviewer subagent

const codex = createCodexAppServer({
  defaultThreadSettings: {
    approvalsReviewer: "auto_review",
  },
});

await streamText({
  model: codex("gpt-5.3-codex"),
  prompt: "Delete the old generated protocol files under src/protocol/app-server-protocol if they are no longer referenced, then regenerate the current ones.",
  providerOptions: codexCallOptions({
    approvalsReviewer: "user",
    approvals: {
      onCommandApproval: async () => "decline",
    },
  }),
});

See src/provider.ts for full type definitions.

Examples

See the examples/ directory.

Run any example with:

npx tsx examples/stream-text.ts

Troubleshooting

  • No such file or command: codex:
    • Install Codex CLI and ensure codex is in PATH.
  • WebSocket is not available in this runtime:
    • Use Node.js 18+ with global WebSocket support, or use stdio transport.
  • Request timeouts:
    • Increase toolTimeoutMs for long-running dynamic tools.
    • Increase interruptTimeoutMs if turn/interrupt acks are slow under heavy load.
  • Empty generated text:
    • Verify Codex emits item/agentMessage/delta and turn/completed notifications.
  • Compaction fails on resumed threads:
    • Set compaction.shouldCompactOnResume: true to always compact resumed threads.
    • Or provide compaction.shouldCompactOnResume: (ctx) => boolean | Promise<boolean> for dynamic decisions.
    • Leave compaction.strict unset/false to continue the turn when thread/compact/start fails.
    • Set compaction.strict: true if you want compaction failures to fail fast.
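The timeout and compaction knobs above can be sketched together (the shape of the `ctx` argument passed to `shouldCompactOnResume` is not documented in this README and is treated as opaque here):

```typescript
import { createCodexAppServer } from '@janole/ai-sdk-provider-codex-asp';

const codex = createCodexAppServer({
  // Raise timeouts for long-running dynamic tools and slow interrupt acks.
  toolTimeoutMs: 120_000,      // default: 30000
  interruptTimeoutMs: 30_000,  // default: 10000
  compaction: {
    // Decide per-resume whether to compact; `ctx` is treated as opaque
    // because its exact shape is not specified here.
    shouldCompactOnResume: async (ctx) => true,
    // `strict` left unset: a failed thread/compact/start does not
    // abort the turn.
  },
});
```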

Development

npm install
npm run build        # ESM + CJS + .d.ts via tsup
npm run qa           # lint + typecheck + test (all-in-one)

Generated Protocol Types

src/protocol/app-server-protocol/ is gitignored, but selected generated files are intentionally tracked with git add -f so protocol shape changes stay visible in PRs.

Important: for every tracked generated file, all imported generated type dependencies (direct + transitive) must also be tracked. Use the local skill .codex/skills/codex-protocol-type-upgrade/SKILL.md for the exact workflow.

When protocol shapes change, clean and regenerate:

rm -rf src/protocol/app-server-protocol
npm run codex:generate-types

Then follow the skill workflow to:

  • adapt runtime mappings if needed
  • add missing generated dependencies with git add -f
  • run npm run typecheck (and focused tests)

License

MIT
