Universal AI adapter service for Jodit Editor AI Assistant Pro using Vercel AI SDK.
This service provides a secure, server-side proxy for AI providers (OpenAI, DeepSeek, Claude, etc.) that can be used with Jodit Editor's AI Assistant Pro plugin. It handles API key management, authentication, and request routing to various AI providers.
- 🔒 Secure API Key Management - API keys stored server-side, not exposed to clients
- 🔑 Authentication - Validates API keys (36 characters, UUID format) and referer headers
- 🌐 Multi-Provider Support - OpenAI, DeepSeek, Anthropic, Google (extensible)
- 📡 Streaming Support - Real-time streaming responses using Server-Sent Events (SSE)
- 🛠️ Tool Calling - Full support for function/tool calling
- 🚦 Rate Limiting - Configurable rate limiting with in-memory or Redis backend
- 🔄 Distributed Support - Redis-based rate limiting for multi-instance deployments
- 🚀 Production Ready - Docker support, TypeScript, comprehensive error handling
- 📊 Logging - Winston-based logging with different levels
- 🧪 Testing - Jest with comprehensive test coverage
```
┌─────────────┐          ┌──────────────────┐          ┌─────────────┐
│    Jodit    │  HTTPS   │ Adapter Service  │  HTTPS   │ AI Provider │
│  AI Plugin  ├─────────►│   (This repo)    ├─────────►│  (OpenAI)   │
└─────────────┘          └──────────────────┘          └─────────────┘
    Client                     Server                     External
```
```bash
npm install jodit-ai-adapter
```

Or build and run with Docker:

```bash
docker build -t jodit-ai-adapter .
docker run -p 8082:8082 --env-file .env jodit-ai-adapter
```

Copy the example environment file:

```bash
cp .env.example .env
```

Edit `.env` and add your API keys:
```env
PORT=8082
NODE_ENV=development

# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_DEFAULT_MODEL=gpt-5.2

# CORS (use specific origins in production)
CORS_ORIGIN=*
```

Install dependencies and start the development server:

```bash
npm install
npm run dev
```

The service will be available at http://localhost:8082.
For production, build and start:

```bash
npm run build
npm start
```

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `8082` |
| `NODE_ENV` | Environment mode | `development` |
| `LOG_LEVEL` | Logging level | `debug` (dev), `info` (prod) |
| `CORS_ORIGIN` | CORS allowed origins | `*` |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `OPENAI_DEFAULT_MODEL` | Default OpenAI model | `gpt-5.2` |
| `HTTP_PROXY` | HTTP/SOCKS5 proxy URL | - |
| `RATE_LIMIT_ENABLED` | Enable rate limiting | `false` |
| `RATE_LIMIT_TYPE` | Rate limiter type (`memory` or `redis`) | `memory` |
| `RATE_LIMIT_MAX_REQUESTS` | Max requests per window | `100` |
| `RATE_LIMIT_WINDOW_MS` | Time window in ms | `60000` |
| `REDIS_URL` | Redis connection URL | - |
| `REDIS_PASSWORD` | Redis password | - |
| `REDIS_DB` | Redis database number | `0` |
| `CONFIG_FILE` | Path to JSON config file | - |
You can use a JSON configuration file instead of environment variables:
```json
{
  "port": 8082,
  "debug": true,
  "requestTimeout": 120000,
  "maxRetries": 3,
  "corsOrigin": "*",
  "requireReferer": false,
  "providers": {
    "openai": {
      "type": "openai",
      "defaultModel": "gpt-5.2",
      "apiKey": "sk-..."
    }
  }
}
```

Load it with:

```bash
CONFIG_FILE=./config.json npm start
```

`GET /ai/health`

Returns service status and available providers.
Response:
```json
{
  "status": "ok",
  "timestamp": "2025-01-22T10:30:00.000Z",
  "providers": ["openai"]
}
```

`POST /ai/request`

```http
Content-Type: application/json
Authorization: Bearer 12345678-1234-1234-1234-123456789abc
```

Request Body:
```json
{
  "provider": "openai",
  "context": {
    "mode": "full",
    "messages": [
      {
        "id": "msg_1",
        "role": "user",
        "content": "Hello!",
        "timestamp": 1234567890
      }
    ],
    "tools": [],
    "conversationOptions": {
      "model": "gpt-5.2",
      "temperature": 0.7
    },
    "instructions": "You are a helpful assistant."
  }
}
```

Streaming Response (SSE):
```
event: created
data: {"type":"created","response":{"responseId":"resp_123","content":"","finished":false}}

event: text-delta
data: {"type":"text-delta","delta":"Hello"}

event: text-delta
data: {"type":"text-delta","delta":"!"}

event: completed
data: {"type":"completed","response":{"responseId":"resp_123","content":"Hello!","finished":true}}
```
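On the client, each SSE frame is delimited by a blank line and carries an `event` name plus a JSON `data` payload. The sketch below shows the idea; it is illustrative only (a production reader must also buffer partial frames across network chunks, and the plugin ships its own handling):

```typescript
// Minimal SSE frame parser for the event stream shown above.
// Assumes each call receives whole frames; partial-frame buffering is omitted.
interface SseEvent {
  event: string;
  data: unknown;
}

function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // Frames are separated by a blank line.
  for (const frame of chunk.split('\n\n')) {
    const lines = frame.split('\n').filter((l) => l.trim() !== '');
    if (lines.length === 0) continue;
    let event = 'message'; // SSE default event name
    let data = '';
    for (const line of lines) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data += line.slice(5).trim();
    }
    if (data) events.push({ event, data: JSON.parse(data) });
  }
  return events;
}
```

A `text-delta` frame, for example, parses to `{ event: 'text-delta', data: { type: 'text-delta', delta: 'Hello' } }`.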
`POST /ai/image/generate`

```http
Content-Type: application/json
Authorization: Bearer 12345678-1234-1234-1234-123456789abc
```

Request Body:
```json
{
  "provider": "openai",
  "request": {
    "prompt": "A white siamese cat with blue eyes",
    "model": "dall-e-3",
    "size": "1024x1024",
    "quality": "standard"
  }
}
```

Response:
```json
{
  "success": true,
  "result": {
    "images": [
      {
        "url": "https://example.com/generated-image.png",
        "revisedPrompt": "A white siamese cat with striking blue eyes..."
      }
    ],
    "created": 1700000000,
    "metadata": {
      "model": "dall-e-3",
      "prompt": "A white siamese cat with blue eyes"
    }
  }
}
```

`GET /ai/providers`

```http
Authorization: Bearer 12345678-1234-1234-1234-123456789abc
```

Returns configured providers and their settings.
The service validates:
- API Key Format: Must be 36 characters in UUID format (A-F, 0-9, and hyphens)
- API Key Header: Sent via `Authorization: Bearer <key>` or `x-api-key: <key>`
- Custom Validation: Optional `checkAuthentication` callback
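The documented format check can be sketched as follows (the exact regex the service uses is an assumption; only the 36-character UUID shape is documented):

```typescript
// 36 characters total: five hyphen-separated hexadecimal groups (8-4-4-4-12).
const UUID_RE =
  /^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/;

function isValidApiKey(key: string): boolean {
  return key.length === 36 && UUID_RE.test(key);
}
```

Keys failing this shape are rejected before any provider call is made.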
```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  checkAuthentication: async (apiKey, referer, request) => {
    // Validate API key against your database
    const user = await db.users.findByApiKey(apiKey);
    if (!user || !user.active) {
      return null; // Reject
    }
    return user.id; // Accept and return user ID
  }
});
```

Track AI usage (tokens, costs) with a callback:
```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  checkAuthentication: async (apiKey, referer) => {
    const user = await db.users.findByApiKey(apiKey);
    return user?.id || null;
  },
  onUsage: async (stats) => {
    // Save usage statistics to database
    await db.usage.create({
      userId: stats.userId,
      provider: stats.provider,
      model: stats.model,
      promptTokens: stats.promptTokens,
      completionTokens: stats.completionTokens,
      totalTokens: stats.totalTokens,
      duration: stats.duration,
      timestamp: new Date(stats.timestamp)
    });

    // Update user's token balance
    if (stats.totalTokens) {
      await db.users.decrementTokens(stats.userId, stats.totalTokens);
    }

    console.log(`User ${stats.userId} used ${stats.totalTokens} tokens`);
  }
});
```

Usage Stats Interface:
```typescript
interface UsageStats {
  userId: string;            // User ID from authentication
  apiKey: string;            // API key used
  provider: string;          // AI provider (openai, deepseek, etc.)
  model: string;             // Model used (gpt-5.2, etc.)
  responseId: string;        // Response ID
  promptTokens?: number;     // Input tokens
  completionTokens?: number; // Output tokens
  totalTokens?: number;      // Total tokens
  timestamp: number;         // Request timestamp (ms)
  duration: number;          // Request duration (ms)
  metadata?: Record<string, unknown>; // Additional data
}
```

The service includes built-in rate limiting to prevent abuse and manage resource usage. Rate limiting can be configured to use either in-memory storage (for single-instance deployments) or Redis (for distributed, multi-instance deployments).
```env
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=memory
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000
```

This configuration allows 100 requests per minute per user/IP address.
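Conceptually, the memory backend counts requests per key inside a fixed time window. The following sketch illustrates the windowing math only; it is an assumption, not the service's actual `MemoryRateLimiter` implementation:

```typescript
// Fixed-window counter: at most `maxRequests` per `windowMs` per key.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private maxRequests: number,
    private windowMs: number
  ) {}

  // Returns true if the request identified by `key` is allowed.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New window: reset the counter.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.maxRequests;
  }
}
```

With `maxRequests: 100` and `windowMs: 60000`, the 101st request inside a minute is rejected and the counter resets when the window rolls over.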
For production deployments with multiple instances, use Redis:
```env
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
RATE_LIMIT_MAX_REQUESTS=100
RATE_LIMIT_WINDOW_MS=60000
REDIS_URL=redis://localhost:6379
REDIS_PASSWORD=your-password
REDIS_DB=0
```

For development, use the provided Docker Compose configuration:
```bash
# Start Redis only
docker-compose -f docker-compose.dev.yml up -d

# Start Redis with monitoring UI
docker-compose -f docker-compose.dev.yml up -d
# Access Redis Commander at http://localhost:8081
```

Then configure your app to use Redis:
```env
RATE_LIMIT_ENABLED=true
RATE_LIMIT_TYPE=redis
REDIS_URL=redis://localhost:6379
```

Rate limiting can also be configured programmatically:

```typescript
import { start } from 'jodit-ai-adapter';

await start({
  port: 8082,
  rateLimit: {
    enabled: true,
    type: 'redis',
    maxRequests: 100,
    windowMs: 60000, // 1 minute
    redisUrl: 'redis://localhost:6379',
    keyPrefix: 'rl:'
  },
  providers: {
    openai: {
      type: 'openai',
      apiKey: process.env.OPENAI_API_KEY
    }
  }
});
```

When rate limiting is enabled, the following headers are included in responses:
- `X-RateLimit-Limit`: Maximum requests allowed in the window
- `X-RateLimit-Remaining`: Remaining requests in the current window
- `X-RateLimit-Reset`: ISO 8601 timestamp when the rate limit resets
- `Retry-After`: Seconds to wait before retrying (only when the limit is exceeded)
When rate limit is exceeded, the service returns a 429 Too Many Requests error:
```json
{
  "success": false,
  "error": {
    "code": 429,
    "message": "Too many requests, please try again later",
    "details": {
      "limit": 100,
      "current": 101,
      "resetTime": 45000
    }
  }
}
```

By default, rate limiting uses:

- User ID (if authenticated via the `checkAuthentication` callback)
- IP address (fallback if no user ID)

This means authenticated users are tracked by their user ID, while anonymous requests are tracked by IP address.
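That identity resolution can be sketched as below. The `user:`/`ip:` prefixes are an assumption suggested by the `user:admin-` prefix in the skip example, not a documented contract:

```typescript
// Derive the rate-limit bucket key for a request:
// authenticated requests use the user ID, anonymous ones fall back to the IP.
function rateLimitKey(userId: string | null, ip: string): string {
  return userId !== null ? `user:${userId}` : `ip:${ip}`;
}
```

Two anonymous clients behind the same IP therefore share one bucket, while each authenticated user gets their own.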
You can implement custom rate limiting logic:
```typescript
import { start, MemoryRateLimiter } from 'jodit-ai-adapter';

// Create a custom rate limiter with a skip function
const rateLimiter = new MemoryRateLimiter({
  maxRequests: 100,
  windowMs: 60000,
  skip: (key) => {
    // Skip rate limiting for admin users
    return key.startsWith('user:admin-');
  }
});

await start({
  port: 8082,
  // ... other config
});
```

Configure the Jodit AI Assistant Pro plugin to call this service:

```typescript
import { Jodit } from 'jodit-pro';

const editor = Jodit.make('#editor', {
  aiAssistantPro: {
    apiRequest: async (context, signal) => {
      const response = await fetch('http://localhost:8082/ai/request', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': 'Bearer 12345678-1234-1234-1234-123456789abc'
        },
        body: JSON.stringify({
          provider: 'openai',
          context
        }),
        signal
      });

      // Handle streaming
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      // ... streaming logic (see full example in docs)
    }
  }
});
```

See docs/client-integration.md for complete examples.
```
jodit-ai-adapter/
├── src/
│   ├── adapters/        # AI provider adapters
│   │   ├── base-adapter.ts
│   │   ├── openai-adapter.ts
│   │   └── adapter-factory.ts
│   ├── routes/          # Route handlers
│   │   ├── ai-request/      # POST /ai/request
│   │   ├── ai-providers/    # GET /ai/providers
│   │   ├── image-generate/  # POST /ai/image/generate
│   │   └── health/          # GET /ai/health
│   ├── middlewares/     # Express middlewares
│   │   ├── auth.ts
│   │   └── cors.ts
│   ├── rate-limiter/    # Rate limiting
│   │   ├── memory-rate-limiter.ts
│   │   └── redis-rate-limiter.ts
│   ├── types/           # TypeScript types
│   │   ├── jodit-ai.ts
│   │   ├── config.ts
│   │   └── index.ts
│   ├── helpers/         # Utility functions
│   │   └── logger.ts
│   ├── config/          # Configuration
│   │   └── default-config.ts
│   ├── app.ts           # Express app setup
│   ├── index.ts         # Main entry point
│   └── run.ts           # CLI runner
├── docs/                # Documentation
├── Dockerfile
├── package.json
└── tsconfig.json
```
```bash
npm run dev           # Start development server with hot reload
npm run build         # Build for production
npm start             # Start production server
npm test              # Run tests
npm run test:watch    # Run tests in watch mode
npm run test:coverage # Run tests with coverage
npm run lint          # Lint code
npm run lint:fix      # Lint and fix code
npm run format        # Format code with Prettier
npm run docker:build  # Build Docker image
npm run docker:run    # Run Docker container
```

To add a new provider:

- Create an adapter class extending `BaseAdapter`:
```typescript
// src/adapters/deepseek-adapter.ts
import { BaseAdapter } from './base-adapter';

export class DeepSeekAdapter extends BaseAdapter {
  protected async processRequest(context, signal) {
    // Implementation using Vercel AI SDK
  }
}
```

- Register it in the factory:

```typescript
// src/adapters/adapter-factory.ts
AdapterFactory.adapters.set('deepseek', DeepSeekAdapter);
```

- Add configuration:
```typescript
// src/config/default-config.ts
providers: {
  deepseek: {
    type: 'deepseek',
    apiKey: process.env.DEEPSEEK_API_KEY,
    defaultModel: 'deepseek-chat'
  }
}
```

Run the tests with:

```bash
npm test
```

An example adapter test using nock:

```typescript
import nock from 'nock';
import { OpenAIAdapter } from '../adapters/openai-adapter';

describe('OpenAIAdapter', () => {
  it('should handle streaming response', async () => {
    // Mock OpenAI API
    nock('https://api.openai.com')
      .post('/v1/responses')
      .reply(200, {
        // Mock response
      });

    const adapter = new OpenAIAdapter({
      apiKey: 'test-key'
    });

    // Test adapter
    const result = await adapter.handleRequest(context, signal);
    expect(result.mode).toBe('stream');
  });
});
```

- Never expose API keys in client-side code
- Use HTTPS in production
- Configure CORS properly - don't use `*` in production
- Implement rate limiting (e.g., the built-in rate limiter described above)
- Validate referer headers when `requireReferer: true`
- Use environment variables for sensitive data
- Implement custom authentication for production use
```bash
# Build
docker build -t jodit-ai-adapter .

# Run
docker run -d \
  -p 8082:8082 \
  -e OPENAI_API_KEY=sk-... \
  --name jodit-ai-adapter \
  jodit-ai-adapter
```

Or with Docker Compose:

```yaml
version: '3.8'
services:
  jodit-ai-adapter:
    build: .
    ports:
      - "8082:8082"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - NODE_ENV=production
    restart: unless-stopped
```

**API Key Invalid Format**
- Ensure your API key is exactly 36 characters (UUID format)
- Must contain only A-F, 0-9, and hyphens
**CORS Errors**

- Check the `CORS_ORIGIN` configuration
- Ensure the client origin is allowed

**Streaming Not Working**

- Check that the client properly handles SSE
- Verify the `Content-Type: text/event-stream` header

**Provider Not Found**

- Ensure the provider is configured in the `providers` object
- Check that the provider name matches exactly (case-sensitive)
Contributions are welcome! Please see CONTRIBUTING.md for details.
MIT License - see LICENSE for details.
Chupurnov Valeriy chupurnov@gmail.com