Multi-LLM powered n8n workflow generator, validator, and repair tool. Built with Next.js 16, Monaco Editor, and shadcn/ui.
- z.ai Gateway - GLM-5, GLM-4.7 models
- OpenAI - GPT-4o, GPT-4-turbo, GPT-3.5
- Google Gemini - Gemini 2.0 Flash, Gemini 1.5 Pro
- OpenRouter - 500+ models (Claude, Llama, Mixtral, etc.)
- Groq - Ultra-fast inference (Llama 3.3, Mixtral)
- GLM-5 Local - Local GLM-5 via FastAPI bridge
- Generate - Create n8n workflows from natural language descriptions
- Validate - Multi-stage validation with structural and semantic checks
- Repair - AI-powered automatic fixing of broken workflows
- Enhance - Optimize workflows with error handling and best practices
The Skills System enhances prompts with n8n best practices:
- Intent Detection - Automatically detects workflow intent (trigger, integration, transform, output)
- Pattern Matching - Matches prompts to 10+ workflow patterns
- Best Practices Injection - Adds error handling, validation, and optimization tips
- Complexity Estimation - Estimates node count and workflow complexity
- Integration Detection - Identifies required external services
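To make the intent-detection step above concrete, here is a minimal keyword-based sketch. The real logic lives in lib/skills/intent.ts; the intent names mirror the list above, but the keyword sets and the function name are illustrative assumptions, not project code:

```typescript
// Illustrative sketch only; the project's actual detector is in lib/skills/intent.ts.
type Intent = 'trigger' | 'integration' | 'transform' | 'output';

// Assumed keyword sets, one per intent category listed above.
const KEYWORDS: Record<Intent, string[]> = {
  trigger: ['when', 'every', 'rss', 'webhook', 'schedule'],
  integration: ['slack', 'gmail', 'sheets', 'api', 'http'],
  transform: ['filter', 'convert', 'parse', 'format'],
  output: ['send', 'post', 'write', 'notify', 'save'],
};

function detectIntents(prompt: string): Intent[] {
  const text = prompt.toLowerCase();
  // An intent matches when any of its keywords appears in the prompt.
  return (Object.keys(KEYWORDS) as Intent[]).filter((intent) =>
    KEYWORDS[intent].some((kw) => text.includes(kw))
  );
}
```

For the example prompt "Create an RSS feed reader that sends new items to Slack", a detector like this would flag trigger (rss), integration (slack), and output (sends).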
- Real-time Monaco Editor with JSON validation
- Side-by-side diff viewer for before/after comparison
- Operation history tracking
- Responsive design (mobile, tablet, desktop)
- BYOK (Bring Your Own Key) - API keys never leave browser
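Because keys stay in the browser, the Settings panel shows them masked rather than in full. A minimal sketch of such masking; `maskKey` is a hypothetical helper, not the project's actual function:

```typescript
// Hypothetical helper: show only the first and last few characters of a stored key.
function maskKey(key: string): string {
  // Very short keys are fully hidden rather than partially revealed.
  if (key.length <= 8) return '••••';
  return `${key.slice(0, 4)}${'•'.repeat(4)}${key.slice(-4)}`;
}
```

In the app, a key read from localStorage would pass through a helper like this before rendering, and the raw value would only ever be attached to requests sent to the chosen LLM provider.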
# Clone the repository
git clone https://github.com/turtir-ai/n8n-workflow-studio.git
cd n8n-workflow-studio
# Install dependencies
npm install
# Start development server
npm run dev
# Open http://localhost:3011

Keys are stored locally in browser localStorage and never sent to external servers except the respective LLM providers.
| Provider | Get API Key |
|---|---|
| z.ai | Configured via gateway |
| OpenAI | https://platform.openai.com/api-keys |
| Google Gemini | https://aistudio.google.com/apikey |
| OpenRouter | https://openrouter.ai/keys |
| Groq | https://console.groq.com/keys |
To use GLM-5 locally, create a FastAPI bridge:
# bridge.py
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list
    temperature: float = 0.7
    max_tokens: int = 8192

@app.post("/v1/chat/completions")
async def chat(request: ChatRequest):
    # Your GLM-5 inference code here
    response = glm5_generate(request.messages, request.temperature, request.max_tokens)
    return {"choices": [{"message": {"content": response}}]}
# Run: uvicorn bridge:app --host 0.0.0.0 --port 8765

- Go to Generate tab
- Describe your workflow in natural language
- Example: "Create an RSS feed reader that sends new items to Slack"
- The Skills System will analyze your prompt and show:
- Detected intents (trigger, integration, transform)
- Suggested patterns (RSS Feed Trigger, HTTP Request, Slack Output)
- Estimated complexity and node count
- Click Generate to create the workflow
- Validate and download the JSON
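Under the hood, the steps above post to the /api/llm/generate endpoint documented in the API section. A hedged sketch of building that payload; `buildGeneratePayload` is an illustrative helper (the field names follow the GenerateRequest interface, the model id is just an example):

```typescript
// Illustrative helper, not project code; fields follow the documented GenerateRequest shape.
interface GeneratePayload {
  provider: string;
  model: string;
  mode: 'generate_workflow';
  input: { prompt: string };
  apiKey: string;
  temperature?: number;
  useSkills?: boolean;
}

function buildGeneratePayload(prompt: string, apiKey: string): GeneratePayload {
  return {
    provider: 'openrouter',
    model: 'anthropic/claude-3.5-sonnet', // example model id
    mode: 'generate_workflow',
    input: { prompt },
    apiKey,
    temperature: 0.7,
    useSkills: true, // let the Skills System enhance the prompt
  };
}

// The tab would then POST it:
// fetch('/api/llm/generate', { method: 'POST', body: JSON.stringify(payload) })
```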
- Go to Repair tab
- Upload or paste your n8n workflow JSON
- Click Validate to check for errors
- Review validation results (errors, warnings, suggestions)
- If errors found, click Fix with AI
- Review the diff and download the fixed JSON
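The validate-then-fix flow above maps onto the /api/validate and /api/repair endpoints. A minimal sketch of the decision step; the ValidationResult shape here is an assumption, and the real UI logic lives in RepairTab.tsx:

```typescript
// Assumed response shape from /api/validate (illustrative, not the project's exact type).
interface ValidationResult {
  valid: boolean;
  errors: Array<{ message: string; path?: string }>;
}

// Only offer "Fix with AI" when validation actually surfaced errors.
function needsRepair(result: ValidationResult): boolean {
  return !result.valid && result.errors.length > 0;
}

// Sketch of the flow:
// const res = await fetch('/api/validate', { method: 'POST',
//   body: JSON.stringify({ jsonString, fullValidation: true }) });
// if (needsRepair(await res.json())) { /* POST workflow + errors to /api/repair */ }
```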
- View configured API keys (masked)
- Adjust temperature (0-1)
- Set max tokens (256-32768)
- Clear all keys
portfolyo2/
├── app/
│ ├── api/
│ │ ├── llm/generate/route.ts # Unified LLM generation
│ │ ├── providers/models/route.ts # Dynamic model fetching
│ │ ├── repair/route.ts # Workflow repair
│ │ ├── skills/analyze/route.ts # Skills analysis API
│ │ └── validate/route.ts # JSON validation
│ ├── layout.tsx
│ ├── page.tsx # Main application
│ └── globals.css
├── components/
│ ├── editor/
│ │ ├── MonacoEditor.tsx # Code editor
│ │ └── DiffViewer.tsx # Diff comparison
│ ├── sidebar/
│ │ ├── Sidebar.tsx # Main sidebar
│ │ └── ProviderSelector.tsx # LLM provider UI
│ ├── tabs/
│ │ ├── GenerateTab.tsx # Generation UI
│ │ └── RepairTab.tsx # Repair UI
│ └── ui/ # shadcn/ui components
├── lib/
│ ├── providers/
│ │ ├── index.ts # Provider factory
│ │ ├── base.ts # Base types & utilities
│ │ ├── openrouter.ts
│ │ ├── gemini.ts
│ │ ├── groq.ts
│ │ ├── openai.ts
│ │ ├── zai.ts
│ │ └── glm5.ts # Local GLM-5 bridge
│ ├── skills/
│ │ ├── index.ts # Skills exports
│ │ ├── intent.ts # Intent detection
│ │ ├── patterns.ts # Workflow patterns
│ │ ├── types.ts # Type definitions
│ │ └── executor.ts # Prompt enhancement
│ ├── store.ts # Zustand state
│ └── utils.ts
└── package.json
Unified LLM generation endpoint with skills enhancement.
interface GenerateRequest {
provider: 'openrouter' | 'gemini' | 'groq' | 'openai' | 'zai' | 'glm5';
model: string;
mode: 'generate_workflow' | 'repair_workflow' | 'enhance_workflow' | 'custom';
input: {
prompt?: string;
workflow?: object;
description?: string;
errors?: Array<{ message: string; path?: string }>;
};
apiKey: string;
temperature?: number; // default: 0.7
maxTokens?: number; // default: 8192
useSkills?: boolean; // default: true
}

Validate n8n workflow JSON.
interface ValidateRequest {
jsonString: string;
fullValidation?: boolean;
}

Repair broken workflow using LLM.
interface RepairRequest {
workflow: object;
errors: ValidationError[];
provider: string;
model: string;
apiKey: string;
}

Analyze prompt for skills and patterns.
interface SkillsRequest {
prompt: string;
maxPatterns?: number; // default: 5
}

| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router, Turbopack) |
| Language | TypeScript |
| Styling | Tailwind CSS |
| UI Components | shadcn/ui |
| State | Zustand |
| Editor | Monaco Editor |
| Icons | Lucide React |
# Development server (port 3011)
npm run dev
# Production build
npm run build
# Start production
npm start
# Lint
npm run lint

- Create /lib/providers/newprovider.ts:
import { LLMProvider, GenerateParams, GenerateResult } from './base';

export class NewProvider implements LLMProvider {
  name = 'newprovider';
  models = ['model-1', 'model-2'];

  async generate(params: GenerateParams): Promise<GenerateResult> {
    // Implementation
  }
}

- Register in /lib/providers/index.ts
- Add to PROVIDERS array in /lib/store.ts
- Add icon in /components/sidebar/ProviderSelector.tsx
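For an OpenAI-compatible HTTP provider, the generate method typically POSTs to the provider's /v1/chat/completions endpoint and pulls the text out of the first choice. A hedged sketch of that extraction step (the response shape and helper name are assumptions; the real provider contract is defined in /lib/providers/base.ts):

```typescript
// Assumed OpenAI-style chat-completions response shape (illustrative only).
interface ChatCompletionResponse {
  choices: Array<{ message: { content: string } }>;
}

// Pull the generated text out of the first choice, failing loudly on empty responses.
function extractContent(data: ChatCompletionResponse): string {
  const content = data.choices?.[0]?.message?.content;
  if (!content) throw new Error('Empty completion');
  return content;
}

// Inside NewProvider.generate, after awaiting the provider's HTTP response:
// const data = (await res.json()) as ChatCompletionResponse;
// return { content: extractContent(data) };
```

Note this matches the shape the local GLM-5 bridge returns, so a provider written this way would also work against the FastAPI bridge described earlier.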
- API keys stored in browser localStorage only
- Keys never logged or sent to external servers
- All LLM calls made directly from browser
- No server-side key storage
- HTTPS only in production
- Chrome 90+
- Firefox 90+
- Safari 14+
- Edge 90+
MIT
TT - Security Researcher & Workflow Automation Specialist
- n8n - The workflow automation platform
- shadcn/ui - Beautiful UI components
- Monaco Editor - Code editor
- Tailwind CSS - Utility-first CSS