AI decides what to show, not just what to say.
An open-source Generative UI Agent framework: the AI doesn't just return text, it autonomously decides which UI components to render.
Plug in any REST API via YAML config. No backend code changes needed.
Quick Start · Architecture · AG-UI Protocol · Add Services · Contributing
LOOM is a Generative UI (GenUI) Agent framework. Traditional AI apps return plain text. LOOM's AI backend generates text responses and autonomously decides which UI components to render: data lists, comparison tables, charts, weather cards, trip cards, and more, all orchestrated by the AI in real time.
User: "What's the weather like in Beijing tomorrow?"
Traditional AI → A paragraph describing the weather
LOOM AI → Short summary + WeatherCard (temp/humidity/wind) + Graph (7-day trend)
User: "Find me a train from Beijing to Shanghai"
Traditional AI → A paragraph listing trains
LOOM AI → Brief summary + TripCard (train/time/price) + DataList (available options)
| Feature | Description |
|---|---|
| GenUI Dynamic Rendering | AI returns { name, props } instructions, frontend dynamically renders registered components |
| Declarative Service Integration | Add any REST API via YAML config; as little as 5 lines to connect a new data source |
| Multi-Intent Parallel Processing | Single message with multiple needs → automatic task decomposition and parallel execution |
| Emotional Memory System | Time-aware context + behavioral signals + user memory extraction → AI remembers your preferences |
| Narrative Flow | Not just "text + card" stitching, but story-driven information delivery with mood and rhythm |
| Streaming SSE | Real-time streaming responses with interrupt and retry support |
| AG-UI Protocol | Compatible with AG-UI standard protocol, works with CopilotKit and other AG-UI clients |
| Mobile Ready | Capacitor for iOS packaging, mobile-first UI design |
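To make the GenUI contract concrete, here is a hedged TypeScript sketch of the `{ name, props }` instruction shape. The exact prop fields for `WeatherCard` and `Graph` below are illustrative assumptions, not the project's real schemas:

```typescript
// A GenUI instruction pairs a registered component name with its props.
interface ComponentInstruction {
  name: string;
  props: Record<string, unknown>;
}

// Example payload the backend might stream for a weather query
// (the field names inside props are hypothetical).
const instructions: ComponentInstruction[] = [
  { name: "WeatherCard", props: { city: "Beijing", tempC: 21, humidity: 0.4 } },
  { name: "Graph", props: { kind: "line", series: [21, 22, 19, 18, 20, 23, 24] } },
];

// The frontend renders only names it has registered.
const registered = new Set(["WeatherCard", "Graph", "DataList"]);
const renderable = instructions.filter((i) => registered.has(i.name));
```

Filtering against the registry means an unknown component name degrades to "not rendered" instead of breaking the chat stream.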
| Layer | Technologies |
|---|---|
| Frontend | Next.js 15 · React 19 · TypeScript · Tailwind CSS v4 |
| Backend | Python Β· FastAPI Β· LangGraph |
| LLM Gateway | LiteLLM, a unified interface for any LLM provider |
| Protocol | AG-UI (Agent-User Interaction Protocol) |
| Database | MongoDB |
| Mobile | Capacitor (iOS) |
| Testing | Vitest · React Testing Library · Pytest |
Powered by LiteLLM, LOOM works with any LLM provider out of the box. Just set LLM_MODEL and LLM_API_KEY in your .env:
| Provider | Example LLM_MODEL | Notes |
|---|---|---|
| OpenRouter | openrouter/google/gemini-2.5-pro | Access 200+ models through one API key |
| OpenAI | openai/gpt-4o | |
| Anthropic | anthropic/claude-sonnet-4-20250514 | |
| Gemini | gemini/gemini-2.5-pro | |
| DashScope (Qwen) | dashscope/qwen3.5-plus | Recommended for Chinese users |
| DeepSeek | deepseek/deepseek-chat | |
| Any OpenAI-compatible | Set LLM_BASE_URL | Works with any provider that supports the OpenAI API format |
You can also set LLM_FAST_MODEL separately for lightweight tasks (intent recognition, memory extraction) to reduce cost.
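For example, a minimal backend/.env combining a stronger main model with a cheaper fast model might look like this (the flash model slug is illustrative; any LiteLLM model identifier works):

```
LLM_MODEL=openrouter/google/gemini-2.5-pro
LLM_API_KEY=sk-or-...                               # your OpenRouter key
LLM_FAST_MODEL=openrouter/google/gemini-2.5-flash   # optional: cheaper model for intent recognition and memory extraction
```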
LOOM supports web search via YAML-configured REST APIs:
| Service | Best For | Env Var |
|---|---|---|
| Zhipu AI Web Search | Chinese content; better results for Chinese queries | ZHIPU_API_KEY |
| Tavily | International content; deep search with extracted content | TAVILY_API_KEY |
Both can be enabled simultaneously; the AI chooses the most appropriate one based on the query language and context.
git clone https://github.com/qingkongzhiqian/GenUI-LoomAgent.git
cd GenUI-LoomAgent
cp backend/.env.example backend/.env
# Edit backend/.env and fill in your LLM API key
docker compose up

Open http://localhost:3000; frontend, backend, and MongoDB are all running.
- Node.js 20+
- Python 3.10+
- MongoDB (local or cloud)
cd frontend
npm install
cp example.env.local .env.local
npm run dev

cd backend
pip install -r requirements.txt
cp .env.example .env
# Edit .env and fill in your LLM API key (DashScope or OpenAI)
python run.py

Backend runs at http://localhost:8000.
Frontend (frontend/.env.local):
| Variable | Description |
|---|---|
| BACKEND_URL | Backend address for SSR proxy |
| NEXT_PUBLIC_BACKEND_URL | Client-side direct URL (for Capacitor) |
Backend (backend/.env):
| Variable | Description |
|---|---|
| LLM_MODEL | Model identifier (e.g. openrouter/google/gemini-2.5-pro, dashscope/qwen3.5-plus) |
| LLM_API_KEY | API key for your LLM provider |
| LLM_BASE_URL | Custom endpoint (optional, for OpenAI-compatible providers) |
| LLM_FAST_MODEL | Lightweight model for fast tasks (optional, defaults to LLM_MODEL) |
| MONGODB_URI | MongoDB connection string |
| JWT_SECRET | JWT signing key |
| TAVILY_API_KEY | Tavily search API key (optional) |
| ZHIPU_API_KEY | Zhipu AI search API key (optional) |
Generate a JWT secret: python -c "import secrets; print(secrets.token_urlsafe(32))"
Frontend → Vercel (one click):
Set BACKEND_URL to your backend's public URL. Set Root Directory to frontend.
Backend → Any Python host (Railway, Render, fly.io, etc.):
cd backend
pip install -r requirements.txt
python run.py

| Node | Responsibility |
|---|---|
| Initializer | Loads chat history, user memory, environmental context, and emotional context in parallel |
| Planner | Intent recognition and task decomposition; splits complex requests into dependency-ordered execution plans |
| Executor | Runs plan steps; independent steps execute in parallel, dependent steps wait for prerequisites |
| Evaluator | Conditional routing; if steps remain, loop back to Executor, otherwise proceed to Synthesizer (max 5 iterations) |
| Synthesizer | Generates final text response + GenUI component instructions; emits AG-UI events |
User Input
  ↓ Initializer (load history, memory, emotional context)
  ↓ Planner (intent recognition + task decomposition)
  ↓ Executor (parallel sub-task execution)
      ├─ Chat intent → mark as complete
      └─ Service intent → REST API call via adapter
  ↓ Evaluator (check completion, loop or proceed)
  ↓ Synthesizer (refine results → generate text + components)
  ↓ AG-UI Event Stream → Frontend
      ├─ TEXT_MESSAGE_CHUNK (streaming text)
      ├─ TOOL_CALL_START / ARGS / END / RESULT (service calls)
      ├─ CUSTOM genui:components ({ name, props })
      ├─ CUSTOM genui:narrative (mood, opener, insight, next_actions)
      └─ CUSTOM genui:sources (reference links)
  ↓ Component Registry → Dynamic UI Rendering
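The Executor/Evaluator loop can be sketched roughly as follows. The Step shape and the iteration cap mirror the description above, but this is a simplified model, not the backend's actual LangGraph code:

```typescript
interface Step {
  id: string;
  dependsOn: string[];
  run: () => Promise<string>;
}

// Execute steps whose dependencies are satisfied, in parallel,
// looping until all steps finish or the iteration cap is hit
// (mirroring the Evaluator's 5-iteration limit).
async function executePlan(
  steps: Step[],
  maxIterations = 5,
): Promise<Map<string, string>> {
  const results = new Map<string, string>();
  for (let i = 0; i < maxIterations && results.size < steps.length; i++) {
    const ready = steps.filter(
      (s) => !results.has(s.id) && s.dependsOn.every((d) => results.has(d)),
    );
    if (ready.length === 0) break; // unsatisfiable dependencies
    const outputs = await Promise.all(ready.map((s) => s.run()));
    ready.forEach((s, idx) => results.set(s.id, outputs[idx]));
  }
  return results;
}
```

Independent steps land in the same `Promise.all` batch, so they genuinely run in parallel; dependent steps wait for the next iteration.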
The AI can dynamically render any of these registered components:
| Component | Use Case |
|---|---|
| DataList | Lists, bullet points, resource collections |
| DetailPanel | Knowledge cards, entity details |
| DataTable | Comparisons, rankings, parameter tables |
| Graph | Bar / Line / Pie charts |
| TripCard | Travel and transportation info |
| WeatherCard | Weather forecasts |
| MetricCard | KPIs and numeric indicators |
| StepCard | Step-by-step processes |
| QuoteCard | Quotes, definitions, facts |
| POIList | Points of interest |
| LinkPreview | URL previews |
| ClarifyCard | Clarification questions |
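A minimal sketch of how such a registry might dispatch on the AI's { name, props } instructions (the renderers below are placeholders, not the project's React components):

```typescript
type Renderer = (props: Record<string, unknown>) => string;

// Placeholder renderers standing in for the real React components.
const registry: Record<string, Renderer> = {
  DataList: (p) => `<DataList items=${JSON.stringify(p.items)}>`,
  WeatherCard: (p) => `<WeatherCard city=${p.city}>`,
};

// Unknown names degrade gracefully instead of crashing the stream.
function render(name: string, props: Record<string, unknown>): string {
  const renderer = registry[name];
  return renderer ? renderer(props) : `<!-- unregistered component: ${name} -->`;
}
```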
GenUI-LoomAgent is compatible with AG-UI (Agent-User Interaction Protocol), an open standard that defines how AI agents interact with frontend applications in real time.
AG-UI complements MCP and A2A to form a complete Agent protocol stack:
| Protocol | Role |
|---|---|
| MCP | Gives agents access to tools |
| A2A | Agent-to-agent communication |
| AG-UI | Agent-to-user interface (this project) |
The backend sends standard AG-UI events via SSE:
RUN_STARTED → STEP_STARTED → ACTIVITY_SNAPSHOT (execution plan)
→ TOOL_CALL_START → TOOL_CALL_ARGS → TOOL_CALL_END → TOOL_CALL_RESULT
→ TEXT_MESSAGE_CHUNK → CUSTOM("genui:components")
→ CUSTOM("genui:narrative") → RUN_FINISHED
On top of AG-UI standard events, this project uses CUSTOM events for GenUI-specific capabilities:
| Event Name | Purpose |
|---|---|
| genui:components | AI-generated UI component list ([{ name, props }]) |
| genui:narrative | Narrative flow data (mood, insight, suggested actions) |
| genui:clarify | Clarification questions when user intent is ambiguous |
| genui:sources | Reference links and data sources |
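As a rough illustration of consuming these events, the parser below routes TEXT_MESSAGE_CHUNK text and CUSTOM genui:components payloads out of a raw SSE body. The JSON field names (`delta`, `name`, `value`) are assumptions about the wire format, not confirmed against the backend:

```typescript
interface AgUiEvent {
  type: string;
  name?: string;
  value?: unknown;
  delta?: string;
}

// Parse raw SSE "data:" lines and accumulate streamed text plus
// any GenUI component instructions.
function processSse(raw: string): { text: string; components: unknown[] } {
  let text = "";
  const components: unknown[] = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const ev = JSON.parse(line.slice(5)) as AgUiEvent;
    if (ev.type === "TEXT_MESSAGE_CHUNK") {
      text += ev.delta ?? "";
    } else if (ev.type === "CUSTOM" && ev.name === "genui:components") {
      components.push(...(ev.value as unknown[]));
    }
  }
  return { text, components };
}
```

A real client would read the same lines incrementally from a fetch ReadableStream rather than from a single string.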
Any AG-UI compatible frontend client (e.g. CopilotKit, @ag-ui/client) can connect directly to the backend:
import { HttpAgent } from "@ag-ui/client";
const agent = new HttpAgent({
url: "http://localhost:8000/api/chat/stream",
});
const result = await agent.runAgent({
messages: [{ id: "1", role: "user", content: "What's the weather in Beijing?" }],
});

Services are configured declaratively in backend/services.yaml; no backend code changes required.
services:
- id: "my-search"
type: "rest"
name: "My Search API"
description: "Search products from my backend"
endpoint: "https://api.example.com/search"
method: "POST"
headers:
Authorization: "Bearer ${MY_API_KEY}"
parameters_schema:
type: "object"
properties:
query:
type: "string"
description: "Search keyword"
required: ["query"]
ui_hint:
component: "ProductList"
    formatter: "format_products"

| Field | Description |
|---|---|
| id | Unique service identifier |
| description | The AI reads this to decide when to invoke the service; be specific |
| parameters_schema | JSON Schema format; the AI extracts parameters from user input based on it |
| requires_env | Optional; service only enabled when all listed env vars exist |
| payload_defaults | Optional; default fields included in every request |
| timeout | Optional; request timeout in seconds |
See services.example.yaml for more examples.
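To illustrate how a REST adapter could consume such a config, here is a simplified sketch (not the actual backend adapter): ${VAR} placeholders expand from environment variables, and the parameters the AI extracts against parameters_schema become the JSON body:

```typescript
interface RestService {
  endpoint: string;
  method: string;
  headers?: Record<string, string>;
  payload_defaults?: Record<string, unknown>;
}

// Expand ${VAR} placeholders from an environment map.
function expandEnv(value: string, env: Record<string, string>): string {
  return value.replace(/\$\{(\w+)\}/g, (_, key) => env[key] ?? "");
}

// Build the request for a service call; `params` stands in for what
// the LLM extracted from the user's message.
function buildRequest(
  svc: RestService,
  params: Record<string, unknown>,
  env: Record<string, string>,
) {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  for (const [k, v] of Object.entries(svc.headers ?? {})) {
    headers[k] = expandEnv(v, env);
  }
  return {
    url: svc.endpoint,
    method: svc.method,
    headers,
    body: JSON.stringify({ ...(svc.payload_defaults ?? {}), ...params }),
  };
}
```

Resolving secrets at call time (rather than baking them into the YAML) is what lets the config file be committed safely.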
GenUI-LoomAgent/
├── frontend/                 # Next.js frontend
│   └── src/
│       ├── app/chat/         # Chat page
│       ├── components/
│       │   ├── custom-chat/  # Component registry & renderer
│       │   ├── charts/       # Chart components (Recharts)
│       │   └── primitives/   # GenUI components (DataList, TripCard, etc.)
│       ├── hooks/            # useCustomChat and other hooks
│       ├── contexts/         # Auth, language contexts
│       ├── i18n/             # Internationalization (zh/en)
│       ├── lib/              # API client, utilities
│       └── types/            # Shared TypeScript types
│
├── backend/                  # FastAPI + LangGraph backend
│   └── app/
│       ├── agent/
│       │   ├── nodes/        # LangGraph nodes (initializer → planner → executor → evaluator → synthesizer)
│       │   ├── services/     # Service registry, REST adapter
│       │   ├── memory/       # User memory extraction & storage
│       │   ├── emotional/    # Emotional context builder
│       │   └── prompts/      # LLM prompt templates
│       ├── auth/             # JWT authentication
│       ├── crud/             # MongoDB operations
│       └── models/           # Data models
│
├── .github/                  # CI/CD, issue templates, assets
├── docker-compose.yml        # One-command full-stack startup
├── CONTRIBUTING.md
├── CHANGELOG.md
└── LICENSE                   # Apache 2.0
# Frontend (from frontend/)
npm run dev # Dev server
npm run build # Production build
npm run start # Production server
npm run lint # ESLint
npm run typecheck # TypeScript type check
npm test # Vitest
npm run analyze # Bundle size analysis
# Backend (from backend/)
python run.py        # Start server

Contributions welcome! See CONTRIBUTING.md for guidelines on:
- Adding new GenUI components
- Adding new REST API services
- Improving the Agent workflow