LOOM Logo

# GenUI-LoomAgent

AI decides what to show, not just what to say.

An open-source Generative UI Agent framework — the AI doesn't just return text, it autonomously decides which UI components to render.

Plug in any REST API via YAML config. No backend code changes needed.

AG-UI Compatible License: Apache 2.0 Python Next.js

English · 中文

GenUI-LoomAgent Demo

Quick Start · Architecture · AG-UI Protocol · Add Services · Contributing


## What is LOOM?

LOOM is a Generative UI (GenUI) Agent framework. Traditional AI apps return plain text. LOOM's AI backend generates text responses and autonomously decides which UI components to render — data lists, comparison tables, charts, weather cards, trip cards, and more — all orchestrated by the AI in real time.

User: "What's the weather like in Beijing tomorrow?"

```text
Traditional AI  →  A paragraph describing the weather
LOOM AI         →  Short summary + WeatherCard (temp/humidity/wind) + Graph (7-day trend)
```

User: "Find me a train from Beijing to Shanghai"

```text
Traditional AI  →  A paragraph listing trains
LOOM AI         →  Brief summary + TripCard (train/time/price) + DataList (available options)
```

## Core Features

| Feature | Description |
| --- | --- |
| GenUI Dynamic Rendering | AI returns `{ name, props }` instructions; the frontend dynamically renders registered components |
| Declarative Service Integration | Add any REST API via YAML config — 5 lines to connect a new data source |
| Multi-Intent Parallel Processing | A single message with multiple needs → automatic task decomposition and parallel execution |
| Emotional Memory System | Time-aware context + behavioral signals + user memory extraction — the AI remembers your preferences |
| Narrative Flow | Not just "text + card" stitching, but story-driven information delivery with mood and rhythm |
| Streaming SSE | Real-time streaming responses with interrupt and retry support |
| AG-UI Protocol | Compatible with the AG-UI standard protocol; works with CopilotKit and other AG-UI clients |
| Mobile Ready | Capacitor for iOS packaging; mobile-first UI design |

## Tech Stack

| Layer | Technologies |
| --- | --- |
| Frontend | Next.js 15 · React 19 · TypeScript · Tailwind CSS v4 |
| Backend | Python · FastAPI · LangGraph |
| LLM Gateway | LiteLLM — unified interface for any LLM provider |
| Protocol | AG-UI (Agent-User Interaction Protocol) |
| Database | MongoDB |
| Mobile | Capacitor (iOS) |
| Testing | Vitest · React Testing Library · Pytest |

## Supported LLM Providers

Powered by LiteLLM, LOOM works with any LLM provider out of the box. Just set `LLM_MODEL` and `LLM_API_KEY` in your `.env`:

| Provider | Example `LLM_MODEL` | Notes |
| --- | --- | --- |
| OpenRouter | `openrouter/google/gemini-2.5-pro` | Access 200+ models through one API key |
| OpenAI | `openai/gpt-4o` | |
| Anthropic | `anthropic/claude-sonnet-4-20250514` | |
| Google | `gemini/gemini-2.5-pro` | |
| DashScope (Qwen) | `dashscope/qwen3.5-plus` | Recommended for Chinese users |
| DeepSeek | `deepseek/deepseek-chat` | |
| Any OpenAI-compatible | Set `LLM_BASE_URL` | Works with any provider that supports the OpenAI API format |

You can also set `LLM_FAST_MODEL` separately for lightweight tasks (intent recognition, memory extraction) to reduce cost.
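The fallback behavior can be sketched as follows (a minimal illustration, not the project's actual config loader; `load_llm_models` is a hypothetical helper name):

```python
def load_llm_models(env: dict) -> tuple[str, str]:
    """Resolve the main and fast model identifiers.

    LLM_FAST_MODEL is optional and falls back to LLM_MODEL,
    matching the behavior described above.
    """
    model = env["LLM_MODEL"]                       # required, e.g. "openai/gpt-4o"
    fast_model = env.get("LLM_FAST_MODEL", model)  # optional, defaults to LLM_MODEL
    return model, fast_model

# With only LLM_MODEL set, both tasks use the same model:
print(load_llm_models({"LLM_MODEL": "openai/gpt-4o"}))
# → ('openai/gpt-4o', 'openai/gpt-4o')
```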

## Search Services

LOOM supports web search via YAML-configured REST APIs:

| Service | Best For | Env Var |
| --- | --- | --- |
| Zhipu AI Web Search | Chinese content — better results for Chinese queries | `ZHIPU_API_KEY` |
| Tavily | International content — deep search with extracted content | `TAVILY_API_KEY` |

Both can be enabled simultaneously — the AI will choose the most appropriate one based on the query language and context.
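One way to picture that selection is a language-aware preference (purely illustrative; the real choice is made by the LLM, and `pick_search_service` is a hypothetical helper):

```python
def pick_search_service(query: str, enabled: set[str]) -> str:
    """Illustrative heuristic: prefer Zhipu for Chinese queries,
    Tavily otherwise, falling back to whatever is enabled.
    Assumes at least one service is enabled."""
    has_cjk = any("\u4e00" <= ch <= "\u9fff" for ch in query)  # CJK Unified Ideographs
    preferred = "zhipu" if has_cjk else "tavily"
    return preferred if preferred in enabled else next(iter(enabled))

print(pick_search_service("北京明天的天气", {"zhipu", "tavily"}))      # → zhipu
print(pick_search_service("best hiking trails", {"zhipu", "tavily"}))  # → tavily
```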


## 🚀 Quick Start

### Option A: Docker (Recommended)

```bash
git clone https://github.com/qingkongzhiqian/GenUI-LoomAgent.git
cd GenUI-LoomAgent

cp backend/.env.example backend/.env
# Edit backend/.env — fill in your LLM API key

docker compose up
```

Open http://localhost:3000 — the frontend, backend, and MongoDB are all running.

### Option B: Manual Setup

#### Prerequisites

- Node.js 20+
- Python 3.10+
- MongoDB (local or cloud)

#### 1. Frontend

```bash
cd frontend
npm install
cp example.env.local .env.local
npm run dev
```

Open http://localhost:3000

#### 2. Backend

```bash
cd backend
pip install -r requirements.txt
cp .env.example .env
# Edit .env — fill in your LLM API key (DashScope or OpenAI)

python run.py
```

The backend runs at http://localhost:8000

### Configuration

Frontend (`frontend/.env.local`):

| Variable | Description |
| --- | --- |
| `BACKEND_URL` | Backend address for the SSR proxy |
| `NEXT_PUBLIC_BACKEND_URL` | Client-side direct URL (for Capacitor) |

Backend (`backend/.env`):

| Variable | Description |
| --- | --- |
| `LLM_MODEL` | Model identifier (e.g. `openrouter/google/gemini-2.5-pro`, `dashscope/qwen3.5-plus`) |
| `LLM_API_KEY` | API key for your LLM provider |
| `LLM_BASE_URL` | Custom endpoint (optional, for OpenAI-compatible providers) |
| `LLM_FAST_MODEL` | Lightweight model for fast tasks (optional; defaults to `LLM_MODEL`) |
| `MONGODB_URI` | MongoDB connection string |
| `JWT_SECRET` | JWT signing key |
| `TAVILY_API_KEY` | Tavily search API key (optional) |
| `ZHIPU_API_KEY` | Zhipu AI search API key (optional) |

Generate a JWT secret: `python -c "import secrets; print(secrets.token_urlsafe(32))"`

### Deploy

Frontend → Vercel (one click):

Deploy with Vercel

Set `BACKEND_URL` to your backend's public URL. Set Root Directory to `frontend`.

Backend → any Python host (Railway, Render, fly.io, etc.):

```bash
cd backend
pip install -r requirements.txt
python run.py
```

## LangGraph Nodes

| Node | Responsibility |
| --- | --- |
| Initializer | Loads chat history, user memory, environmental context, and emotional context in parallel |
| Planner | Intent recognition and task decomposition — splits complex requests into dependency-ordered execution plans |
| Executor | Runs plan steps — independent steps execute in parallel; dependent steps wait for prerequisites |
| Evaluator | Conditional routing — if steps remain, loops back to the Executor; otherwise proceeds to the Synthesizer (max 5 iterations) |
| Synthesizer | Generates the final text response + GenUI component instructions; emits AG-UI events |
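The Executor/Evaluator loop can be sketched in plain Python (a simplified illustration of the control flow only, not the actual LangGraph wiring; `run_plan` is a hypothetical name):

```python
MAX_ITERATIONS = 5  # the Evaluator's iteration cap

def run_plan(steps: list[str], execute) -> list[str]:
    """Executor/Evaluator loop: run remaining steps, re-check, repeat."""
    completed: list[str] = []
    for _ in range(MAX_ITERATIONS):
        remaining = [s for s in steps if s not in completed]
        if not remaining:           # Evaluator: nothing left → hand off to Synthesizer
            break
        for step in remaining:      # Executor: run steps (conceptually in parallel)
            completed.append(execute(step))
    return completed

print(run_plan(["weather", "trains"], lambda step: step))
# → ['weather', 'trains']
```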

### Data Flow

```text
User Input
  → Initializer (load history, memory, emotional context)
  → Planner (intent recognition + task decomposition)
  → Executor (parallel sub-task execution)
    ├─ Chat intent → mark as complete
    └─ Service intent → REST API call via adapter
  → Evaluator (check completion, loop or proceed)
  → Synthesizer (refine results → generate text + components)
  → AG-UI Event Stream → Frontend
    ├─ TEXT_MESSAGE_CHUNK  (streaming text)
    ├─ TOOL_CALL_START / ARGS / END / RESULT  (service calls)
    ├─ CUSTOM genui:components  ({ name, props })
    ├─ CUSTOM genui:narrative  (mood, opener, insight, next_actions)
    └─ CUSTOM genui:sources  (reference links)
  → Component Registry → Dynamic UI Rendering
```

## GenUI Components

The AI can dynamically render any of these registered components:

| Component | Use Case |
| --- | --- |
| DataList | Lists, bullet points, resource collections |
| DetailPanel | Knowledge cards, entity details |
| DataTable | Comparisons, rankings, parameter tables |
| Graph | Bar / line / pie charts |
| TripCard | Travel and transportation info |
| WeatherCard | Weather forecasts |
| MetricCard | KPIs and numeric indicators |
| StepCard | Step-by-step processes |
| QuoteCard | Quotes, definitions, facts |
| POIList | Points of interest |
| LinkPreview | URL previews |
| ClarifyCard | Clarification questions |
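A backend-side sketch of the `{ name, props }` contract: instructions that reference an unregistered component are simply dropped before reaching the frontend (illustrative only; `REGISTERED` and `validate_components` are hypothetical names, not the project's actual API):

```python
# Names from the component table above.
REGISTERED = {
    "DataList", "DetailPanel", "DataTable", "Graph", "TripCard",
    "WeatherCard", "MetricCard", "StepCard", "QuoteCard",
    "POIList", "LinkPreview", "ClarifyCard",
}

def validate_components(instructions: list[dict]) -> list[dict]:
    """Keep only well-formed instructions for registered components."""
    return [
        inst for inst in instructions
        if inst.get("name") in REGISTERED and isinstance(inst.get("props"), dict)
    ]

ai_output = [
    {"name": "WeatherCard", "props": {"city": "Beijing", "temp": 21}},
    {"name": "HologramCard", "props": {}},  # unregistered → dropped
]
print(validate_components(ai_output))
# → [{'name': 'WeatherCard', 'props': {'city': 'Beijing', 'temp': 21}}]
```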

## 🔌 AG-UI Protocol

GenUI-LoomAgent is compatible with AG-UI (Agent-User Interaction Protocol) — an open standard that defines how AI agents interact with frontend applications in real time.

### Why AG-UI?

AG-UI complements MCP and A2A to form a complete Agent protocol stack:

| Protocol | Role |
| --- | --- |
| MCP | Gives agents access to tools |
| A2A | Agent-to-agent communication |
| AG-UI | Agent-to-user interface (this project) |

### Event Stream

The backend sends standard AG-UI events via SSE:

```text
RUN_STARTED → STEP_STARTED → ACTIVITY_SNAPSHOT (execution plan)
→ TOOL_CALL_START → TOOL_CALL_ARGS → TOOL_CALL_END → TOOL_CALL_RESULT
→ TEXT_MESSAGE_CHUNK → CUSTOM("genui:components")
→ CUSTOM("genui:narrative") → RUN_FINISHED
```
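On the wire these are ordinary SSE frames. A minimal consumer-side parser sketch (illustrative; the event types match the stream above, but the exact JSON payload shape shown here is an assumption):

```python
import json

def parse_sse(stream: str) -> list[dict]:
    """Split an SSE body into JSON event payloads (data: lines only)."""
    events = []
    for line in stream.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

body = (
    'data: {"type": "RUN_STARTED"}\n'
    '\n'
    'data: {"type": "TEXT_MESSAGE_CHUNK", "delta": "Sunny, 21"}\n'
    '\n'
    'data: {"type": "RUN_FINISHED"}\n'
)
print([e["type"] for e in parse_sse(body)])
# → ['RUN_STARTED', 'TEXT_MESSAGE_CHUNK', 'RUN_FINISHED']
```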

### GenUI Extension Events

On top of AG-UI standard events, this project uses CUSTOM events for GenUI-specific capabilities:

| Event Name | Purpose |
| --- | --- |
| `genui:components` | AI-generated UI component list (`[{ name, props }]`) |
| `genui:narrative` | Narrative flow data (mood, insight, suggested actions) |
| `genui:clarify` | Clarification questions when user intent is ambiguous |
| `genui:sources` | Reference links and data sources |
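A client consuming these CUSTOM events typically dispatches on the event name. A sketch of that dispatch (illustrative; `handle_custom` and the UI-state dict are hypothetical, not part of the protocol):

```python
def handle_custom(name: str, value, ui: dict) -> None:
    """Route GenUI CUSTOM events into the matching UI state bucket."""
    handlers = {
        "genui:components": lambda v: ui.setdefault("components", []).extend(v),
        "genui:narrative":  lambda v: ui.__setitem__("narrative", v),
        "genui:clarify":    lambda v: ui.__setitem__("clarify", v),
        "genui:sources":    lambda v: ui.setdefault("sources", []).extend(v),
    }
    handler = handlers.get(name)
    if handler:          # unknown event names are ignored
        handler(value)

ui_state: dict = {}
handle_custom("genui:components", [{"name": "WeatherCard", "props": {}}], ui_state)
handle_custom("genui:sources", [{"url": "https://example.com"}], ui_state)
print(sorted(ui_state))  # → ['components', 'sources']
```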

### Third-Party Integration

Any AG-UI-compatible frontend client (e.g. CopilotKit, `@ag-ui/client`) can connect directly to the backend:

```typescript
import { HttpAgent } from "@ag-ui/client";

const agent = new HttpAgent({
  url: "http://localhost:8000/api/chat/stream",
});

const result = await agent.runAgent({
  messages: [{ id: "1", role: "user", content: "What's the weather in Beijing?" }],
});
```

## 🔧 Add Services

Services are configured declaratively in `backend/services.yaml` — no backend code changes required.

### REST API Example

```yaml
services:
  - id: "my-search"
    type: "rest"
    name: "My Search API"
    description: "Search products from my backend"
    endpoint: "https://api.example.com/search"
    method: "POST"
    headers:
      Authorization: "Bearer ${MY_API_KEY}"
    parameters_schema:
      type: "object"
      properties:
        query:
          type: "string"
          description: "Search keyword"
      required: ["query"]
    ui_hint:
      component: "ProductList"
      formatter: "format_products"
```
### Configuration Fields

| Field | Description |
| --- | --- |
| `id` | Unique service identifier |
| `description` | The AI reads this to decide when to invoke the service — be specific |
| `parameters_schema` | JSON Schema — the AI extracts parameters from user input based on this |
| `requires_env` | Optional — the service is enabled only when all listed env vars exist |
| `payload_defaults` | Optional — default fields included in every request |
| `timeout` | Optional — request timeout in seconds |

See `services.example.yaml` for more examples.


πŸ“ Project Structure

```text
GenUI-LoomAgent/
├── frontend/                       # Next.js frontend
│   └── src/
│       ├── app/chat/               # Chat page
│       ├── components/
│       │   ├── custom-chat/        # Component registry & renderer
│       │   ├── charts/             # Chart components (Recharts)
│       │   └── primitives/         # GenUI components (DataList, TripCard, etc.)
│       ├── hooks/                  # useCustomChat and other hooks
│       ├── contexts/               # Auth, language contexts
│       ├── i18n/                   # Internationalization (zh/en)
│       ├── lib/                    # API client, utilities
│       └── types/                  # Shared TypeScript types
│
├── backend/                        # FastAPI + LangGraph backend
│   └── app/
│       ├── agent/
│       │   ├── nodes/              # LangGraph nodes (initializer → planner → executor → evaluator → synthesizer)
│       │   ├── services/           # Service registry, REST adapter
│       │   ├── memory/             # User memory extraction & storage
│       │   ├── emotional/          # Emotional context builder
│       │   └── prompts/            # LLM prompt templates
│       ├── auth/                   # JWT authentication
│       ├── crud/                   # MongoDB operations
│       └── models/                 # Data models
│
├── .github/                        # CI/CD, issue templates, assets
├── docker-compose.yml              # One-command full-stack startup
├── CONTRIBUTING.md
├── CHANGELOG.md
└── LICENSE                         # Apache 2.0
```

## 📜 Available Scripts

```bash
# Frontend (from frontend/)
npm run dev              # Dev server
npm run build            # Production build
npm run start            # Production server
npm run lint             # ESLint
npm run typecheck        # TypeScript type check
npm test                 # Vitest
npm run analyze          # Bundle size analysis

# Backend (from backend/)
python run.py            # Start server
```

## 🤝 Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines on:

- Adding new GenUI components
- Adding new REST API services
- Improving the Agent workflow

## 📄 License

Apache 2.0
