😈 OpenDaimon — Spring Boot Java AI Agent Framework

OpenDaimon is a self-hosted Java/Spring Boot framework for building extensible AI agents on top of Spring AI. It runs an FSM-based ReAct loop, integrates OpenRouter and Ollama models, and exposes the agent through Telegram, REST API, and Web UI. Use it as a library, a Spring Boot starter, or a ready-to-run Telegram AI assistant for personal use and group chats.

Who it's for

  • Java/Spring teams building conversational AI or internal bots
  • Developers who want one backend with Telegram, REST, and Web UI
  • Users who prefer to run a chat agent on their own infrastructure, with local or OpenRouter models and no external subscriptions
  • Anyone who needs trusted group access (e.g. family or team) without per-user signups elsewhere


Quick Setup

Option 1 — One command (recommended):

mkdir my-bot && cd my-bot
npx @ngirchev/open-daimon

The wizard will:

  • Configure .env with your credentials
  • Let you choose AI provider (OpenRouter or Ollama)
  • For Ollama — check the connection and pull qwen3.5:4b automatically
  • Generate ready-to-run docker-compose.yml and application-local.yml
  • Offer to start the stack immediately

Dependencies

1. Docker (required) — install Docker Desktop and start it. Node.js 18+ required for the npx wizard.

2. Ollama (optional — local AI models) — install from ollama.com. The wizard checks the connection and pulls a model automatically.

3. OpenRouter (optional — cloud AI, free models available):

  1. Sign up at openrouter.ai (GitHub OAuth or email)
  2. Go to openrouter.ai/keys → Create Key → copy the key (starts with sk-or-v1-...) — this is your OPENROUTER_KEY

You need at least one of Ollama or OpenRouter. Both can be active simultaneously.

Telegram bot — see setup-telegram.md: get a token from @BotFather and your user ID from @userinfobot.

After the wizard completes, check that the app started:

docker compose logs -f opendaimon-app

Option 2 — Manual setup (after git clone): See Quick start below.


Why OpenDaimon?

For developers and teams

  • Spring AI as a library — Integrate conversational AI into your apps with agent-style capabilities; plug in only the modules you need (Telegram, REST, UI, Spring AI).
  • FSM-based ReAct agent runtime — The agent path has an explicit FSM loop: think, call tools, observe results, iterate, and produce a final answer.
  • Spring Boot starter — External applications can use opendaimon-spring-boot-starter to get OpenDaimon defaults without importing module configuration manually; the standalone opendaimon-starter-consumer-example shows the consumer setup.
  • Easy to customize for business — Configure the chat agent (prompts, roles, memory, RAG) via properties and optional extensions; no need to fork the whole project.
  • Resilience and prioritization — Built-in bulkhead (Resilience4j) and two user tiers: VIP and regular (plus admin), with configurable concurrency and wait limits.
  • Custom dialog summarization — Long conversations are summarized automatically; context window and triggers are configurable.
  • Open, modular architecture — Spring Boot auto-configurations let you enable/disable features and replace components without touching core code.
  • Ready-made interfaces — Telegram bot, REST API, and Web UI out of the box; two UI languages supported; default and custom system roles for the assistant.
  • Foundation for pipelines — Solid base for building pipelines and integrations with various systems and AI providers for chatbots and automation.

For end users (self-hosted)

  • Your data stays with you — Run the agent on your own machine or server. Use OpenRouter or Ollama (local models); all conversations are stored locally in your database. No need to send private data to third-party APIs or pay for external chat subscriptions.
  • Trusted Telegram groups — Add Telegram groups (e.g. family, friends) as trusted; members get access without signing up on other services and without dealing with per-user limits on external platforms.

Technical highlights

  • Streaming — SSE for REST and Web UI; Telegram receives replies as they are generated (chunk-by-chunk).
  • Telegram agent UX — Agent mode has separate progress/status rendering and final-answer delivery, plus per-user /mode and /thinking controls.
  • Model selection UX — Large Telegram model lists are grouped into Recent, Local/Ollama, Vision, Free, and All categories; dialog menus include localized cancel/close buttons.
  • OpenRouter intelligence — Automatic retry with model switch on rate limits (429) or errors; capability-based model selection (chat, tool calling, web, vision); optional free-model rotation with scheduled registry refresh so VIP/regular users can use free OpenRouter models without manual switching.
  • Multimodal — Images from Telegram (or REST) stored in MinIO and sent to vision-capable models; optional RAG pipeline for PDFs (chunking, embeddings, similarity search).
  • Production-ready — Published to Maven Central; CI (GitHub Actions), SonarCloud, Testcontainers, Flyway migrations, Docker Compose; API keys only in environment variables (no secrets in config files).
  • Observability — Micrometer, Prometheus, Grafana, optional Elasticsearch/Kibana; custom metrics for request timing, bulkhead usage, and OpenRouter stream retries.
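
The retry-with-model-switch behavior described above can be sketched as a simple failover loop. Everything here is a hypothetical stand-in, not the project's actual OpenRouter client: `ProviderException` represents a 429 or provider error, and the `call` function represents the real chat request.

```java
import java.util.List;
import java.util.function.BiFunction;

public class ModelFailover {
    // Hypothetical stand-in for an HTTP 429 / provider error from the gateway.
    static class ProviderException extends RuntimeException {
        ProviderException(String msg) { super(msg); }
    }

    // Tries each candidate model in turn; on a provider error, switches to the
    // next model instead of failing the whole request.
    static String completeWithFailover(List<String> models, String prompt,
                                       BiFunction<String, String, String> call) {
        ProviderException last = null;
        for (String model : models) {
            try {
                return call.apply(model, prompt);
            } catch (ProviderException e) {
                last = e; // rate limited or failed: try the next model
            }
        }
        throw last != null ? last : new ProviderException("no models configured");
    }
}
```

The real implementation additionally filters candidates by capability (tool calling, vision, etc.) and refreshes the free-model registry on a schedule.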

Features

  • Multiple interfaces: Telegram bot, REST API, Web UI
  • Agent runtime: FSM-based ReAct loop, tool calls, observations, streaming progress, and final-answer cleanup
  • Spring AI integration: OpenRouter, Ollama, chat memory, optional RAG; OpenRouter retry and free-model rotation
  • Spring Boot starter: starter dependency with OpenDaimon defaults for external Spring Boot applications, plus a standalone consumer example
  • Streaming: SSE (REST/UI) and chunk-by-chunk replies in Telegram
  • Telegram UX: per-user /mode and /thinking, grouped model selection, recent models, and cancel/close buttons
  • Multimodal: image uploads (MinIO + vision models), optional PDF RAG (embeddings, similarity search)
  • Modular architecture: enable only the modules you need; extensible via Spring auto-configurations
  • Request prioritization: bulkhead (ADMIN/VIP/REGULAR) and per-user concurrency; trusted Telegram groups for shared access
  • Dialog summarization: configurable long-conversation summarization and context window
  • Roles and i18n: default and custom system roles; two UI languages
  • Observability: Prometheus, Grafana, Elasticsearch, Kibana; custom metrics
  • Distribution: Maven Central, Docker images, CI and SonarCloud
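
The dialog-summarization feature above amounts to a sliding context window: when the history exceeds a configured size, older messages are collapsed into a single summary entry so recent turns stay verbatim. A schematic stdlib sketch, not the project's actual memory implementation; the `summarizer` function stands in for a real model call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class SummarizingMemory {
    private final int maxMessages;
    private final Function<List<String>, String> summarizer; // stands in for an AI call
    private final List<String> messages = new ArrayList<>();

    public SummarizingMemory(int maxMessages, Function<List<String>, String> summarizer) {
        this.maxMessages = maxMessages;
        this.summarizer = summarizer;
    }

    // Adds a message; when the window overflows, the older half is replaced
    // by one summary entry while recent turns are kept verbatim.
    public void add(String message) {
        messages.add(message);
        if (messages.size() > maxMessages) {
            int cut = messages.size() / 2;
            String summary = summarizer.apply(new ArrayList<>(messages.subList(0, cut)));
            List<String> recent = new ArrayList<>(messages.subList(cut, messages.size()));
            messages.clear();
            messages.add("[summary] " + summary);
            messages.addAll(recent);
        }
    }

    public List<String> context() { return List.copyOf(messages); }
}
```

In OpenDaimon the window size and summarization triggers are configurable via properties.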

Security and Compatibility Caveats

⚠️ Warning: opendaimon-rest is not secure by design in the current baseline.
Do not expose it directly to end users or as a public API without a dedicated security-hardening pass.

User Priorities and Bulkhead

The system uses a Bulkhead pattern to manage AI request limits based on user priority.

Priority Levels

| Priority | Description | Max Concurrent Requests | Max Wait Time |
|----------|-------------|-------------------------|---------------|
| ADMIN | Bot administrators | 10 (configurable) | 1s |
| VIP | Paid users or channel members | 5 (configurable) | 1s |
| REGULAR | Free users in whitelist | 1 (configurable) | 500ms |
| BLOCKED | Not in whitelist — access denied | 0 | — |

How Priority is Determined

Priority is checked in this order (first match wins):

  1. ADMIN — in config list (admin.ids or admin.channels) OR isAdmin = true in database
  2. BLOCKED — not in whitelist, not in any configured channel
  3. VIP — in config list (vip.ids) OR isPremium = true (Telegram Premium) OR in vip.channels
  4. REGULAR — all other users in whitelist
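
As a sketch, the first-match-wins order above can be written out in plain Java. The `AccessConfig` record and `resolve` method are illustrative stand-ins, not the project's actual classes, and the various channel checks are collapsed into single sets:

```java
import java.util.Set;

public class PriorityResolver {
    enum UserPriority { ADMIN, VIP, REGULAR, BLOCKED }

    // Illustrative stand-in for the real access configuration.
    record AccessConfig(Set<Long> adminIds, Set<Long> vipIds, Set<Long> whitelist,
                        Set<Long> channelMembers) {}

    // First match wins: ADMIN, then BLOCKED, then VIP, then REGULAR.
    static UserPriority resolve(long userId, boolean isAdminInDb, boolean isPremium,
                                AccessConfig cfg) {
        if (cfg.adminIds().contains(userId) || isAdminInDb) return UserPriority.ADMIN;
        boolean whitelisted = cfg.whitelist().contains(userId);
        boolean inChannel = cfg.channelMembers().contains(userId);
        if (!whitelisted && !inChannel) return UserPriority.BLOCKED;
        if (cfg.vipIds().contains(userId) || isPremium) return UserPriority.VIP;
        return UserPriority.REGULAR;
    }
}
```

For example, a user in `vip.ids` resolves to VIP only after passing the ADMIN and BLOCKED checks; everyone else in the whitelist falls through to REGULAR.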

Configuration via Environment Variables

User access is configured via environment variables (not hardcoded in YAML):

Telegram

# Admin users by Telegram ID
TELEGRAM_ACCESS_ADMIN_IDS=123456789,987654321

# Admin channel (members get ADMIN)
TELEGRAM_ACCESS_ADMIN_CHANNELS=-1000000000000,@admins

# VIP users by Telegram ID
TELEGRAM_ACCESS_VIP_IDS=111111111,222222222

# VIP channels (members get VIP)
TELEGRAM_ACCESS_VIP_CHANNELS=-1002000000000,@vipgroup

# Regular users by Telegram ID
TELEGRAM_ACCESS_REGULAR_IDS=333333333

# Regular channels (members get REGULAR)
TELEGRAM_ACCESS_REGULAR_CHANNELS=-1003000000000,@community

REST API

# Admin emails
REST_ACCESS_ADMIN_EMAILS=admin@example.com

# VIP emails
REST_ACCESS_VIP_EMAILS=vip@example.com,premium@example.com

# Regular emails
REST_ACCESS_REGULAR_EMAILS=user@example.com,test@example.com
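
These values are plain comma-separated lists. A hypothetical parser (not the project's code) that turns such a variable into a set of Telegram IDs might look like this; note that channel handles such as `@community` are not numeric IDs and would need separate handling:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

public class AccessListParser {
    // Splits a comma-separated env value like "123,456" into a set of IDs,
    // trimming whitespace and skipping blanks and non-numeric entries
    // (e.g. "@community" channel handles, which are handled elsewhere).
    static Set<Long> parseIds(String raw) {
        if (raw == null || raw.isBlank()) return Set.of();
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .filter(s -> s.matches("-?\\d+"))
                .map(Long::parseLong)
                .collect(Collectors.toSet());
    }
}
```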

Bulkhead Configuration (application.yml)

Edit application.yml to change request limits:

open-daimon:
  common:
    bulkhead:
      enabled: true
      instances:
        ADMIN:
          maxConcurrentCalls: 10
          maxWaitDuration: 1s
        VIP:
          maxConcurrentCalls: 5
          maxWaitDuration: 1s
        REGULAR:
          maxConcurrentCalls: 1
          maxWaitDuration: 500ms
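
The semantics these limits describe (OpenDaimon uses Resilience4j internally) can be approximated with a plain `Semaphore`: at most `maxConcurrentCalls` requests run concurrently, and a caller waits at most `maxWaitDuration` for a free slot before being rejected. A minimal stdlib sketch, not the project's actual `PriorityRequestExecutor`:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class SimpleBulkhead {
    private final Semaphore slots;
    private final long maxWaitMillis;

    public SimpleBulkhead(int maxConcurrentCalls, long maxWaitMillis) {
        this.slots = new Semaphore(maxConcurrentCalls);
        this.maxWaitMillis = maxWaitMillis;
    }

    // Runs the task if a permit frees up within maxWaitMillis; otherwise rejects,
    // mirroring maxConcurrentCalls / maxWaitDuration above.
    public <T> T execute(Supplier<T> task) {
        boolean acquired;
        try {
            acquired = slots.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a slot", e);
        }
        if (!acquired) {
            throw new IllegalStateException("bulkhead full: request rejected");
        }
        try {
            return task.get();
        } finally {
            slots.release();
        }
    }
}
```

In the real setup, one such bulkhead instance exists per priority level, so a flood of REGULAR requests cannot starve ADMIN or VIP traffic.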

Managing Users

  • Add admin: Set TELEGRAM_ACCESS_ADMIN_IDS or REST_ACCESS_ADMIN_EMAILS env variable
  • Add VIP: Set TELEGRAM_ACCESS_VIP_IDS or REST_ACCESS_VIP_EMAILS env variable
  • Add to whitelist (REGULAR): Use TelegramWhitelistService or DB table telegram_whitelist
  • Database fields: isAdmin, isPremium in user tables (legacy, config takes priority)

Startup initialization of direct users: On application startup, all users listed in REST_ACCESS_*_EMAILS and TELEGRAM_ACCESS_*_IDS (admin, VIP, regular) are created or updated in the database with flags set by level. If a user appears in more than one level, the highest level wins (ADMIN > VIP > REGULAR). Groups/channels are not used for this; only the direct ids/emails from config are initialized.

For Telegram, when the bot is available, the initializer calls the getChat API for each configured id to fetch the real username, first name, and last name; new users are then created with these values instead of a placeholder (e.g. id_<telegramId>). If getChat fails (e.g. the user has never chatted with the bot), the placeholder is used.
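
The "highest level wins" merge can be illustrated with an enum ordered by privilege. This is a sketch of the rule only, not the actual initializer class:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StartupInitSketch {
    // Ordered so that a smaller ordinal means a higher privilege level.
    enum Level { ADMIN, VIP, REGULAR }

    // Merges the configured id lists; if an id appears at several levels,
    // the highest one (ADMIN > VIP > REGULAR) is kept.
    static Map<Long, Level> mergeLevels(List<Long> admins, List<Long> vips, List<Long> regulars) {
        Map<Long, Level> result = new HashMap<>();
        regulars.forEach(id -> result.put(id, Level.REGULAR));
        vips.forEach(id -> result.merge(id, Level.VIP,
                (oldL, newL) -> oldL.ordinal() <= newL.ordinal() ? oldL : newL));
        admins.forEach(id -> result.merge(id, Level.ADMIN,
                (oldL, newL) -> oldL.ordinal() <= newL.ordinal() ? oldL : newL));
        return result;
    }
}
```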

Related Files

  • UserPriority.java — enum with priority levels
  • TelegramUserPriorityService.java — Telegram priority logic
  • RestUserPriorityService.java — REST priority logic
  • PriorityRequestExecutor.java — bulkhead execution
  • application.yml — bulkhead limits
  • TelegramProperties.java, RestProperties.java — access configuration

Requirements

  • Java 21 (LTS)
  • Maven 3.6+
  • Docker & Docker Compose (for PostgreSQL, Prometheus, Grafana; optional Elasticsearch, Kibana)

Tech stack

  • Java 21 (LTS), Spring Boot 3.5.13
  • PostgreSQL 17.0 with Flyway migrations
  • Prometheus + Grafana for metrics, Elasticsearch + Kibana for logging

Modules

You can add only the modules you need. All modules use groupId io.github.ngirchev; set opendaimon.version in your POM or use a concrete version.

Module dependency graph

graph TD
    common[opendaimon-common]
    telegram[opendaimon-telegram] --> common
    rest[opendaimon-rest] --> common
    ui[opendaimon-ui] --> rest
    springai[opendaimon-spring-ai] --> common
    starter[opendaimon-spring-boot-starter] --> common
    starter --> springai
    mock[opendaimon-gateway-mock] --> common

Module overview

| Module | Description | Depends on |
|--------|-------------|------------|
| opendaimon-common | Core: entities, services, request prioritization | — |
| opendaimon-telegram | Telegram Bot interface | opendaimon-common |
| opendaimon-rest | REST API (controllers, Swagger) | opendaimon-common |
| opendaimon-ui | Web UI (Thymeleaf) | opendaimon-rest |
| opendaimon-spring-ai | Spring AI (OpenRouter, Ollama, chat memory, RAG) | opendaimon-common |
| opendaimon-spring-boot-starter | Starter with OpenDaimon defaults for external Spring Boot apps | opendaimon-common, opendaimon-spring-ai |
| opendaimon-gateway-mock | Mock AI provider for tests | opendaimon-common |
| opendaimon-app | Bundled runnable application | Telegram, REST, UI, Spring AI, mock gateway |

opendaimon-starter-consumer-example is a standalone consumer project, not a published OpenDaimon module and not part of the root Maven reactor. It exists to verify that a normal external Spring Boot app can consume the starter without manual OpenDaimon configuration imports.

Use the Spring Boot starter in another app

Recommended dependency for external Spring Boot applications that want common OpenDaimon defaults and Spring AI wiring:

<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-spring-boot-starter</artifactId>
    <version>${opendaimon.version}</version>
</dependency>

Add delivery modules such as opendaimon-rest or opendaimon-telegram explicitly when your application needs those interfaces:

<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-rest</artifactId>
    <version>${opendaimon.version}</version>
</dependency>

The starter brings opendaimon-common, opendaimon-spring-ai, OpenDaimon auto-configuration imports, and low-priority defaults from META-INF/opendaimon/opendaimon-defaults.yml. Your application still owns normal Spring Boot infrastructure such as web, JPA, validation, datasource, and secret configuration. For a complete external-app setup, see opendaimon-starter-consumer-example.

Example: Telegram bot + Spring AI

Minimal setup for a Telegram bot with AI:

<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-telegram</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-spring-ai</artifactId>
    <version>${opendaimon.version}</version>
</dependency>

Example: REST API + Web UI + Spring AI

No Telegram; REST and browser UI only:

<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-ui</artifactId>
    <version>${opendaimon.version}</version>
</dependency>
<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-spring-ai</artifactId>
    <version>${opendaimon.version}</version>
</dependency>

Example: All modules

Use the assembled application module (includes Telegram, REST, UI, Spring AI, gateway-mock):

<dependency>
    <groupId>io.github.ngirchev</groupId>
    <artifactId>opendaimon-app</artifactId>
    <version>${opendaimon.version}</version>
</dependency>

Quick start

Docker (fastest)

Pull and run the latest published image — no build needed:

# Pull the image
docker pull ghcr.io/ngirchev/open-daimon:latest

# Run with your environment variables
docker run -p 8080:8080 --env-file .env ghcr.io/ngirchev/open-daimon:latest

Specific version: docker pull ghcr.io/ngirchev/open-daimon:1.2.3

Note: The app requires PostgreSQL, MinIO, and other services. Use docker-compose.yml for a full local setup (see below).

Running the app (no Java experience)

If you are new to Java, follow these steps. You will need a terminal (command line): on Windows use PowerShell or Command Prompt; on macOS/Linux use Terminal.

1. Install Java 21

The app runs on Java (a runtime). You need Java 21 specifically.

  • Windows / macOS / Linux: download and install from Eclipse Temurin (Adoptium) — choose your OS and install the JDK 21.
  • After installation, open a new terminal and run: java -version. You should see something like openjdk version "21.x.x".

2. Install Docker

The app uses PostgreSQL (a database). The easiest way is to run it in Docker.

  • Install Docker Desktop (includes Docker Compose). Start Docker so it is running in the background.

3. Prepare configuration

  • In the project folder, copy the example config: copy .env.example to a new file named .env.
  • Open .env in a text editor and set at least: TELEGRAM_USERNAME, TELEGRAM_TOKEN, OPENROUTER_KEY, POSTGRES_PASSWORD. Do not commit .env (it contains secrets).

4. Start the database

In the terminal, from the project folder:

docker-compose up -d postgres prometheus grafana

5. Build and run

  • If you have the source code and want to build yourself: install Maven (a build tool for Java). Then in the project folder run:
    mvn clean install
    java -jar opendaimon-app/target/opendaimon-app-<version>.jar
  • If someone gave you a ready JAR file: put the JAR in a folder, put your .env in the same folder (or set the same variables in the environment), then run:
    java -jar opendaimon-app-<version>.jar

The app will start. You can open the Web UI or use the Telegram bot according to your configuration. For more options (e.g. run everything in Docker), see the sections below.

Environment variables

Create a .env file in the project root (do not commit it; add .env to .gitignore). Use .env.example as a template:

cp .env.example .env
# Edit .env and set TELEGRAM_USERNAME, TELEGRAM_TOKEN, OPENROUTER_KEY, POSTGRES_PASSWORD, etc.

For local run without Docker Compose you can also export variables in the shell.

Local run (for development)

  1. Start infrastructure:

    docker-compose up -d postgres prometheus grafana
  2. Build the project:

    mvn clean install
  3. Run the application:

    mvn spring-boot:run -pl opendaimon-app

Run with Docker Compose (recommended)

  1. Create .env from .env.example and set required values (see Environment variables above).

    Create application-local.yml for app overrides (optional but recommended):

    cp application-local.yml.example application-local.yml
  2. Build the project:

    mvn clean package -DskipTests
  3. Start all services:

    docker-compose up -d

    Or with image rebuild: docker-compose up -d --build

  4. Check status:

    docker-compose ps
    docker-compose logs -f opendaimon-app

Build and run

Prerequisites

  • Java 21: java -version
  • Maven 3.6+: mvn -version
  • Docker (for DB and monitoring): docker --version

Start infrastructure

# PostgreSQL, Prometheus, Grafana, Elasticsearch, Kibana
docker-compose up -d
docker-compose ps

Build project

mvn clean install
mvn clean install -DskipTests              # without tests
mvn clean install -pl opendaimon-telegram       # single module
mvn clean install -pl opendaimon-app -am        # module and dependencies

Run application

Option 1: Maven (development)

mvn spring-boot:run -pl opendaimon-app

Option 2: Run the built JAR

After mvn clean install (or mvn clean package -pl opendaimon-app -am), run the executable JAR. Set environment variables or use a .env file in the current directory (see Environment variables).

java -jar opendaimon-app/target/opendaimon-app-<version>.jar

JAR name follows the Maven revision property from the parent POM. Use Java 21: java -version.

DB migrations

mvn flyway:migrate
mvn flyway:info
mvn flyway:clean   # use with caution

Server deployment

Detailed production deployment guide: DEPLOYMENT.md

Useful links

After starting the application:

| Service | URL |
|---------|-----|
| Swagger UI | http://localhost:8080/swagger-ui/index.html |
| Actuator Health | http://localhost:8080/actuator/health |
| Prometheus metrics | http://localhost:8080/actuator/prometheus |
| Prometheus UI | http://localhost:9090 |
| Grafana | http://localhost:3000 (admin/admin123456) |
| Kibana | http://localhost:5601 |

Testing

Run all tests

mvn test

Run tests for a specific module

mvn test -pl opendaimon-common
mvn test -pl opendaimon-telegram

Run a specific test

# Run a full test class
mvn test -Dtest=repository.telegram.io.github.ngirchev.opendaimon.common.TelegramUserRepositoryTest -pl opendaimon-app

# Specific method
mvn test "-Dtest=repository.telegram.io.github.ngirchev.opendaimon.common.TelegramUserRepositoryTest#whenSaveUser_thenUserIsSaved" -pl opendaimon-app

# SpringAIGatewayIT (streaming)
mvn test -pl opendaimon-spring-ai -Dtest=SpringAIGatewayIT

Running tests on Windows

  • mvnw.cmd requires JAVA_HOME pointing to a JDK 21. A common path is C:\Users\<user>\.jdks\corretto-21.0.10 (for JDKs downloaded by IntelliJ IDEA); you can look it up under File → Project Structure → SDKs.
  • PowerShell from project root:
    $env:JAVA_HOME = "C:\Users\<user>\.jdks\corretto-21.0.10"; cd c:\path\to\open-daimon; .\mvnw.cmd test -pl opendaimon-spring-ai -Dtest=SpringAIGatewayIT
    (replace <user> and path with your JDK and project location).
  • If a single-module test fails with "Could not find artifact opendaimon-common", run .\mvnw.cmd install -DskipTests first, then the test command.
  • From IntelliJ IDEA: right-click SpringAIGatewayIT → Run 'SpringAIGatewayIT'.

Integration tests

Uses Testcontainers for PostgreSQL:

  • Docker container with PostgreSQL is started automatically
  • Flyway migrations are applied
  • Container is removed after tests
  • TelegramMockGatewayIntegrationTest — main test for the Telegram part
  • SpringAIGatewayOpenRouterIntegrationTest — main test for the Spring AI part
  • SpringAIGatewayIT — streaming test (no Ollama, mocked Flux with delays)

Monitoring and debugging

Logging (Elasticsearch + Kibana)

Logs are sent to Elasticsearch via Logstash (TCP on port 5044). Index pattern: opendaimon-logs-*. The application also writes logs to a local file: logs/opendaimon.log (overwritten on every app start). You can override the file path with environment variable LOG_FILE_PATH.

Quick check for file logs:

tail -f logs/opendaimon.log

To view logs in Kibana:

  1. Open Kibana (http://localhost:5601)
  2. Stack Management → Data Views → Create data view
  3. Configure:
    • Name: opendaimon-logs
    • Index pattern: opendaimon-logs-*
    • Timestamp field: @timestamp
  4. Save, then go to Observability → Logs

Query logs via Dev Tools:

GET opendaimon-logs-*/_search?size=10

Check log count:

curl "http://localhost:9200/opendaimon-logs-*/_count"

Metrics

Metrics are sent to Prometheus and visualized in Grafana. See Monitoring and debugging section above.

Troubleshooting

Flyway migrations not applying

# Check status
mvn flyway:info

# Force apply
mvn flyway:migrate

# Baseline if needed
mvn flyway:baseline

Tests fail with DB error

  • Ensure Docker is running
  • Testcontainers starts PostgreSQL automatically
  • Check logs: docker logs open-daimon-postgres

"Could not find a valid Docker environment" / Status 400 (Windows)

On Windows, Docker Desktop may return 400 over npipe and Testcontainers cannot connect. Enable TCP access to the daemon:

  1. Docker Desktop → Settings → General → enable "Expose daemon on tcp://localhost:2375 without TLS" → Apply & Restart.
  2. Before running tests, set (PowerShell):
    $env:DOCKER_HOST = "tcp://localhost:2375"
  3. Run tests:
    .\mvnw.cmd verify -q
    Or in one line: $env:DOCKER_HOST = "tcp://localhost:2375"; .\mvnw.cmd verify -q

Module cannot see dependencies

# Rebuild with dependencies
mvn clean install -am

# Refresh IDE (IntelliJ IDEA)
File -> Invalidate Caches / Restart

Metrics not showing in Grafana

Logs not appearing in Kibana

  • Verify Elasticsearch has logs: curl "http://localhost:9200/opendaimon-logs-*/_count"
  • Create a Data View in Kibana (see Kibana Setup for Logs)
  • Check Logstash is running: docker compose logs logstash

Project structure

open-daimon/
├── opendaimon-common/        # Core module with shared logic
├── opendaimon-spring-ai/     # Spring AI integration
├── opendaimon-spring-boot-starter/
│                             # Starter for external Spring Boot applications
├── opendaimon-telegram/      # Telegram Bot interface
├── opendaimon-rest/          # REST API interface
├── opendaimon-ui/            # Web UI interface
├── opendaimon-gateway-mock/  # Mock provider for tests
├── opendaimon-app/           # Bundled runnable application
└── opendaimon-starter-consumer-example/
                              # Standalone consumer example, outside the root reactor

Additional commands

Web UI for Ollama

docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

Port forwarding (example)

ssh -N -L 23750:/var/run/docker.sock user@your-server.local

Teardown and full bring-up

docker-compose -H tcp://localhost:23750 down -v
docker-compose -H tcp://localhost:23750 up -d

License

OpenDaimon is licensed under the Apache License, Version 2.0. See LICENSE and NOTICE for details.

The Apache License does not grant trademark rights. If you distribute a fork, modified version, hosted service, or commercial product based on OpenDaimon, use a distinct product name and preserve the required attribution notices. See TRADEMARKS.md.
