This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
ByteChef is an open-source, low-code API integration and workflow automation platform built on Spring Boot. It serves as both an automation solution and an embedded iPaaS (Integration Platform as a Service) for SaaS products.
All server commands should be run from the project root directory:

```shell
# Build and compile the project
./gradlew clean compileJava

# Run the server locally (requires Docker infrastructure)
cd server
docker compose -f docker-compose.dev.infra.yml up -d
cd ..
./gradlew -p server/apps/server-app bootRun

# Code formatting (must run before commits)
./gradlew spotlessApply

# Run checks and tests
./gradlew check
./gradlew test && ./gradlew testIntegration

# Generate component documentation
./gradlew generateDocumentation
```

Client commands should be run from the client/ directory:
```shell
# Install dependencies
npm install

# Development server
npm run dev

# Code formatting
npm run format

# Linting and type checking
npm run lint
npm run typecheck

# Full check (lint + typecheck)
npm run check

# Build for production
npm run build

# Run tests
npm run test
```

Docker infrastructure for local development:

```shell
# Start PostgreSQL, Redis, and other services
cd server
docker compose -f docker-compose.dev.infra.yml up -d

# Or run everything in Docker
docker compose -f docker-compose.dev.server.yml up -d
```

Tech stack:

- Backend: Java 25 with Spring Boot 3.5.6
- Frontend: React 19 with TypeScript, Vite, TailwindCSS
- Database: PostgreSQL with Liquibase migrations
- Message Broker: Redis (default), supports RabbitMQ, Kafka, JMS, AMQP
- Build System: Gradle with Kotlin DSL
- Code Execution: GraalVM Polyglot (Java, JavaScript, Python, Ruby)
- `atlas/` - Workflow engine core
  - `atlas-coordinator/` - Orchestrates workflow execution
  - `atlas-execution/` - Manages workflow execution lifecycle
  - `atlas-worker/` - Task execution workers
  - `atlas-configuration/` - Workflow configuration management
- `automation/` - iPaaS automation implementation
  - `automation-configuration/` - Project and workflow configuration
  - `automation-connection/` - Connection management
  - `automation-workflow/` - Workflow coordination and execution
  - `automation-task/` - Task management services
- `platform/` - Core infrastructure services
  - `platform-component/` - Component definition and management
  - `platform-connection/` - Connection handling
  - `platform-workflow/` - Workflow management
  - `platform-scheduler/` - Scheduling services
  - `platform-oauth2/` - OAuth2 authentication
  - `platform-webhook/` - Webhook handling
  - `platform-ai/` - AI integration services
- `core/` - Foundational utilities
  - `evaluator/` - Expression evaluation
  - `file-storage/` - File storage abstraction
  - `encryption/` - Encryption services
  - `message/` - Message broker abstraction
Components are located in `server/libs/modules/components/` and follow this pattern:

- Each component has a `ComponentHandler` class with an `@AutoService` annotation
- Components define actions (operations) and triggers (event initiators)
- Connection definitions handle authentication and configuration
- OpenAPI specifications are often included for API-based components
The `server/ee/` directory contains microservices for distributed deployment:

- `api-gateway-app/` - API Gateway with routing
- `configuration-app/` - Configuration management service
- `execution-app/` - Workflow execution service
- `worker-app/` - Task execution workers
- `runtime-job-app/` - Runtime job execution
When working on components in `server/libs/modules/components/`:

- Component Definition Pattern:

```java
// component(...) is statically imported from the component DSL
@AutoService(ComponentHandler.class)
public class ExampleComponentHandler implements ComponentHandler {

    private static final ComponentDefinition COMPONENT_DEFINITION = component("example")
        .title("Example Component")
        .connection(CONNECTION_DEFINITION)
        .actions(/* actions */)
        .triggers(/* triggers */);

    @Override
    public ComponentDefinition getDefinition() {
        return COMPONENT_DEFINITION;
    }
}
```
- Testing Pattern:
  - Component tests are in `./src/test/java/com/bytechef/component/`
  - Running tests auto-generates `.json` definition files in `./src/test/resources/definition/`
  - Delete existing `.json` files before running tests to regenerate them
- Documentation:
  - Component documentation goes in `./src/main/resources/README.md`
  - Run `./gradlew generateDocumentation` to update docs
- Spotless: Code formatting is enforced. Run `./gradlew spotlessApply` before commits
- Checkstyle, PMD, SpotBugs: Static analysis tools are configured
- Tests: All new code should include appropriate tests
- Documentation: Update component documentation when adding features
- Configuration files are in `server/libs/config/`
- SDK components are in `sdks/backend/java/`
- CLI tools are in `cli/`
- Documentation source is in `docs/`
- Avoid method chaining except when the builder pattern is applicable
- Prefer Constructor Injection over Field/Setter Injection
  - Declare all mandatory dependencies as `final` fields and inject them through the constructor
  - Spring will auto-detect if there is only one constructor; no need to add `@Autowired`
  - Constructor injection ensures proper initialization and enables easier unit testing
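A minimal sketch of the rule above (`OrderService` and `OrderRepository` are illustrative names, not ByteChef types):

```java
// Illustrative types -- not part of the ByteChef codebase.
interface OrderRepository {
    long count();
}

class OrderService {

    // Mandatory dependency: declared final and injected through the constructor
    private final OrderRepository orderRepository;

    // Single constructor: Spring injects automatically, no @Autowired required
    OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    long orderCount() {
        return orderRepository.count();
    }
}
```

Because the dependency arrives through the constructor, a unit test can pass a stub without starting a Spring context.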
- Prefer package-private over public for Spring components
  - Declare Controllers, `@Configuration` classes, and `@Bean` methods with default (package-private) visibility whenever possible
  - Reinforces encapsulation while still allowing Spring's classpath scanning to work
- Organize Configuration with Typed Properties
  - Group application-specific configuration properties with a common prefix
  - Bind them to `@ConfigurationProperties` classes with validation annotations
  - Prefer environment variables over profiles for different environments
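A plain-Java sketch of the idea; in the application the record would carry `@ConfigurationProperties(prefix = "...")` plus Jakarta Validation annotations instead of hand-written checks (`MailProperties` and its prefix are hypothetical):

```java
// Hypothetical typed-properties record. In Spring Boot this would be annotated with
// @ConfigurationProperties(prefix = "bytechef.mail") and validated via annotations;
// the compact constructor below stands in for @NotBlank / @Min / @Max.
record MailProperties(String host, int port) {

    MailProperties {
        if (host == null || host.isBlank()) {
            throw new IllegalArgumentException("host must not be blank");
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
    }
}
```

Binding to a typed record like this gives one discoverable, validated home for each prefix instead of scattered `@Value` lookups.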
- Define Clear Transaction Boundaries
  - Define each Service-layer method as a transactional unit
  - Annotate query-only methods with `@Transactional(readOnly = true)`
  - Annotate data-modifying methods with `@Transactional`
  - Keep transactions as brief as possible
- Disable Open Session in View Pattern
  - Set `spring.jpa.open-in-view=false` in application properties
  - Prevents N+1 select problems and forces explicit fetching strategies
- Separate Web Layer from Persistence Layer
  - Don't expose entities directly as responses in controllers
  - Define explicit request and response record (DTO) classes
  - Apply Jakarta Validation annotations on request records
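A hedged sketch of the request/response split; the record names are hypothetical, and the compact constructor stands in for Jakarta Validation annotations (e.g. `@NotBlank`), which would do this work declaratively in the web layer:

```java
// Hypothetical request DTO -- in the controller layer this would carry Jakarta
// Validation annotations (e.g. @NotBlank on name) instead of manual checks.
record CreateProjectRequest(String name, String description) {

    CreateProjectRequest {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
    }
}

// Hypothetical response DTO: the persistence entity itself is never returned.
record ProjectResponse(long id, String name) { }
```

Keeping both directions as explicit records means entity refactorings cannot silently change the public API contract.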
- Follow REST API Design Principles
  - Use versioned, resource-oriented URLs: `/api/v{version}/resources`
  - Use consistent patterns for collections and sub-resources
  - Use `ResponseEntity<T>` for explicit HTTP status codes
  - Use pagination for unbounded collections
  - Use snake_case or camelCase consistently in JSON
- Use Command Objects for Business Operations
  - Create purpose-built command records (e.g., `CreateOrderCommand`) to wrap input data
  - Clearly communicates expected input data to callers
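A sketch of the command-object shape using the `CreateOrderCommand` name from the guideline; `OrderFacade` and the returned id are illustrative placeholders:

```java
import java.util.List;

// Purpose-built command record wrapping the input for one business operation.
// The method signature tells callers exactly what data is expected.
record CreateOrderCommand(String customerId, List<String> skus) { }

class OrderFacade {

    // Hypothetical facade method: validates the command and returns an order id.
    long createOrder(CreateOrderCommand command) {
        if (command.skus().isEmpty()) {
            throw new IllegalArgumentException("order must contain at least one SKU");
        }
        return 42L; // placeholder for real persistence logic
    }
}
```

Adding a field to the command later is a single, compiler-checked change rather than a new parameter threaded through every call site.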
- Centralize Exception Handling
  - Use `@ControllerAdvice` or `@RestControllerAdvice` with `@ExceptionHandler` methods
  - Return consistent error responses using the ProblemDetails format (RFC 9457)
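A plain-Java sketch of the RFC 9457 response shape; in the application this logic would live in a `@RestControllerAdvice` class returning Spring's `ProblemDetail`, and the `Problem` record and mapper below are illustrative stand-ins only:

```java
// Minimal stand-in for an RFC 9457 problem+json body. Spring code would normally
// return org.springframework.http.ProblemDetail from an @ExceptionHandler method.
record Problem(String type, String title, int status, String detail) { }

class ProblemMapper {

    // Hypothetical handler body: one exception type mapped to one consistent shape.
    static Problem from(IllegalArgumentException ex) {
        return new Problem("about:blank", "Bad Request", 400, ex.getMessage());
    }
}
```

Centralizing this mapping means every controller produces the same error envelope without repeating try/catch blocks.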
- Actuator Security
  - Expose only essential actuator endpoints (`/health`, `/info`, `/metrics`) without authentication
  - Secure all other actuator endpoints
- Internationalization with ResourceBundles
  - Externalize all user-facing text into ResourceBundles rather than embedding it in code
  - Enables proper localization support
- Use Testcontainers for Integration Tests
  - Spin up real services (databases, message brokers) in integration tests
  - Use specific Docker image versions, not the `latest` tag
- Use Random Port for Integration Tests
  - Annotate test classes with `@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)`
  - Avoids port conflicts in CI/CD environments
- Integration Test Naming Convention
  - All integration test classes must end with the "IntTest" suffix (e.g., `WorkflowFacadeIntTest.java`)
  - Ensures consistency and clarity between unit tests and integration tests
- Logging Best Practices
  - Use the SLF4J logging framework, never `System.out.println()`
  - Protect sensitive data - no credentials or personal information in logs
  - Guard expensive log calls with level checks or suppliers:
    `if (logger.isDebugEnabled()) { logger.debug("Detailed state: {}", computeExpensiveDetails()); }`
Default credentials:

- Admin: admin@localhost.com / admin
- User: user@localhost.com / user

Default ports:

- Server: 8080 (main application)
- API: 9555 (backend API server)
- Client: 3000 (development server)
- PostgreSQL: 5432
- Redis: 6379
- Mailhog: 1025
To add a new component:

1. Create a component directory in `server/libs/modules/components/`
2. Add the component to `settings.gradle.kts`
3. Implement `ComponentHandler` with actions/triggers
4. Add tests and run them to generate the JSON definition
5. Add documentation in README.md
6. Run `./gradlew generateDocumentation`
- Workflows are defined in JSON format
- Visual editor is available in the client application
- Workflow execution is handled by the Atlas engine
- Test workflows through the UI or API endpoints
- Use Liquibase for schema migrations
- Migration files are in `server/libs/config/liquibase-config/`
- Database changes are applied automatically on startup
- `Dockerfile` for the server application
- `docker-compose.yml` for the full stack
- `docker-compose.dev.infra.yml` for development infrastructure
- `docker-compose.dev.server.yml` for server-only development
- Helm charts are in `kubernetes/helm/bytechef-monolith/`
- Supports both monolith and microservices deployments
- GitHub Actions workflows for build and test
- Automated component documentation generation
- Code quality checks are enforced
- Port conflicts: Check if ports 5432, 6379, 1025 are in use
- Java version: Ensure Java 25+ is installed and JAVA_HOME is set
- Docker: Make sure Docker is running for infrastructure services
- Database schema: Use `docker compose down -v` to reset the database
- Gradle JVM is configured with a 4GB heap in `gradle.properties`
- Parallel builds are disabled by default but can be enabled
- Use `./gradlew --build-cache` for faster builds
This architecture provides a solid foundation for building scalable integration workflows while maintaining flexibility for both embedded and standalone deployment scenarios.