| Authority | CANONICAL |
|---|---|
| Version | v1 |
| Last Updated | 2026-03-26 |
| Owner | Sebastian (Architect) |
| Scope | SDK-first Ergo project convention, crate layout, and profile model |
| Change Rule | Tracks implementation |
This document explains the current v1 Ergo project model.
The primary product surface is the Rust SDK. A production Ergo
application is a Rust crate that depends on ergo-sdk-rust,
may register custom primitives in-process, and runs named profiles from
either ergo.toml on disk or an SDK-owned in-memory project snapshot.
The CLI remains a support tool for validation, replay, fixture runs, and other development conveniences. It is not the defining production surface.
ergo init now scaffolds this project shape directly.
Today that scaffold uses Python 3 sample channel programs for the generated live-profile examples. Projects can replace those sample boundary programs while keeping the same manifest layout.
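A generated sample ingress channel program can be pictured as a small stdin-to-stdout filter. The sketch below is illustrative only: the real scaffold's event envelope is not specified here, so the `seq` and `payload` field names are invented for the example, not the actual Ergo channel contract.

```python
#!/usr/bin/env python3
"""Illustrative sketch of an ingress channel program: reads one raw
record per stdin line and emits one JSON event per stdout line."""
import json
import sys


def line_to_event(line: str, seq: int) -> str:
    # Wrap the raw payload in a minimal envelope (field names are
    # placeholders, not the real Ergo event schema).
    return json.dumps({"seq": seq, "payload": line.rstrip("\n")})


def main() -> None:
    for seq, line in enumerate(sys.stdin):
        print(line_to_event(line, seq), flush=True)


if __name__ == "__main__":
    main()
```

Because the program is a plain line-oriented process, replacing it with another language or transport only requires keeping the same stdin/stdout framing the profile's ingress process expects.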
An Ergo project has two top-level manifests:
- `Cargo.toml`: Rust build, dependencies, and binary/library configuration.
- `ergo.toml`: Ergo project profiles, graph/adapter/channel wiring, and capture defaults.
These must stay separate.
Cargo.toml answers:
- how the Rust crate is built
- which crates it depends on
- which binary is executed
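As an illustration, a minimal `Cargo.toml` for such a project might look like the following. The crate name, version numbers, and the `ergo-sdk-rust` version requirement are all placeholders, not prescribed values:

```toml
[package]
name = "my-project"      # placeholder crate name
version = "0.1.0"
edition = "2021"

[dependencies]
# Version requirement is illustrative; pin to the release you target.
ergo-sdk-rust = "1"

[[bin]]
name = "my-project"
path = "src/main.rs"
```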
ergo.toml answers:
- which graph a profile runs
- which adapter a profile binds
- which ingress source a profile uses
- which egress config a profile references
- where capture output should go
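A hypothetical sketch of one profile answering those questions is shown below. The key names and table layout are assumptions inferred from the fields profiles resolve (graph, adapter, ingress source, egress, capture defaults), not a verbatim `ergo.toml` schema:

```toml
# Illustrative "live" profile; exact key names are assumptions.
[profiles.live]
graph = "graphs/main.yaml"
adapter = "adapters/default.toml"
ingress = { process = "python3 channels/ingress/sample.py" }
egress = "egress/default.toml"
capture_output = "captures/live.capture"
pretty_capture = true
```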
The v1 project layout is:
```
my-project/
├── README.md
├── Cargo.toml
├── ergo.toml
├── src/
│   ├── main.rs
│   └── implementations/
├── graphs/
├── clusters/
├── adapters/
├── channels/
│   ├── ingress/
│   └── egress/
├── egress/
├── fixtures/
└── captures/
```
Purpose of each area:
- `README.md`: Generated quick-start guide for the scaffolded project, including commands, profiles, and first-edit locations.
- `src/main.rs`: User-owned application entrypoint that builds an Ergo engine through the SDK.
- `src/implementations/`: Optional custom Source, Compute, Trigger, and Action implementations registered through `CatalogBuilder` / the SDK builder.
- `graphs/`: Graph YAML entrypoints.
- `clusters/`: Reusable cluster definitions. These are first-class authored artifacts and are discovered automatically by project resolution.
- `adapters/`: Adapter manifests defining accepted event/effect contracts.
- `channels/ingress/`: User-authored ingress channel programs.
- `channels/egress/`: User-authored egress channel programs.
- `egress/`: Standalone `EgressConfig` TOML files referenced by profiles.
- `fixtures/`: Deterministic input event streams.
- `captures/`: Replay artifacts produced by runs.
The intended ergonomic shape is:
```rust
let ergo = Ergo::builder()
    .project_root(".")
    .add_source(MySource::new())
    .add_action(MyAction::new())
    .build()?;

let outcome = ergo.run_profile("live")?;
```

Custom registration is optional. `CatalogBuilder` seeds the core primitive inventory by default, so projects can run with only stdlib primitives.
The SDK now supports two truthful project sources behind that same profile-facing surface:
- filesystem project resolution through `.project_root(...)`
- in-memory project snapshots through `.in_memory_project(...)`
Filesystem project/profile discovery remains loader-owned. The in-memory project/profile model is SDK-owned. Both resolve into the same host-owned execution, replay, validation, and manual-runner orchestration.
That means:
- user code owns primitive registration
- filesystem project/profile resolution is shared loader infrastructure consumed by the SDK
- in-memory project/profile resolution is an SDK-native product model with the same host-owned execution semantics
- canonical execution still delegates to host
- replay still delegates to host strict replay
The SDK should wrap host + loader ergonomically. It should not invent a second execution model.
run_profile() remains the raw blocking call. User-facing application
entrypoints should wire a StopHandle and use
run_profile_with_stop(...) when they need graceful operator stop,
including Ctrl-C handling for long-running profiles.
That is the default shape generated by ergo init: scaffolded apps use
the stop-aware path, while the lower-level raw call remains available
for callers that want to own signal handling and stop policy
themselves.
The built Ergo handle is same-thread reusable: run, run_profile,
replay, and validation operations borrow it, so one handle can back
multiple operations. That reuse also keeps the same registered
primitive instances alive behind the handle under the current
in-process trust model.
The SDK also exposes runner_for_profile(...) as a low-level manual
stepping surface over resolved profile assets. It still resolves a
normal run profile, so the profile must declare exactly one ingress
source even though manual stepping does not launch that ingress path.
Manual stepping honors graph, cluster paths, adapter, and egress,
but it ignores ingress, max_duration, and max_events.
finish() returns a CaptureBundle; explicit capture-file writing is a
separate SDK call, and only that explicit path applies
capture_output / pretty_capture.
Filesystem profiles live in ergo.toml.
SDK callers may also define profiles programmatically through
`InMemoryProjectSnapshot::builder(...)` and
`InMemoryProfileConfig::{process, fixture_items}(...)`.
Filesystem and in-memory profiles share the same product-facing operations, but their resolved shapes differ:
- Filesystem profiles resolve path-based fields from `ergo.toml`: `graph`, implicit project `clusters/`, optional `adapter`, exactly one ingress source (`fixture` or ingress `process` command), optional `egress`, and optional `capture_output` / `pretty_capture`.
- In-memory profiles resolve SDK-owned execution assets: `PreparedGraphAssets`, `InMemoryIngress::{Process, FixtureItems}`, optional `AdapterInput`, optional `EgressConfig`, and `ProfileCapture::{InMemory, File}`.
For SDK in-memory profiles, the same profile-facing operations
(run_profile, validate_project, runner_for_profile,
replay_profile_bundle) work identically. The transport difference is
internal to SDK resolution, not a second orchestration model.
The project clusters/ directory is always added to cluster search
paths automatically. Users should not repeat it in every profile.
One ingress channel per profile is the v1 limit. If a project needs multiple live feeds, it must multiplex them upstream into one ingress channel.
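One way to picture that upstream multiplexing is a small merge program that interleaves several feeds into the single stream the ingress channel consumes. This is not an Ergo API, just an illustrative sketch; a real multiplexer would typically merge by arrival time or timestamp rather than round-robin:

```python
import itertools


def multiplex(*feeds):
    """Round-robin merge of several event iterables into one stream.

    Illustration only: a production multiplexer for live feeds would
    merge by arrival order, not fixed round-robin.
    """
    sentinel = object()
    for batch in itertools.zip_longest(*feeds, fillvalue=sentinel):
        for item in batch:
            if item is not sentinel:
                yield item
```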
Custom implementations use the same trait surface as stdlib primitives.
The v1 loading mechanism is in-process Rust crate registration
through CatalogBuilder. That means:
- no dynamic library loading
- no WASM loading in v1
- no separate runtime plugin boundary
The project binary links the user’s primitives directly and registers them before run/validation/replay surfaces are built.
Clusters are not an advanced or deferred feature.
They are already part of the live runtime/loader path:
- loader discovers cluster files from search paths
- host loads the cluster tree before expansion
- runtime expands clusters away before execution
So the scaffold generated by ergo init includes:
- a sample cluster in `clusters/`
- a sample graph in `graphs/` that references that cluster
The CLI still matters, but as supporting tooling:
- `ergo init`
- validate
- replay
- fixture runs
- optional project-mode convenience commands
The CLI may consume the same project-resolution surface as the SDK, but it does not define the product model.