The flowctl init command creates Stellar data pipeline configurations through an interactive wizard or via command-line flags.
```
flowctl init [flags]
```

The `init` command guides you through creating a v1 pipeline configuration. Components are downloaded automatically from Docker Hub when you run the pipeline.
| Flag | Type | Default | Description |
|---|---|---|---|
| `--network` | string | (interactive) | Stellar network: `testnet` or `mainnet` |
| `--destination` | string | (interactive) | Sink type: `postgres`, `duckdb`, or `csv` |
| `--output`, `-o` | string | `stellar-pipeline.yaml` | Output filename |
| `--non-interactive` | bool | `false` | Skip interactive prompts (requires `--network` and `--destination`) |
| `-h`, `--help` | bool | `false` | Show help |
Global flags:

| Flag | Type | Default | Description |
|---|---|---|---|
| `--config` | string | `$HOME/.config/flowctl/flowctl.yaml` | Config file path |
| `--log-level` | string | `info` | Log level: `debug`, `info`, `warn`, `error` |
Run without flags for an interactive wizard:

```bash
./bin/flowctl init
```

The wizard prompts for:
1. Network Selection
   - `testnet` - Stellar test network (recommended for development)
   - `mainnet` - Stellar production network
2. Destination Selection
   - `duckdb` - Embedded analytics database (easiest setup)
   - `postgres` - PostgreSQL database (production-ready)
   - `csv` - CSV file output (simplest format)
For automation, CI/CD, or scripting:
```bash
# Basic usage
./bin/flowctl init --non-interactive --network testnet --destination duckdb

# Custom output file
./bin/flowctl init --non-interactive --network mainnet --destination postgres -o prod-pipeline.yaml
```

For example, `./bin/flowctl init --non-interactive --network testnet --destination duckdb` generates `stellar-pipeline.yaml`:
```yaml
apiVersion: flowctl/v1
kind: Pipeline
metadata:
  name: stellar-pipeline
  description: Process stellar contract events on testnet
spec:
  driver: process
  sources:
    - id: stellar-source
      type: stellar-live-source@v1.0.0
      config:
        network_passphrase: "Test SDF Network ; September 2015"
        backend_type: RPC
        rpc_endpoint: https://soroban-testnet.stellar.org
        start_ledger: 54000000
  processors:
    - id: contract-events
      type: contract-events-processor@v1.0.0
      config:
        network_passphrase: "Test SDF Network ; September 2015"
      inputs: ["stellar-source"]
  sinks:
    - id: duckdb-sink
      type: duckdb-consumer@v1.0.0
      config:
        database_path: ./stellar-pipeline.duckdb
      inputs: ["contract-events"]
```

Other non-interactive examples:

```bash
./bin/flowctl init --non-interactive --network mainnet --destination postgres -o mainnet.yaml
./bin/flowctl init --non-interactive --network testnet --destination csv -o export.yaml
```

When you run a pipeline created by `flowctl init`, components are automatically downloaded from Docker Hub on first run.
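For the `postgres` destination, the generated file swaps in the PostgreSQL sink component. The sketch below is a rough illustration, not documented output: the `postgres-sink@v1.0.0` type matches the component table in this document, but the `id` and `config` key names (such as `connection_string`) are placeholder assumptions.

```yaml
# Hypothetical postgres sink block; only the component type is taken from
# this document's component table. The config key names are assumptions.
sinks:
  - id: postgres-sink
    type: postgres-sink@v1.0.0
    config:
      connection_string: postgres://user:pass@localhost:5432/stellar  # placeholder
    inputs: ["contract-events"]
```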
| Component | Image | Type | Description |
|---|---|---|---|
| Stellar Live Source | `docker.io/withobsrvr/stellar-live-source:v1.0.0` | Source | Streams Stellar ledger data in real-time |
| DuckDB Consumer | `docker.io/withobsrvr/duckdb-consumer:v1.0.0` | Sink | Writes data to embedded DuckDB database |
| PostgreSQL Sink | `docker.io/withobsrvr/postgres-sink:v1.0.0` | Sink | Writes data to PostgreSQL database |
| CSV Sink | `docker.io/withobsrvr/csv-sink:v1.0.0` | Sink | Writes data to CSV files |
- Registry: Docker Hub (`docker.io`)
- Organization: `withobsrvr`
- Cache Location: `~/.flowctl/components/`
To pre-download components:
```bash
docker pull docker.io/withobsrvr/stellar-live-source:v1.0.0
docker pull docker.io/withobsrvr/duckdb-consumer:v1.0.0
docker pull docker.io/withobsrvr/postgres-sink:v1.0.0
docker pull docker.io/withobsrvr/csv-sink:v1.0.0
```

The `init` command generates pipelines with a three-stage architecture: source → processor → sink.
```yaml
apiVersion: flowctl/v1
kind: Pipeline
metadata:
  name: <generated-name>
  description: <generated-description>
spec:
  driver: process
  sources:
    - id: stellar-source
      type: stellar-live-source@v1.0.0
      config:
        network_passphrase: <network-passphrase>
        backend_type: RPC
        rpc_endpoint: <rpc-endpoint>
        start_ledger: <recent-ledger>
  processors:
    - id: contract-events
      type: contract-events-processor@v1.0.0
      config:
        network_passphrase: <network-passphrase>
      inputs: ["stellar-source"]
  sinks:
    - id: <sink-id>
      type: <sink-type>
      config:
        <sink-specific-config>
      inputs: ["contract-events"]
```

- driver: Uses the `process` driver to run components as local processes
- type: Component type with version (e.g., `stellar-live-source@v1.0.0`)
- processors: The `contract-events-processor` extracts Soroban contract events from raw ledgers
- inputs: Connects each component to its upstream data source
- Three-stage flow: Source produces ledgers → Processor extracts events → Sink stores data
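The wiring rule above (every `inputs` entry must name a component defined upstream) can be sketched as a small standalone check. This is not part of flowctl; the `validate_wiring` helper and the hand-built `spec` dict are illustrative, mirroring the shape of the generated YAML:

```python
# Illustrative check (not part of flowctl): verify that each processor and
# sink lists an already-defined upstream component in its `inputs`.
spec = {
    "sources": [{"id": "stellar-source", "type": "stellar-live-source@v1.0.0"}],
    "processors": [{"id": "contract-events",
                    "type": "contract-events-processor@v1.0.0",
                    "inputs": ["stellar-source"]}],
    "sinks": [{"id": "duckdb-sink",
               "type": "duckdb-consumer@v1.0.0",
               "inputs": ["contract-events"]}],
}

def validate_wiring(spec):
    """Return a list of wiring errors; empty means the pipeline wires cleanly."""
    known = {c["id"] for c in spec.get("sources", [])}
    errors = []
    # Walk stages in dataflow order so each stage can only reference upstream ids.
    for stage in ("processors", "sinks"):
        for comp in spec.get(stage, []):
            for upstream in comp.get("inputs", []):
                if upstream not in known:
                    errors.append(f"{comp['id']}: unknown input '{upstream}'")
            known.add(comp["id"])
    return errors

print(validate_wiring(spec))  # → [] (source → processor → sink wires cleanly)
```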
After creating a pipeline:
```bash
# Run the pipeline
./bin/flowctl run stellar-pipeline.yaml

# Run with debug logging
./bin/flowctl run stellar-pipeline.yaml --log-level=debug

# Dry run (validate only)
./bin/flowctl run --dry-run stellar-pipeline.yaml
```

After creating your first pipeline:
- Run it: `./bin/flowctl run stellar-pipeline.yaml`
- Monitor it: `./bin/flowctl dashboard`
- Add processors: Edit the YAML to add transformation stages
- Deploy to production: `./bin/flowctl translate -f pipeline.yaml -o docker-compose`