Deploy your own ZAP1 attestation instance on Zcash mainnet.
- Linux host with Docker
- Zebra 4.3.0+ synced to mainnet (RPC on 127.0.0.1:8232)
- Zaino scanner (optional, for compact block path - gRPC on 127.0.0.1:8137)
- Rust toolchain (for building the image and running keygen)
```bash
git clone https://github.com/Frontier-Compute/zap1.git
cd zap1

# Generate operator config (keys, .env, docker-compose, run script)
bash scripts/operator-setup.sh myoperator 3081

# Build the Docker image
docker build -t zap1:latest .

# Start
cd operators/myoperator && ./run.sh
```

The setup script generates:

- `.env` with UFVK, API key, Zebra/Zaino URLs, and listen port
- `.seed` with the spending seed (`chmod 600`; keep offline after setup)
- `docker-compose.yml` with a health check
- `run.sh` to start the container
```bash
# Health check
curl http://127.0.0.1:3081/health

# Protocol info
curl http://127.0.0.1:3081/protocol/info

# Run conformance checks against your instance
python3 conformance/check.py --url http://127.0.0.1:3081
python3 conformance/check_api.py --url http://127.0.0.1:3081
```

```
Zebra RPC (8232) --> ZAP1 API (3081) --> SQLite (/data/zap1.db)
Zaino gRPC (8137) -/      |
                          v
                     Merkle tree
                          |
                   anchor to Zcash
```
The ZAP1 server:
- Scans Zebra for shielded transactions matching your UFVK
- Detects paid invoices automatically
- Creates lifecycle events as Merkle leaves
- Periodically anchors the Merkle root to Zcash via a shielded memo
The server anchors automatically when:
- Unanchored leaves reach the threshold (default: 10), or
- Time since last anchor exceeds the interval (default: 24 hours)
On failure, exponential backoff kicks in: 5m, 10m, 20m, 40m, capped at 60m. Backoff resets on success.
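The scheduling and backoff rules above can be sketched as follows. This is a minimal illustration, assuming the stated defaults; the function names and structure are not the server's actual code.

```python
# Sketch of the anchor scheduling and retry-backoff rules.
from datetime import timedelta

LEAF_THRESHOLD = 10                    # default unanchored-leaf threshold
ANCHOR_INTERVAL = timedelta(hours=24)  # default max time between anchors
BASE_BACKOFF = timedelta(minutes=5)
MAX_BACKOFF = timedelta(minutes=60)

def should_anchor(unanchored_leaves: int, since_last_anchor: timedelta) -> bool:
    """Anchor when the leaf threshold OR the time interval is exceeded."""
    return (unanchored_leaves >= LEAF_THRESHOLD
            or since_last_anchor >= ANCHOR_INTERVAL)

def backoff_after(failures: int) -> timedelta:
    """Exponential backoff: 5m, 10m, 20m, 40m, capped at 60m."""
    return min(BASE_BACKOFF * (2 ** (failures - 1)), MAX_BACKOFF)
```

Note the cap: the fifth consecutive failure would double 40m to 80m, so it clamps to 60m and stays there until a success resets the counter.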
Configure via environment variables:
- `ANCHOR_ZINGO_CLI` - path to the zingo-cli binary
- `ANCHOR_TO_ADDRESS` - the anchor receiving address (generated by keygen)
- `ANCHOR_SERVER` - lightwalletd URL for zingo-cli
- `ANCHOR_DATA_DIR` - zingo wallet data directory
If the automated path is unavailable (wallet empty, zingo-cli sync broken):
```bash
# Generate a scannable QR code for any Zcash wallet (Zodl, etc.)
curl -H "Authorization: Bearer YOUR_API_KEY" http://127.0.0.1:3081/admin/anchor/qr
```

Scan the QR with your wallet, confirm the send (0.0001 ZEC), then record the anchor:
```bash
# After the transaction confirms, record it
cargo run --bin anchor_root -- record \
  --db /data/zap1.db \
  --root ROOT_HASH \
  --txid TXID \
  --height BLOCK_HEIGHT
```

Register a URL to receive POST notifications on events:
```bash
# Register a webhook
curl -X POST http://127.0.0.1:3081/webhooks/register \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://your-endpoint.example/hook"}'
```

Webhook payloads include:

- `leaf_created` - new event committed to the Merkle tree
- `anchor_confirmed` - root anchored to Zcash (includes txid, root, leaf count)
- `anchor_failed` - broadcast failed (includes reason, fail count, backoff duration)
Payloads are signed with a keyed BLAKE2b MAC. The secret is returned at registration time.
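A receiving endpoint can check the MAC before trusting a payload. The sketch below uses Python's built-in keyed BLAKE2b; the header name, hex encoding, and digest size are assumptions for illustration — consult your registration response for the exact scheme.

```python
# Webhook MAC verification sketch (keyed BLAKE2b).
# Assumptions: signature is hex-encoded; exact encoding/header are
# illustrative, not confirmed by the ZAP1 docs.
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """Compute a keyed BLAKE2b MAC over the raw request body."""
    return hashlib.blake2b(body, key=secret).hexdigest()

def verify_payload(secret: bytes, body: bytes, signature: str) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = sign_payload(secret, body)
    return hmac.compare_digest(expected, signature)
```

Call `verify_payload` with the secret returned at registration time, the raw (unparsed) request body, and the signature the server sent; reject the request if it returns `False`.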
Set `SIGNAL_API_URL` and `SIGNAL_NUMBER` in `.env` to receive Signal messages on anchor success and failure.
```bash
# Full operator status report (live API)
cargo run --bin zap1_ops -- --base-url http://127.0.0.1:3081

# JSON output for automated monitoring
cargo run --bin zap1_ops -- --base-url http://127.0.0.1:3081 --json

# Test against fixture data (offline)
cargo run --bin zap1_ops -- --from-dir examples/zap1_ops_fixture --json
```

The `zap1_ops` tool checks:
- Scanner sync lag (configurable threshold)
- Anchor staleness (configurable max age)
- Unanchored leaf queue depth
- Cross-consistency between /stats, /anchor/history, and /anchor/status
- Protocol version and network
Output is `ok`, `warn`, or `critical`, with details.
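A monitoring wrapper around the JSON output typically rolls per-check levels up into one overall status. This is a hypothetical sketch — the check names and JSON shape shown are assumptions, not `zap1_ops`' actual output format:

```python
# Roll up per-check severities into the worst overall status.
# Check names and input shape are illustrative assumptions.
SEVERITY = {"ok": 0, "warn": 1, "critical": 2}

def overall_status(checks: dict) -> str:
    """Return the worst severity among all check results."""
    return max(checks.values(), key=lambda s: SEVERITY[s], default="ok")
```

A cron job could exit nonzero whenever `overall_status` is not `ok` and page on `critical`.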
| Endpoint | Purpose |
|---|---|
| `/health` | Scanner status, sync lag, RPC reachability |
| `/stats` | Anchor count, leaf count, event type distribution |
| `/anchor/status` | Current root, unanchored leaves, anchor recommendation |
| `/anchor/history` | All anchored roots with txids and block heights |
The database is a single SQLite file at the path configured in `DB_PATH` (default: `/data/zap1.db`). Back it up with:

```bash
sqlite3 /data/zap1.db ".backup /path/to/backup.db"
```

The Merkle tree is deterministic: given the same events in the same order, any operator can reconstruct the identical tree and root hashes.
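The determinism property can be demonstrated with a toy binary Merkle tree. The hash function (BLAKE2b here) and the odd-node rule (promote the unpaired last node) are assumptions for illustration, not necessarily ZAP1's actual tree construction:

```python
# Toy deterministic Merkle tree: same leaves in the same order
# always yield the same root. Hashing details are illustrative.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash leaves, then pair-and-hash levels until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        # Hash adjacent pairs; an unpaired last node is promoted as-is
        nxt = [h(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Two operators feeding the same ordered event list into the same construction get byte-identical roots, which is what makes independently verifying an anchored root possible.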
| Variable | Required | Default | Description |
|---|---|---|---|
| `UFVK` | Yes | - | Unified full viewing key for scanning |
| `NETWORK` | Yes | - | Mainnet or Testnet |
| `ZEBRA_RPC_URL` | Yes | - | Zebra JSON-RPC endpoint |
| `LISTEN_ADDR` | Yes | - | API listen address (e.g., `127.0.0.1:3081`) |
| `DB_PATH` | Yes | - | SQLite database path |
| `API_KEY` | No | - | Bearer token for write endpoints |
| `ZAINO_GRPC_URL` | No | - | Zaino gRPC endpoint for compact block scanning |
| `ANCHOR_ZINGO_CLI` | No | - | Path to zingo-cli (enables auto-anchoring) |
| `ANCHOR_TO_ADDRESS` | No | - | Anchor receiving address |
| `ANCHOR_SERVER` | No | - | Lightwalletd URL for wallet sync |
| `ANCHOR_DATA_DIR` | No | - | Wallet data directory |
| `WEBHOOK_URL` | No | - | Anchor event webhook URL |
| `SIGNAL_API_URL` | No | - | Signal REST API URL |
| `SIGNAL_NUMBER` | No | - | Signal sender number |
| `SCAN_FROM_HEIGHT` | No | 0 | Start scanning from this block height |
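Putting the required variables together, a minimal mainnet `.env` might look like the following. All values are placeholders for illustration; `operator-setup.sh` generates the real file with your actual keys:

```shell
# Minimal mainnet .env — placeholder values only.
# operator-setup.sh generates the real file.
UFVK=uview1...                        # unified full viewing key (elided)
NETWORK=Mainnet
ZEBRA_RPC_URL=http://127.0.0.1:8232
LISTEN_ADDR=127.0.0.1:3081
DB_PATH=/data/zap1.db
API_KEY=change-me                     # optional: protects write endpoints
SCAN_FROM_HEIGHT=0                    # optional: scan start height
```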