Hierophant


Prove all things; hold fast that which is good.

Hierophant is an open-source ZK prover network that serves both SP1 and RISC Zero proof requests. It is built to be a drop-in replacement for a Succinct SP1 prover network endpoint as well as a drop-in replacement for a Bonsai RISC Zero proving endpoint.

SP1 is Succinct's zero-knowledge virtual machine (zkVM). RISC Zero is a separate zkVM with its own proof format and its own SDK. Hierophant was built to be directly compatible with op-succinct, and any program that uses the sp1-sdk or the bonsai-sdk to request proofs can instead use a Hierophant instance.

Hierophant reduces costs compared to centralized prover network offerings and preserves censorship resistance, making it well-suited for truly unstoppable applications.

Hierophant and Contemplant

"A hierophant is an interpreter of sacred mysteries and arcane principles."

The Hierophant is master of a self-hosted ZK prover network. The Hierophant receives proof requests and delegates them to any available Contemplants.

"Contemplant: one who contemplates."

The Contemplant is slave to a self-hosted ZK prover network. A Contemplant performs the actual work of generating a proof for a request forwarded from a Hierophant.

Quickstart

To get up and running quickly, we recommend visiting the Scriptory to utilize our prepared setup for easily running a Hierophant, Magister, and a number of Contemplants.

If you would like to run a simple Hierophant and Contemplant pair, we provide a setup here. Simply run make init, adjust your hierophant.toml and contemplant.toml files as desired, and run make run to use it. This will provide you with a working endpoint on the specified Docker network that you can use as the NETWORK_RPC_URL in programs that request proofs, such as the Supplicant. Note that you will also need to specify a NETWORK_PRIVATE_KEY.

You can test a full example of this by running make test-sp1 (or make test-risc0 for the RISC Zero variant). Inspect the corresponding test programs (SP1, RISC Zero) and compose files (SP1, RISC Zero).

Standalone Hierophant

You can build a native version of Hierophant via make build. You can supply configuration to this Hierophant as either environment variables, or through a hierophant.toml created with make init. Please observe the available configuration in hierophant.example.toml.

Running Hierophant by itself doesn't do anything. Hierophant is the master who receives proof requests and assigns them to be executed by Contemplants. There is always just one Hierophant for many Contemplants, and you must run at least one Contemplant to successfully execute proofs.

Once at least one Contemplant is connected, there is nothing left to do besides submit a proof request to the Hierophant. It will automatically route the proof to a Contemplant for work and return the proof when complete. Do note, however, that the execution loop of Hierophant is driven by receiving proof status requests from the sp1-sdk, so make sure to poll for proof status often. You likely don't have to worry about this, as services that request SP1 proofs poll frequently enough by default.

Multiple Configuration Files

If you're running in an environment with multiple configuration files (for example, running an integration test while debugging), you can specify a specific config file with -- --config <file>.

Hierophant Endpoints

Hierophant exposes several endpoints for basic status checking. They are all available on the HTTP port (default 9010).

curl --request GET --url http://127.0.0.1:9010/contemplants
curl --request GET --url http://127.0.0.1:9010/dead-contemplants
curl --request GET --url http://127.0.0.1:9010/proof-history
  • GET /contemplants: returns as JSON information on all Contemplants connected to this Hierophant. This includes IP, name, time alive, strikes, average proof completion time, the current in-progress proof, and progress on that proof.
  • GET /dead-contemplants: returns as JSON information on all Contemplants that have been dropped by this Hierophant. Reasons for drop could be network disconnection, not making fast enough progress, or returning incorrect data.
  • GET /proof-history: returns as JSON information on all proofs that this Hierophant has completed, including the Contemplant who was assigned to the proof, and the proof time.

Standalone Contemplant

You can build a native version of Contemplant via make build. You can supply configuration to this Contemplant as either environment variables, or through a contemplant.toml created with make init. Please observe the available configuration in contemplant.example.toml.

A Contemplant must have a Hierophant to connect to. If the machine running Contemplant does not have Docker, Contemplant has to be run with the enable-native-gnark feature. Otherwise, proofs will fail to verify. Building using this setting is the default behavior of our Makefile.

Contemplant also exposes a second cargo feature, enable-risc0-cuda, that links risc0-zkvm's CUDA kernels into the binary for in-process GPU RISC Zero proving. Turning this on produces a binary that requires a CUDA runtime at load time (supplied at deploy time by nvidia-container-runtime or an equivalent host CUDA stack), so leave it off for CPU-only or mixed-deployment images. A binary built with the feature won't load on hosts that don't provide CUDA. The Makefile turns enable-risc0-cuda on automatically when BACKEND=cuda is passed (e.g. make test-risc0 BACKEND=cuda) and leaves it off for the default BACKEND=cpu build.

SSH Access

To aid in debugging running Contemplant images, they make themselves accessible via SSH. Add your SSH keys to container/authorized_keys if you want SSH access inside Contemplant.

Prover VMs and Backends

A Contemplant declares which ZK VMs it serves, and with which backend, through a [[provers]] array in contemplant.toml. A single Contemplant may declare multiple entries so that it will serve either VM as it becomes idle. A Contemplant only processes one proof at a time regardless of how many entries it declares.

The available fields per entry are:

  • vm = "sp1" | "risc0" (required).
  • backend = "cpu" | "cuda" (default "cpu"). CPU uses no GPU and is significantly slower than CUDA. CUDA requires a CUDA-capable NVIDIA GPU on the host and for the container to be launched with GPU access (e.g. docker run --gpus all, or through nvidia-container-runtime). For SP1 specifically, the vendored moongate-server binary is compiled for Ada (sm_89), so a CUDA backend needs an RTX 40-series or newer card. A binary built with enable-risc0-cuda covers every NVIDIA architecture from Turing (sm_75) through Blackwell (sm_120).
  • moongate_endpoint = "http://host:3000/twirp/" (SP1 CUDA only, optional). When supplied, the Contemplant talks to an external moongate server at that address instead of spinning up a Dockerized moongate container. The URL must terminate in /twirp/ because moongate mounts its prover service router under that prefix. The Contemplant appends /twirp/ automatically when the configured URL does not already contain it, so either http://host:3000 or http://host:3000/twirp/ is accepted. Omit the endpoint to have the SP1 SDK spin up a Dockerized moongate container instead.
  • groth16_enabled = true | false (RISC Zero only, default false). Opts this worker into producing Groth16 wrapped proofs, the onchain verifiable flavor. Requires the 2.5 GB of vendored Groth16 prover assets baked into the Contemplant image by Dockerfile.contemplant.

See contemplant.example.toml for a complete annotated configuration.
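For illustration, a Contemplant that serves SP1 on a GPU through an external moongate server and RISC Zero on CPU with Groth16 wrapping might declare the following. Values here are placeholders; contemplant.example.toml is the authoritative annotated version:

```toml
[[provers]]
vm = "sp1"
backend = "cuda"
# Optional: talk to an external moongate server instead of having the SDK
# spin up a Dockerized one.
moongate_endpoint = "http://10.0.0.5:3000/twirp/"

[[provers]]
vm = "risc0"
backend = "cpu"
# Opt into producing onchain-verifiable Groth16-wrapped proofs.
groth16_enabled = true
```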

Progress Tracking Limitation

Progress tracking is only available for SP1 proofs using backend = "cuda" with a remote moongate_endpoint.

The following configurations do not support progress tracking:

  • CPU proving for either VM.
  • Dockerized SP1 CUDA proving (backend = "cuda" without moongate_endpoint).
  • RISC Zero proving in any configuration.
  • Mock proving (when proof requests have mock = true).

Important: Hierophant's worker_required_progress_interval_mins configuration defaults to 0 (disabled). If you want Hierophant to drop workers that don't report progress within a certain interval, you must:

  1. Ensure all your Contemplants serve SP1 with backend = "cuda" and a moongate_endpoint.
  2. Set worker_required_progress_interval_mins to a non-zero value in your Hierophant configuration.
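For example, a hierophant.toml that drops workers reporting no progress for ten minutes might contain the following (the value is illustrative; the key name is from the description above):

```toml
# Drop Contemplants that report no proof progress within this interval.
# Only meaningful when every Contemplant serves SP1 with backend = "cuda"
# and a moongate_endpoint; 0 (the default) disables the check.
worker_required_progress_interval_mins = 10
```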

Proof Modes

SP1 clients request one of four modes via the sp1-sdk proof builder: core (raw STARK, not EVM verifiable), compressed (recursive STARK), plonk (EVM verifiable Plonk SNARK), or groth16 (EVM verifiable Groth16 SNARK).

RISC Zero clients request one of three session modes via the Bonsai REST surface that Hierophant exposes at /bonsai/ on its HTTP port: composite (the default, raw STARK), succinct (recursive STARK in a single segment), or groth16 (direct onchain Groth16 seal, requires a groth16_enabled Contemplant). For the canonical Bonsai onchain flow, request a composite STARK session and then wrap it into a Groth16 seal with a separate POST /bonsai/snark/create call. The wrap also requires a groth16_enabled Contemplant.

The src/sp1-fibonacci/ and src/risc0-fibonacci/ integration tests exercise every mode on either VM. See make test-sp1 and make test-risc0 below.

Architecture

This is what the general network architecture will look like when following the quickstart and using the Scriptory.

Scriptory Diagram

State Scheme

Most state is handled in a non-blocking actor pattern where one thread holds state and others interact with it by sending messages to that thread. Any module in this repo that contains the files client.rs and command.rs follows this pattern. These modules are contemplant/proof_store, hierophant/artifact_store, and hierophant/worker_registry.

The flow of control for these modules is client method → command → handler → state update/read. To add a new state-touching function, first add a new command to the appropriate command.rs file. Let the Rust compiler guide you through the rest of the implementation. In the future we'd like to move to a more robust actor library like Actix or Ractor.

Contemplant

src/
├── api/
│   ├── connect_to_hierophant.rs       # WebSocket initialization with Hierophant
│   ├── http.rs                        # HTTP handlers
│   └── mod.rs
│
├── proof_executor/
│   ├── assessor.rs                    # Proof execution estimation assessment 
│   ├── executor.rs                    # Proof execution
│   └── mod.rs
│
├── proof_store/
│   ├── client.rs                      # Public proof store interface
│   ├── command.rs                     # ProofStore access commands
│   ├── mod.rs
│   └── store.rs                       # Local proof status storage
│
├── config.rs                          # Configuration 
├── main.rs                            # Entry point
├── message_handler.rs                 # Handles messages from Hierophant
└── worker_state.rs                    # Global Contemplant state

Hierophant

src/
├── api/
│   ├── grpc/
│   │   ├── create_artifact_service.rs     # ArtifactStore service handlers
│   │   ├── mod.rs
│   │   └── prover_network_service.rs      # ProverNetwork service handlers
│   │
│   ├── http.rs                            # HTTP handlers
│   ├── mod.rs
│   └── websocket.rs                       # WebSocket handlers
│
├── artifact_store/
│   ├── artifact_uri.rs                    # Artifact id struct
│   ├── client.rs                          # Public artifact store interface 
│   ├── command.rs                         # ArtifactStore access commands
│   ├── mod.rs
│   └── store.rs                           # Artifact reading & writing
│
├── proof/
│   ├── completed_proof_info.rs            # A proof struct
│   ├── mod.rs
│   ├── router.rs                          # Interface for assigning and retrieving proofs
│   └── status.rs                          # ProofStatus struct
│
├── worker_registry/
│   ├── client.rs                          # Public worker registry interface
│   ├── command.rs                         # WorkerRegistry access commands
│   ├── mod.rs
│   ├── registry.rs                        # Managing and communicating with Contemplants
│   └── worker_state.rs                    # WorkerState struct
│
├── config.rs                              # Hierophant configuration
├── hierophant_state.rs                    # Global Hierophant state
└── main.rs

proto/
├── artifact.proto                         # Protobuf definitions for ArtifactStore service
└── network.proto                          # Protobuf definitions for ProverNetwork service

Shared network-lib

src/
├── lib.rs                 # Shared structs
├── messages.rs            # Shared message types 
└── protocol.rs            # Shared protocol constants

Shared fibonacci

src/
└── lib.rs                 # no_std `fibonacci(n) -> (u32, u32)`

The src/fibonacci/ crate is a zero-dependency no_std implementation of fibonacci(n) that is shared by both the SP1 and RISC Zero integration test guests (and by their hosts, for cross-checking the committed journal or public values). It exists as the structural demonstration that identical business logic can be shared verbatim across ZK VMs in this repo. If you add a new cross-VM example or shared helper, put it here.

Developing

When making a breaking change in inter-Hierophant-Contemplant communication, increment the CONTEMPLANT_VERSION variable in network-lib/src/lib.rs. On each Contemplant connection, the Hierophant asserts that the CONTEMPLANT_VERSION matches.

If file structure is changed, kindly update the architecture tree for readability.

When a new version of SP1 is released, re-vendor all three SP1 assets (groth16.tar.gz, plonk.tar.gz, and moongate-server.tar.gz) under provers/sp1/<new-version>/. Moongate is Succinct's closed-source CUDA proof accelerator; its binary is extracted from their CUDA prover Docker image. Only sha256 checksums for all three are committed here; the actual tarballs live at supply-chain-hardened locations under ${VENDOR_BASE_URL}/sp1/<new-version>/. See provers/README.md for the full procedure. Then, update SP1_CIRCUITS_VERSION in .env.maintainer.

For RISC Zero, the analogous bump lives under provers/risc0/<docker-tag>/, driven by RISC0_GROTH16_PROVER_TAG; the same procedure is documented in provers/README.md.

Integration Tests

The integration tests are basic configurations that test minimal compatibility. Each one runs a Hierophant with one Contemplant and requests a single small fibonacci proof against a known answer.

There is one target per VM:

  • make test-sp1 runs the SP1 round trip. Set MODE=core|compressed|plonk|groth16 to pick the SP1 proof mode (default plonk), and BACKEND=cpu|cuda to pick the Contemplant backend (default cpu).
  • make test-risc0 runs the RISC Zero round trip. Set MODE=composite|succinct|groth16|groth16-direct to pick the proof mode (default composite), and BACKEND=cpu|cuda to pick the backend (default cpu).

The groth16 RISC Zero mode wraps a composite STARK into a Groth16 seal through the canonical two-step Bonsai flow. The groth16-direct mode asks the worker for a Groth16 seal directly, without the STARK wrap step; it is rarely used but exposed for completeness.

Both VMs share the same fibonacci(n) implementation under src/fibonacci/, which demonstrates that identical business logic can be compiled into either zkVM guest without modification.

Building

To build both the Hierophant and Contemplant native binaries, you need to install protoc and Go. Then, you should simply need to run make; you can see more in the Makefile. This will default to building with the maintainer-provided details from .env.maintainer, which we will periodically update as details change.

You can also build a Docker image using make docker, which uses a BUILD_IMAGE for building dependencies that are packaged to run in a RUNTIME_IMAGE. Configuration values in .env.maintainer may be overridden by specifying them as environment variables.

HIEROPHANT_NAME=hierophant
BUILD_IMAGE=registry.digitalocean.com/sigil/petros:latest make build
RUNTIME_IMAGE=debian:bookworm-slim@sha256:... make build

Building Container Images

You can build container images via either make docker or via make ci after building native binaries. Check the Makefile goals for more detailed information.

Configuration

Our configuration follows a zero-trust model where all sensitive configuration is stored on the self-hosted runner, not in GitHub. This section documents the configuration required for automated releases via GitHub Actions.

Running this project may require some sensitive configuration to be provided in .env and other files; you can generate the configuration files from the provided examples with make init. Review configuration files carefully and populate all required fields before proceeding.

Runner-Local Secrets

All automated build secrets must be stored on the self-hosted runner at /opt/github-runner/secrets/. These files are mounted read-only into the release workflow container; they are never stored in git.

Required Secrets

GitHub Access Tokens (for creating releases and pushing to GHCR):

  • ci_gh_pat - A GitHub fine-grained personal access token with repository permissions.
  • ci_gh_classic_pat - A GitHub classic personal access token for GHCR authentication.

Registry Access Tokens (for pushing container images):

  • do_token - A DigitalOcean API token with container registry write access.
  • dh_token - A Docker Hub access token.

GPG Signing Keys (for signing release artifacts):

  • gpg_private_key - A base64-encoded GPG private key for signing digests.
  • gpg_passphrase - The passphrase for the GPG private key.
  • gpg_public_key - The base64-encoded GPG public key (included in release notes).

Registry Configuration (registry.env file):

This file contains non-sensitive registry identifiers and build configuration:

# The Docker image to perform release builds with.
# If not set, defaults to unattended/petros:latest from Docker Hub.
# Examples:
#   BUILD_IMAGE=registry.digitalocean.com/sigil/petros:latest
#   BUILD_IMAGE=ghcr.io/your-org/petros:latest
#   BUILD_IMAGE=unattended/petros:latest
BUILD_IMAGE=unattended/petros:latest

# The runtime base image for the final container.
# If not set, uses the value from .env.maintainer.
# Example:
#   RUNTIME_IMAGE=debian:trixie-slim@sha256:66b37a5078a77098bfc80175fb5eb881a3196809242fd295b25502854e12cbec
RUNTIME_IMAGE=debian:trixie-slim@sha256:66b37a5078a77098bfc80175fb5eb881a3196809242fd295b25502854e12cbec

# The name of the DigitalOcean registry to publish the built image to.
DO_REGISTRY_NAME=

# The username of the Docker Hub account to publish the built image to.
DH_USERNAME=unattended

Public Configuration

Public configuration that anyone building this project needs is stored in the repository at .env.maintainer:

  • HIEROPHANT_NAME - The name of the Hierophant image.
  • CONTEMPLANT_NAME - The name of the Contemplant image.
  • BUILD_IMAGE - The builder image for compiling Rust code (default: unattended/petros:latest).
  • RUNTIME_IMAGE - The runtime base image (default: pinned debian:trixie-slim@sha256:...).
  • VENDOR_BASE_URL - The URL where large, specifically-vendored binaries are downloaded from.
  • SP1_CIRCUITS_VERSION - The SP1 release tag that drives the path for all SP1 vendor assets (provers/sp1/<version>/…). Pinned to match the sp1-sdk crate version in the workspace Cargo.toml.
  • RISC0_GROTH16_PROVER_TAG - The upstream risczero/risc0-groth16-prover docker-image tag whose contents are vendored under provers/risc0/<tag>/. Pinned to match the tag that the current risc0-groth16 crate version hardcodes.

This file is version-controlled and updated by maintainers as infrastructure details change.

Verifying Release Artifacts

All releases include GPG-signed artifacts for verification. Each release contains:

  • image-digests.txt - A human-readable list of container image digests.
  • image-digests.txt.asc - A GPG signature for the digest list.
  • ghcr-manifest.json / ghcr-manifest.json.asc - A GitHub Container Registry OCI manifest and signature.
  • dh-manifest.json / dh-manifest.json.asc - A Docker Hub OCI manifest and signature.
  • do-manifest.json / do-manifest.json.asc - A DigitalOcean Container Registry OCI manifest and signature.

Quick Verification

Download the artifacts and verify signatures:

# Import the GPG public key (base64-encoded in release notes).
echo "<GPG_PUBLIC_KEY>" | base64 -d | gpg --import

# Verify digest list.
gpg --verify image-digests.txt.asc image-digests.txt

# Verify image manifests.
gpg --verify ghcr-manifest.json.asc ghcr-manifest.json
gpg --verify dh-manifest.json.asc dh-manifest.json
gpg --verify do-manifest.json.asc do-manifest.json

Manifest Verification

The manifest files contain the complete OCI image structure (layers, config, metadata). You can use these to verify that a registry hasn't tampered with an image.

# Pull the manifest from the registry.
docker manifest inspect ghcr.io/unattended-backpack/...@sha256:... \
  --verbose > registry-manifest.json

# Compare to the signed manifest.
diff ghcr-manifest.json registry-manifest.json

This provides cryptographic proof that the image structure (all layers and configuration) matches what was signed at release time.

Cosign Verification

Images are also signed with cosign using GitHub Actions OIDC for keyless signing. This provides automated verification and build provenance.

To verify with cosign:

# Verify image signature (proves it was built by our workflow).
cosign verify ghcr.io/unattended-backpack/...@sha256:... \
  --certificate-identity-regexp='^https://github.com/unattended-backpack/.+' \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com

Cosign verification provides:

  • Automated verification (no manual GPG key management).
  • Build provenance (proves image was built by the GitHub Actions workflow).
  • Registry-native signatures (stored alongside images).

Note: Cosign depends on external infrastructure (GitHub OIDC, Rekor). For maximum trust independence, rely on the GPG-signed manifests as your ultimate root of trust.

Local Testing

This repository is configured to support testing the release workflow locally using the act tool. There is a corresponding goal in the Makefile, and instructions for further management of secrets here. This local testing file also shows how to configure the required secrets for building.

Security

If you discover any bug; flaw; issue; dæmonic incursion; or other malicious, negligent, or incompetent action that impacts the security of any of these projects please responsibly disclose them to us; instructions are available here.

License

The license for all of our original work is LicenseRef-VPL WITH AGPL-3.0-only. This includes every asset in this repository: code, documentation, images, branding, and more. You are licensed to use all of it so long as you maintain maximum possible virality and our copyleft licenses.

Permissive open source licenses are tools for the corporate subversion of libre software; visible source licenses are an even more malignant scourge. All original works in this project are to be licensed under the most aggressive, virulently-contagious copyleft terms possible. To that end everything is licensed under the Viral Public License coupled with the GNU Affero General Public License v3.0 for use in the event that some unaligned party attempts to weasel their way out of copyleft protections. In short: if you use or modify anything in this project for any reason, your project must be licensed under these same terms.

For art assets specifically, in case you want to further split hairs or attempt to weasel out of this virality, we explicitly license those under the viral and copyleft Free Art License 1.3.
