
Releases: ObolNetwork/lido-charon-distributed-validator-node

v0.3.4

22 Apr 15:03
78020e9


Summary

This is a highly recommended update for hoodi and mainnet operators, fixing a rarely-hit issue affecting some VCs and specific cluster combinations.

InfStones has also been removed from the list of default relays, as it has been shut down.

Breaking changes

If you are updating from v0.2.14 or newer (and you have set your ALERT_DISCORD_IDS= environment variable as requested in earlier updates), there are no additional action items for this release beyond the normal flow described below.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. Details below:

Note

Recent releases adjust how metrics are sent to Obol. You no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it is now generated at runtime by ./prometheus/run.sh, and the required variables are injected into the file from your .env. If you're updating from an older LCDVN version, the steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.
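
As an illustration of how this runtime generation works, the following is a simplified sketch, not the actual contents of ./prometheus/run.sh; the placeholder marker, token value, and template contents are all illustrative:

```shell
#!/bin/sh
# Simplified sketch of runtime config generation -- NOT the real
# ./prometheus/run.sh. Token and template contents are illustrative.
PROM_REMOTE_WRITE_TOKEN="obolExampleToken"   # normally sourced from .env

# A minimal stand-in template (the real prometheus.yml.example is larger).
cat > prometheus.yml.example <<'EOF'
remote_write:
  - url: https://example.invalid/api/prom/push
    authorization:
      credentials: __PROM_REMOTE_WRITE_TOKEN__
EOF

# Inject the token from the environment and emit the final config.
sed "s|__PROM_REMOTE_WRITE_TOKEN__|${PROM_REMOTE_WRITE_TOKEN}|" \
  prometheus.yml.example > prometheus.yml

# Prints the line containing the injected token.
grep 'credentials:' prometheus.yml
```

The key point is that prometheus.yml becomes a generated artifact, so local edits belong in the template and .env, not in the output file.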

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"
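
Docker Compose interpolates ${CL} and ${EL} from the same .env file, so these endpoints resolve to whichever clients you selected. A quick shell illustration (the client names here are hypothetical examples, not recommendations):

```shell
# Hypothetical client selections; your .env defines the real values.
CL=lighthouse
EL=nethermind

# Compose-style interpolation resolves the endpoints like this:
echo "CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052"
echo "CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545"
# → CHARON_BEACON_NODE_ENDPOINTS=http://lighthouse:5052
# → CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://nethermind:8545
```

Because the hostname doubles as the Compose service name, changing CL or EL repoints every dependent endpoint at once.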

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.
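
For example, to disable the bundled execution client, consensus client, and MEV-boost containers when you run your own externally, the relevant .env fragment (using the values named above) would be:

```
# Disable bundled containers in favour of external clients
EL=el-none
CL=cl-none
MEV=mev-none
```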

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.
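
A sketch of the corresponding .env fragment (the Loki address is a placeholder; use the URI the Obol team gives you):

```
MONITORING=${MONITORING:-monitoring},monitoring-log-collector
CHARON_LOKI_ADDRESSES=<URI provided by the Obol team>
```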

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.4
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure across three or more nodes.

What's Changed

New Contributors

Full Changelog: v0.3.3...v0.3.4

v0.3.3

02 Apr 12:30
aab05e9


Summary

This is a highly recommended update for hoodi and mainnet operators, fixing a memory leak in Charon.

Breaking changes

If you are updating from v0.2.14 or newer (and you have set your ALERT_DISCORD_IDS= environment variable as requested in earlier updates), there are no additional action items for this release beyond the normal flow described below.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. Details below:

Note

Recent releases adjust how metrics are sent to Obol. You no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it is now generated at runtime by ./prometheus/run.sh, and the required variables are injected into the file from your .env. If you're updating from an older LCDVN version, the steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.
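
Putting the token and alerting settings together, the relevant .env additions might look like this (the token and Discord IDs below are fabricated placeholders):

```
PROM_REMOTE_WRITE_TOKEN=obolExampleToken123
ALERT_DISCORD_IDS=123456789012345678,234567890123456789
CHARON_NICKNAME=my-friendly-node-name
```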

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.3
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure across three or more nodes.

What's Changed

Full Changelog: v0.3.2...v0.3.3

v0.3.2

26 Mar 09:31
2c6594b


Summary

This is an urgent update for hoodi and mainnet operators running Lighthouse.

Breaking changes

If you are updating from v0.2.14 or newer (and you have set your ALERT_DISCORD_IDS= environment variable as requested in earlier updates), there are no additional action items for this release beyond the normal flow described below.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. Details below:

Note

Recent releases adjust how metrics are sent to Obol. You no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it is now generated at runtime by ./prometheus/run.sh, and the required variables are injected into the file from your .env. If you're updating from an older LCDVN version, the steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.2
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure across three or more nodes.


What's Changed

  • fix(prometheus): quote external_labels values in YAML template by @apham0001 in #291
  • fix(loki): update config for Loki 3.x compatibility by @apham0001 in #292
  • chore(deps): update grafana/tempo docker tag to v2.10.2 by @renovate[bot] in #293
  • chore(deps): update ghcr.io/paradigmxyz/reth docker tag to v1.11.3 - autoclosed by @renovate[bot] in #290
  • chore(deps): update grafana/alloy docker tag to v1.14.0 by @renovate[bot] in #283
  • chore(deps): update grafana/grafana docker tag to v12.4.1 by @renovate[bot] in #282
  • fix: pin GitHub Actions to SHA for supply chain security by @apham0001 in #298
  • Bump LH v8.1.3 by @KaloyanTanev in #303

Full Changelog: v0.3.1...v0.3.2

v0.2.18

26 Mar 09:28
7253554


This is an urgent update for hoodi and mainnet operators running Lighthouse.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. This release includes Charon v1.8.2, and is recommended for clusters that have missed a proposal in slot 0 of an epoch recently. See below for migration notes.

There are two minor changes since v0.2.10: a bump of the reth, lighthouse, and lodestar versions, and an improvement to config parsing. If you have not previously bumped to v0.2.10, the following instructions apply.

The latest releases contain improved monitoring (swapping the deprecated promtail for alloy), alerting (add multiple Discord IDs to get tagged for issues directly), tracing (allowing us to better debug proposals), a new env variable CL_TARGET_PEERS to cap the number of peers a CL connects to, and updated client versions (all clients updated to their latest releases).
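
As an example of the new peer cap, the .env entry might look like this (the value 60 is an illustrative choice, not a recommendation):

```
# Cap the number of peers the consensus client connects to
CL_TARGET_PEERS=60
```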

Please note the mandatory adjustment of .env vars required since v0.2.3 if updating to this release.

Warning

Breaking Change

Recent releases adjust how metrics are sent to Obol. After this update, you will no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it will now be generated at runtime by ./prometheus/run.sh, and the required variables will be injected into the file from your .env. The steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We are now adding a dedicated way to add Discord IDs to your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.2.18
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure across three or more nodes.

Full Changelog: v0.2.17...v0.2.18

v0.3.1

11 Mar 21:59


Summary

This release introduces Charon v1.9.2. It is a strongly recommended update for node operators sending logs to Obol who are members of clusters that have been instructed to begin their upgrade to the Charon 1.9.* versions. If you are a node operator sending logs to Obol in a cluster still on v1.8.2, please update to LCDVN release v0.2.17 instead.

Operators sending metrics but not logs are unaffected by this patch Charon release.

Operators running a Prysm VC must add disable_duties_cache to their CHARON_FEATURE_SET_ENABLE= environment variable when updating to this release.
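
For instance, a Prysm operator's .env might contain the following (assuming, as with similar charon variables, that multiple feature flags are comma-separated; any other flags shown would be hypothetical):

```
# Required for Prysm VCs on this release
CHARON_FEATURE_SET_ENABLE=disable_duties_cache
```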

Breaking changes

If you are updating from v0.2.14 or newer (and you have set your ALERT_DISCORD_IDS= environment variable as requested in earlier updates), there are no additional action items for this release beyond the normal flow described below.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. Details below:

Note

Recent releases adjust how metrics are sent to Obol. You no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it is now generated at runtime by ./prometheus/run.sh, and the required variables are injected into the file from your .env. If you're updating from an older LCDVN version, the steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.1
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster has no single point of failure across three or more nodes.

What's Changed

Full Changelog: v0.3.0...v0.3.1

v0.2.17

11 Mar 21:50


This release introduces Charon v1.8.3 and is a strongly recommended update for node operators sending logs to Obol. (Nodes sending only metrics are unaffected by this patch.)

If you are updating from v0.2.14 or newer (and you have set your ALERT_DISCORD_IDS= environment variable as requested in earlier updates), there are no action items for this release. Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustment for those running v0.2.9 and earlier. Details below:

Note

Recent releases adjust how metrics are sent to Obol. You no longer have to stash and unstash changes to prometheus/prometheus.yml on every update: it is now generated at runtime by ./prometheus/run.sh, and the required variables are injected into the file from your .env. If you're updating from an older LCDVN version, the steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...

  3. We are now adding a dedicated way to add Discord IDs to your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made between this version and v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, run only docker compose up -d. To turn off certain containers (e.g. when you use an external EL/CL rather than the ones in this repo), set .env variables such as EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file, and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.
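After uncommenting, the logging-related lines in your .env would read as follows (the Loki URI shown is a placeholder for the one the Obol team provides):

```shell
# Enable the log-collector alongside the standard monitoring profile.
MONITORING=${MONITORING:-monitoring},monitoring-log-collector
# Log-shipping endpoint; use the URI supplied by the Obol team (placeholder shown).
CHARON_LOKI_ADDRESSES=https://logs.example.org/loki/api/v1/push
```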

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN from prometheus.yml and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.2.17
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster runs across three or more nodes with no single point of failure.

What's Changed

Full Changelog: v0.2.16...v0.2.17

v0.3.0

10 Mar 18:25

Choose a tag to compare

This is an urgent release for mainnet operators running Lighthouse, Grandine, or Reth, and for anyone experiencing an uptick in missed attestations. This release updates clients to resolve security issues uncovered in the Rust ecosystem, and introduces Charon's v1.9.1 release.

For operators updating from v0.2.13 through v0.2.16, no special changes are required to upgrade (you can remove the values you have set in CHARON_FEATURE_SET_ENABLE in your .env file, as they are now the default); use the normal update instructions below.

For operators jumping from older versions, the following instructions apply:

Breaking changes for Operators upgrading from a release older than v0.2.10

This release contains improved monitoring (swapping the deprecated promtail for alloy), improved alerting (add multiple Discord IDs to get tagged directly for issues), tracing (allowing us to better debug proposals), a new env variable CL_TARGET_PEERS to cap the number of peers a CL connects to, and updated client versions (all clients updated to their latest releases).

Please note the mandatory adjustment of .env vars required since v0.2.3 if updating to this release.

Warning

Breaking Change

This release adjusts how metrics are sent to Obol. After this update, you will no longer have to keep stashing and unstashing changes on prometheus/prometheus.yml every update. It will now be generated at runtime by ./prometheus/run.sh, and the required variables will be injected into the file from your .env. The steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...
  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env, and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.
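Conceptually, the runtime generation is a template substitution: run.sh reads variables from your .env and injects them into the prometheus.yml.example template to produce the final prometheus.yml. A minimal sketch of that idea follows; the template contents and the sed-based substitution are illustrative assumptions, not the actual script:

```shell
#!/usr/bin/env sh
# Sketch of .env-driven config generation, in the spirit of ./prometheus/run.sh.
# The template contents and substitution command are assumptions for illustration.
set -eu

# In the real setup this value is read from your .env file.
PROM_REMOTE_WRITE_TOKEN="obol-example-token"

# Stand-in for prometheus.yml.example, the base template.
cat > prometheus.yml.example <<'EOF'
remote_write:
  - url: https://prometheus.example/write
    authorization:
      credentials: ${PROM_REMOTE_WRITE_TOKEN}
EOF

# Substitute the variable into the template to produce the final config.
sed "s|\${PROM_REMOTE_WRITE_TOKEN}|${PROM_REMOTE_WRITE_TOKEN}|" \
  prometheus.yml.example > prometheus.yml

# The generated file now carries the token from the environment.
grep "credentials" prometheus.yml
```

Because the token lives only in .env, updates to the tracked template no longer collide with your local secret.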

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made to these variables since v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar containers may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, they should simply run docker compose up -d. Overrides to turn off certain containers (e.g. where you use an external EL/CL rather than the ones in this repo) should be done with .env file variables, for example by setting EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file, and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN from prometheus.yml and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.0
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster runs across three or more nodes with no single point of failure.

What's Changed

  • chore(deps): update grafana/grafana docker tag to v10.4.19 by @renovate[bot] in #243
  • chore(deps): update grafana/loki docker tag to v2.9.17 - autoclosed by @renovate[bot] in #244
  • chore(deps): update bitnamilegacy/node-exporter docker tag to v1.9.1 by @renovate[bot] in #246
  • chore(deps): update gcr.io/cadvisor/cadvisor docker tag to v0.55.1 by @renovate[bot] in #247
  • chore(deps): update grafana/alloy docker tag to v1.13.2 by @renovate[bot] in #248
  • chore(deps): update grafana/tempo docker tag to v2.10.1 by @renovate[bot] in #249
  • chore(deps): update prom/prometheus docker tag to v2.55.1 by @renovate[bot] in #250
  • chore(deps): update grafana/grafana docker tag to v12 by @renovate[bot] in #251
  • chore(deps): update grafana/loki docker tag to v3 by @renovate[bot] in #252
  • chore(deps): update ghcr.io/commit-boost/pbs docker tag to v0.9.3 by @renovate[bot] in #257
  • chore(deps): update offchainlabs/prysm-validator docker tag to v7.1.3 by @renovate[bot] in #262
  • chore(deps): update chainsafe/lodestar docker tag to v1.40.0 by @renovate[bot] in #263
  • chore(deps): update dependency obolnetwork/charon to v1.9.0 by @renovate[bot] in #264
  • chore(deps): update flashbots/mev-boost docker tag to v1.12.0 - autoclosed by @renovate[bot] in #265
  • chore(deps): update ghcr.io/paradigmxyz/reth docker tag to v1.11.1 by @renovate[bot] in #266
  • chore(deps): update prom/prometheus docker tag to v3 by @renovate[bot] in #267
  • Downgrade Prysm by @KaloyanTanev in #269
  • lighthouse version bump by @pinebit in #273
  • feat: add cluster_name and cluster_peer labels to Prometheus metrics by @apham0001 in #277
  • Bump Charon to v1.9.1 by @KaloyanTanev in #278
  • Bump Lighthouse v8.1.2 by @KaloyanTanev in #280
  • chore(deps): update nethermind/nethermind docker tag to v1.36.1 by @renovate[bot] in #279
  • chore(deps): update consensys/teku docker tag to v26.3.0 by @renovate[bot] in #276
  • chore(deps): update sifrai/grandine docker ...

v0.3.0-rc4

09 Mar 13:13

Choose a tag to compare

v0.3.0-rc4 Pre-release

This is a second urgent pre-release update for hoodi operators running Lighthouse. This release also introduces Charon's v1.9.0 release, and updates other clients to their latest versions. This release contains a multitude of performance improvements.

For operators updating from v0.2.13 through v0.2.15, no special changes are required to upgrade (you can remove the values you have set in CHARON_FEATURE_SET_ENABLE if you want, as they are now the default); use the normal update instructions below.

For operators jumping from older versions, the following instructions apply:

Breaking changes for Operators upgrading from a release older than v0.2.10

This release contains improved monitoring (swapping the deprecated promtail for alloy), improved alerting (add multiple Discord IDs to get tagged directly for issues), tracing (allowing us to better debug proposals), a new env variable CL_TARGET_PEERS to cap the number of peers a CL connects to, and updated client versions (all clients updated to their latest releases).

Please note the mandatory adjustment of .env vars required since v0.2.3 if updating to this release.

Warning

Breaking Change

This release adjusts how metrics are sent to Obol. After this update, you will no longer have to keep stashing and unstashing changes on prometheus/prometheus.yml every update. It will now be generated at runtime by ./prometheus/run.sh, and the required variables will be injected into the file from your .env. The steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...
  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env, and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made to these variables since v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar containers may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, they should simply run docker compose up -d. Overrides to turn off certain containers (e.g. where you use an external EL/CL rather than the ones in this repo) should be done with .env file variables, for example by setting EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file, and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN from prometheus.yml and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.0-rc3
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster runs across three or more nodes with no single point of failure.

What's Changed

Full Changelog: v0.3.0-rc3...v0.3.0-rc4

v0.2.16

09 Mar 13:10
b00f7c3

Choose a tag to compare

This is a second urgent update for hoodi and mainnet operators running Lighthouse.

Operators upgrading from a version older than v0.2.14 should note that there are breaking changes to monitoring that require environment variable adjustments for those running v0.2.9 and earlier. This release includes Charon v1.8.2, and is recommended for clusters that have recently missed a proposal in slot 0 of an epoch. See below for migration notes.

There are two minor changes since v0.2.10: bumps of the Reth, Lighthouse, and Lodestar versions, and an improvement to config parsing. If you have not previously upgraded to v0.2.10, the following instructions apply.

The latest releases contain improved monitoring (swapping the deprecated promtail for alloy), improved alerting (add multiple Discord IDs to get tagged directly for issues), tracing (allowing us to better debug proposals), a new env variable CL_TARGET_PEERS to cap the number of peers a CL connects to, and updated client versions (all clients updated to their latest releases).

Please note the mandatory adjustment of .env vars required since v0.2.3 if updating to this release.

Warning

Breaking Change

Recent releases adjust how metrics are sent to Obol. After this update, you will no longer have to keep stashing and unstashing changes on prometheus/prometheus.yml every update. It will now be generated at runtime by ./prometheus/run.sh, and the required variables will be injected into the file from your .env. The steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...
  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env, and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made to these variables since v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar containers may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, they should simply run docker compose up -d. Overrides to turn off certain containers (e.g. where you use an external EL/CL rather than the ones in this repo) should be done with .env file variables, for example by setting EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file, and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN from prometheus.yml and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.2.15
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster runs across three or more nodes with no single point of failure.

Full Changelog: v0.2.15...v0.2.16

v0.3.0-rc3

27 Feb 11:29

Choose a tag to compare

v0.3.0-rc3 Pre-release

This is an urgent pre-release update for hoodi operators running Lighthouse. This release also introduces Charon's v1.9.0 release, and updates other clients to their latest versions. This release contains a multitude of performance improvements.

For operators updating from v0.2.13 through v0.2.15, no special changes are required to upgrade (you can remove the values you have set in CHARON_FEATURE_SET_ENABLE if you want, as they are now the default); use the normal update instructions below.

For operators jumping from older versions, the following instructions apply:

Breaking changes for Operators upgrading from a release older than v0.2.10

This release contains improved monitoring (swapping the deprecated promtail for alloy), improved alerting (add multiple Discord IDs to get tagged directly for issues), tracing (allowing us to better debug proposals), a new env variable CL_TARGET_PEERS to cap the number of peers a CL connects to, and updated client versions (all clients updated to their latest releases).

Please note the mandatory adjustment of .env vars required since v0.2.3 if updating to this release.

Warning

Breaking Change

This release adjusts how metrics are sent to Obol. After this update, you will no longer have to keep stashing and unstashing changes on prometheus/prometheus.yml every update. It will now be generated at runtime by ./prometheus/run.sh, and the required variables will be injected into the file from your .env. The steps you need to take to handle this change are described below.

  1. If you currently have a locally modified prometheus/prometheus.yml, make a backup copy of it outside of the repository before upgrading (just in case). Ensure you record the value of PROM_REMOTE_WRITE_TOKEN from the authorization.credentials section of that file.

  2. This token must now be provided via your .env file instead of being defined in prometheus.yml. The variable PROM_REMOTE_WRITE_TOKEN is present (commented out) in all .env.sample.* files. To configure it, copy the variable to your own .env, uncomment it and set your token, for example:

PROM_REMOTE_WRITE_TOKEN=obolH7d...
  3. We have added a dedicated way to set Discord IDs on your deployment, allowing you to be @'d directly by our Obol Agent if there is an issue with your node. Add (uncomment) ALERT_DISCORD_IDS= in your .env, and specify one or more comma-separated IDs. To get the ID that corresponds to your Discord account, enable developer mode on Discord under User Settings > Advanced, then right-click a user's profile picture or name and select Copy ID to copy the unique 18-digit number that represents their account. If you previously changed your CHARON_NICKNAME to your Discord ID, you can set it back to a human-friendly name for your node.

  4. If you have any other custom modifications to your original prometheus.yml, compare it against prometheus.yml.example, which is now used as the base template for generating the final configuration. Any additional fields or customizations should be added to prometheus.yml.example.

  5. With your environment variables set and any other modifications ported to the prometheus.yml.example file, run docker compose up -d as normal. Confirm that your metrics are being received on the Obol Grafana. Once you have verified that the new Prometheus setup is working as expected, you can safely delete your backup of the old prometheus.yml.

Important

To maximise compatibility of environment variable interpolation across operating systems, changes have been made to these variables since v0.2.3.

Please set (uncomment) the following .env vars if you haven't already, or your charon, mev-boost, and charon-dv-exit-sidecar containers may not start properly.

(Copying the .env.sample.mainnet file afresh and re-applying any modifications you have made may be the least disruptive way to pick up these changes.)

CHARON_BEACON_NODE_ENDPOINTS=http://${CL}:5052
CHARON_EXECUTION_CLIENT_RPC_ENDPOINT=http://${EL}:8545
VE_BEACON_NODE_URL=http://${CL}:5052
VE_EXECUTION_NODE_URL=http://${EL}:8545
LIDO_DV_EXIT_BEACON_NODE_URL=http://${CL}:5052
CLUSTER_NAME="Your cluster name here"
CLUSTER_PEER="Your peer name here"

Important

Users should no longer run docker compose -f docker-compose.yml -f logging.yml -f docker-compose.override.yml to run their cluster; instead, they should simply run docker compose up -d. Overrides to turn off certain containers (e.g. where you use an external EL/CL rather than the ones in this repo) should be done with .env file variables, for example by setting EL=el-none, CL=cl-none, and MEV=mev-none. Read more about client swapping in our docs.

Logging can be enabled by uncommenting the line #MONITORING=${MONITORING:-monitoring},monitoring-log-collector in your .env file, and setting CHARON_LOKI_ADDRESSES to a URI given to you by the Obol team.

To update to this version, please run the following commands:

# Stop the node
docker compose down
# Copy your PROM_REMOTE_WRITE_TOKEN from prometheus.yml and put it in .env after stashing
git stash
# Update your local copy of this repo
git pull
# Checkout this release
git checkout v0.3.0-rc3
# You should no longer need to apply any stashed changes if you followed the instructions above
# Restart the node
docker compose up -d

Note

lido-charon-distributed-validator-node is a repo intended as a deployment guide and is not intended to be the canonical way to deploy a distributed validator.

Operators are encouraged to use this repository to build and maintain their own configurations that work for their individual use case. Please work with your squad to ensure your cluster runs across three or more nodes with no single point of failure.

What's Changed

  • chore(deps): update grafana/grafana docker tag to v10.4.19 by @renovate[bot] in #243
  • chore(deps): update grafana/loki docker tag to v2.9.17 - autoclosed by @renovate[bot] in #244
  • chore(deps): update bitnamilegacy/node-exporter docker tag to v1.9.1 by @renovate[bot] in #246
  • chore(deps): update gcr.io/cadvisor/cadvisor docker tag to v0.55.1 by @renovate[bot] in #247
  • chore(deps): update grafana/alloy docker tag to v1.13.2 by @renovate[bot] in #248
  • chore(deps): update grafana/tempo docker tag to v2.10.1 by @renovate[bot] in #249
  • chore(deps): update prom/prometheus docker tag to v2.55.1 by @renovate[bot] in #250
  • chore(deps): update grafana/grafana docker tag to v12 by @renovate[bot] in #251
  • chore(deps): update grafana/loki docker tag to v3 by @renovate[bot] in #252
  • chore(deps): update ghcr.io/commit-boost/pbs docker tag to v0.9.3 by @renovate[bot] in #257
  • chore(deps): update offchainlabs/prysm-validator docker tag to v7.1.3 by @renovate[bot] in #262
  • chore(deps): update chainsafe/lodestar docker tag to v1.40.0 by @renovate[bot] in #263
  • chore(deps): update dependency obolnetwork/charon to v1.9.0 by @renovate[bot] in #264
  • chore(deps): update flashbots/mev-boost docker tag to v1.12.0 - autoclosed by @renovate[bot] in #265
  • chore(deps): update ghcr.io/paradigmxyz/reth docker tag to v1.11.1 by @renovate[bot] in #266
  • chore(deps): update prom/prometheus docker tag to v3 by @renovate[bot] in #267
  • Downgrade Prysm by @KaloyanTanev in #269
  • lighthouse version bump by @pinebit in #273

Full Changelog: v0.2.14...v0.3.0-rc3