-
Notifications
You must be signed in to change notification settings - Fork 164
Expand file tree
/
Copy pathdefault.tmpl
More file actions
143 lines (132 loc) · 6.37 KB
/
default.tmpl
File metadata and controls
143 lines (132 loc) · 6.37 KB
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
# Autogenerated - DO NOT MODIFY THIS FILE DIRECTLY If you want to overwrite some
# of these values with your own customizations, please disable the alerts in the
# TUI and then add your own rule files to the <rocketpool-root>/alerting/rules
# directory.
#
# NOTE: This file uses non-default go template delimiters (triple braces) to avoid
# conflicts with the default delimiters used in the alerting rules.
groups:
- name: NodeOperator
rules:
{{{- if .AlertEnabled_ClientSyncStatusBeacon.Value }}}
- alert: ClientSyncStatusBeacon
expr: rocketpool_node_sync_progress{client="beacon"} < 1.0
for: 5m
labels:
severity: critical
annotations:
summary: "The beacon client is not synced"
{{{- end }}}
{{{- if .AlertEnabled_ClientSyncStatusExecution.Value }}}
- alert: ClientSyncStatusExecution
expr: rocketpool_node_sync_progress{client="execution"} < 1.0
for: 5m
labels:
severity: critical
annotations:
summary: "The execution client is not synced"
{{{- end }}}
{{{- if .AlertEnabled_UpcomingSyncCommittee.Value }}}
- alert: UpcomingSyncCommittee
expr: rocketpool_beacon_upcoming_sync_committee > 0
labels:
severity: warning
job: validator
annotations:
summary: "Your Rocket Pool node is about to become part of a sync committee"
description: |
If you were planning on doing maintenance to your node, **you should wait until the sync committee is over**. Not only are they worth an **extremely** large amount of ETH, but if you miss attestations during a sync committee, you **lose an extremely large amount of ETH** instead!
You should be online as long as possible while you are in a sync committee.
{{{- end }}}
{{{- if .AlertEnabled_ActiveSyncCommittee.Value }}}
- alert: ActiveSyncCommittee
expr: rocketpool_beacon_active_sync_committee > 0
labels:
severity: warning
job: validator
annotations:
summary: "Your Rocket Pool node is part of a sync committee"
description: |
If you were planning on doing maintenance to your node, **you should wait until the sync committee is over**. Not only are they worth an **extremely** large amount of ETH, but if you miss attestations during a sync committee, you **lose an extremely large amount of ETH** instead!
You should be online as long as possible while you are in a sync committee.
{{{- end }}}
{{{- if .AlertEnabled_UpcomingProposal.Value }}}
- alert: UpcomingProposal
expr: rocketpool_beacon_upcoming_proposals > 0
labels:
severity: warning
job: validator
annotations:
summary: "Your Rocket Pool node is about to propose a block"
description: |
You have {{ $value }} block proposals coming up in the next few minutes. If you were planning on taking your node down for maintenance, you should wait until after the proposals because they're worth a lot of ETH!
{{{- end }}}
{{{- if .AlertEnabled_RecentProposal.Value }}}
- alert: RecentProposal
expr: rocketpool_beacon_recent_proposals > 0
# note: 384s = 12s slot time * 32 slots per epoch: This should prevent the alert from refiring during a single epoch
for: 384s
labels:
severity: info
job: validator
annotations:
summary: "Your Rocket Pool node proposed a block"
description: |
Your node proposed {{ $value }} blocks a recent epoch.
{{{- end }}}
{{{- if .AlertEnabled_LowDiskSpaceWarning.Value }}}
- alert: LowDiskSpaceWarning
expr: node_filesystem_avail_bytes{job="node", mountpoint="/"} / 1024^3 < 200
labels:
severity: warning
job: node
annotations:
summary: "Device {{ $labels.device }} on instance {{ $labels.instance }} is getting low on disk space"
description: "{{ $labels.instance }} has low disk space. Currently has {{ humanize $value }} GB free."
{{{- end }}}
{{{- if .AlertEnabled_LowDiskSpaceCritical.Value }}}
- alert: LowDiskSpaceCritical
# NOTE: 50GB taken from PruneFreeSpaceRequired in rocketpool-cli's nethermind pruning (it won't prune below 50GB)
expr: node_filesystem_avail_bytes{job="node", mountpoint="/"} / 1024^3 < 50
labels:
severity: critical
job: node
annotations:
summary: "Device {{ $labels.device }} on instance {{ $labels.instance }} has critically low disk space"
description: "{{ $labels.instance }} has critically low disk space. Currently has {{ humanize $value }} GB free."
{{{- end }}}
{{{- if .AlertEnabled_OSUpdatesAvailable.Value }}}
- alert: OSUpdatesAvailable
expr: max(os_upgrades_pending{job="node"}) > 0
labels:
severity: warning
job: node
annotations:
summary: "Rocket Pool OS Updates Available"
description: |
There are updates available for your OS that haven't been applied yet. You should update your OS.
For more information on updating see the documentation at https://docs.rocketpool.net/node-staking/updates#updating-your-operating-system
{{{- end }}}
{{{- if .AlertEnabled_RPUpdatesAvailable.Value }}}
- alert: RPUpdatesAvailable
expr: max(rocketpool_version_update{job="node"}) > 0
labels:
severity: warning
job: node
annotations:
summary: "Rocket PoolSmart Node Update Available"
description: |
There are updates available for the Rocket PoolSmart Node that haven't been applied yet. You should update the smartnode stack.
For more information on updating see the documentation at https://docs.rocketpool.net/node-staking/updates#updating-the-smartnode-stack
{{{- end }}}
{{{- if .AlertEnabled_LowETHBalance.Value }}}
- alert: LowETHBalance
expr: rocketpool_node_balance{job="rocketpool",Token="ETH"} < scalar(rocketpool_node_low_eth_balance_threshold)
for: 60m
labels:
severity: critical
job: node
annotations:
summary: "Low ETH Balance"
description: "The node ETH balance is low."
{{{- end }}}