Description
What broke? What's expected?
When using kubebuilder edit --plugins=helm/v2-alpha, the plugin generates the Helm chart but only converts the ClusterRole from config/rbac/role.yaml. Namespace-scoped Roles (used for leader election, cross-namespace permissions, etc.) are silently ignored and not included in the generated Helm chart.
This causes runtime RBAC permission errors when the operator is deployed via Helm, even though the same operator works fine when deployed with make deploy (Kustomize).
Expected behavior: The Helm plugin should generate templates for all RBAC resources in config/rbac/role.yaml, including:
- ClusterRole → dist/chart/templates/rbac/manager-role.yaml ✅ (works)
- Namespace-scoped Roles → dist/chart/templates/rbac/&lt;namespace&gt;-role.yaml ❌ (missing)
- Corresponding RoleBindings for each Role ❌ (missing)
Reproducing this issue
I have these RBAC markers in my controller:
//+kubebuilder:rbac:groups=apps,namespace=infrastructure,resources=deployments,verbs=get;list;watch;update;patch
//+kubebuilder:rbac:groups=coordination.k8s.io,namespace=production,resources=leases,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups="",namespace=production,resources=events,verbs=create;patch;update

The RBAC manifests generated via make manifests are correct and include both the ClusterRole and the namespace-scoped Roles.

config/rbac/role.yaml:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups: ["example.com"]
  resources: ["myresources"]
  verbs: ["get", "list", "watch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: infrastructure
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manager-role
  namespace: production
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch", "update"]

But when I run kubebuilder edit --plugins=helm/v2-alpha --force to generate the Helm chart:
ls dist/chart/templates/rbac/
# Output:   manager-role.yaml, manager-rolebinding.yaml
# Expected: manager-role.yaml, infrastructure-role.yaml, production-role.yaml, + their bindings

Only the ClusterRole is converted. The two namespace-scoped Roles are completely missing from the Helm chart.
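For anyone debugging this, a minimal dependency-free Go sketch (hypothetical diagnostic code, not part of kubebuilder) shows that config/rbac/role.yaml really does contain multiple documents that can be split and inspected; this per-document walk is essentially what the plugin's conversion appears to skip:

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocs splits a multi-document YAML stream on "---" separators.
func splitDocs(data string) []string {
	var docs []string
	for _, d := range strings.Split(data, "\n---") {
		d = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(d), "---"))
		if d != "" {
			docs = append(docs, d)
		}
	}
	return docs
}

// kindAndNamespace does a line-wise scan for the top-level "kind:" key and
// the indented "namespace:" key under metadata. A real implementation would
// use a YAML parser; this is only a diagnostic sketch.
func kindAndNamespace(doc string) (kind, ns string) {
	for _, line := range strings.Split(doc, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(line, "kind:") { // top-level key, unindented
			kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
		}
		if strings.HasPrefix(trimmed, "namespace:") {
			ns = strings.TrimSpace(strings.TrimPrefix(trimmed, "namespace:"))
		}
	}
	return kind, ns
}

func main() {
	// Trimmed-down stand-in for the contents of config/rbac/role.yaml.
	roleYAML := `---
kind: ClusterRole
metadata:
  name: manager-role
---
kind: Role
metadata:
  name: manager-role
  namespace: infrastructure
---
kind: Role
metadata:
  name: manager-role
  namespace: production
`
	for _, doc := range splitDocs(roleYAML) {
		kind, ns := kindAndNamespace(doc)
		fmt.Printf("%s ns=%q\n", kind, ns)
	}
	// Prints one line per document:
	//   ClusterRole ns=""
	//   Role ns="infrastructure"
	//   Role ns="production"
}
```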
When I deploy with Helm, the operator starts but immediately fails with permission errors:
E0104 12:33:40.422706 1 leaderelection.go:448] error retrieving resource lock production/a3ef19fc.example.com:
leases.coordination.k8s.io "a3ef19fc.example.com" is forbidden:
User "system:serviceaccount:production:myoperator-controller-manager" cannot get resource "leases"
in API group "coordination.k8s.io" in the namespace "production"
This happens because the leader election Role with lease permissions was not included in the Helm chart.
Workaround
Manually create the missing Role and RoleBinding templates in dist/chart/templates/rbac/:
infrastructure-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "chart.fullname" . }}-infrastructure
  namespace: infrastructure
  labels:
    {{- include "chart.labels" . | nindent 4 }}
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update", "watch"]

infrastructure-rolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "chart.fullname" . }}-infrastructure
  namespace: infrastructure
  labels:
    {{- include "chart.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "chart.fullname" . }}-infrastructure
subjects:
- kind: ServiceAccount
  name: {{ include "chart.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}

Repeat for each namespace-scoped Role.
Note: These manual files will be overwritten if you regenerate the chart with kubebuilder edit --plugins=helm/v2-alpha --force.
Additional Context
The root cause appears to be in pkg/plugins/optional/helm/v2alpha/scaffolds/internal/kustomize/helm_templater.go - the RBAC conversion logic likely only processes the first YAML document or only looks for kind: ClusterRole.
The plugin should:
- Parse all YAML documents in config/rbac/role.yaml (not just the first one)
- Generate separate template files for each namespace-scoped Role
- Templatize namespace fields appropriately:
  - ClusterRole: no namespace
  - Role: namespace: {{ .Release.Namespace }}, or keep the explicit namespace for cross-namespace scenarios
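The steps above could be sketched roughly as follows. templatePath is a hypothetical helper name, not the plugin's actual API; it only illustrates the kind/namespace → output-file mapping the conversion would need:

```go
package main

import "fmt"

// templatePath maps an RBAC document's kind and namespace to a chart template
// file under dist/chart/templates/rbac/. Hypothetical sketch: the real fix
// would also templatize the namespace field inside each generated template.
func templatePath(kind, namespace string) string {
	switch kind {
	case "ClusterRole":
		return "dist/chart/templates/rbac/manager-role.yaml"
	case "ClusterRoleBinding":
		return "dist/chart/templates/rbac/manager-rolebinding.yaml"
	case "Role":
		return fmt.Sprintf("dist/chart/templates/rbac/%s-role.yaml", namespace)
	case "RoleBinding":
		return fmt.Sprintf("dist/chart/templates/rbac/%s-rolebinding.yaml", namespace)
	}
	return "" // unknown kind: skip
}

func main() {
	// One output file per document in config/rbac/role.yaml, plus bindings.
	fmt.Println(templatePath("ClusterRole", ""))
	fmt.Println(templatePath("Role", "infrastructure"))
	fmt.Println(templatePath("Role", "production"))
	fmt.Println(templatePath("RoleBinding", "production"))
}
```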
This is distinct from issue #5148 (Kustomize namespace override). That issue affects make deploy, while this Helm plugin bug affects helm install/upgrade.
KubeBuilder (CLI) Version
4.10.1 (project scaffolded with 4.9.0)
PROJECT version
3
Plugin versions
- go.kubebuilder.io/v4
plugins:
  helm.kubebuilder.io/v2-alpha:
    manifests: dist/install.yaml
    output: dist

Other versions
- Go Version: 1.25.5
Extra Labels
No response