📋 Prerequisites
🎯 Affected Service(s)
Multiple services / System-wide issue
🚦 Impact/Severity
Minor inconvenience
🐛 Bug Description
I'm configuring my k3s setup with `mycluster.k8s.somedomain.net` as my `cluster-domain`.
This means `servicename.somenamespace.svc.cluster.local` does not resolve, but `servicename.somenamespace.svc.mycluster.k8s.somedomain.net` and `servicename.somenamespace.svc` do.
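For context, this is because the kubelet derives a pod's DNS search path from the configured `cluster-domain`, so a pod's `/etc/resolv.conf` in my cluster looks roughly like this (illustrative sketch; the nameserver IP is the default k3s service CIDR):

```
search somenamespace.svc.mycluster.k8s.somedomain.net svc.mycluster.k8s.somedomain.net mycluster.k8s.somedomain.net
nameserver 10.43.0.10
options ndots:5
```

Short names like `servicename.somenamespace.svc` resolve via these search suffixes, while the hardcoded `cluster.local` suffix matches nothing.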
In a minimal deployment from the Helm chart, I manually edited the resulting Deployments and removed `cluster.local` from `NEXT_PUBLIC_BACKEND_URL` and `POSTGRES_DATABASE_URL`. This complicates upgrades, and would require hacks on every automated push, such as via ArgoCD.
Ideally, either the Helm charts would expose the cluster domain in `values.yaml`, or they would omit it entirely and rely on search domains being set up correctly in CoreDNS (as I did above).
I have not yet reached a functioning system and will postpone that for later, so I don't know whether this assumption about `cluster-domain` has propagated anywhere else, but a quick search revealed at least one YAML file in the `tools` directory that mentions `cluster.local`.
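One way the charts could expose this (a hypothetical sketch; `global.clusterDomain` is not an existing chart value, and the URL shape is illustrative):

```yaml
# Hypothetical addition to the chart's values.yaml (not an existing option):
global:
  clusterDomain: cluster.local  # default; override for clusters with a custom domain

# A Deployment template could then render service URLs from it, e.g.:
#   value: "http://{{ .Release.Name }}.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain }}"
# so users like me could simply set:
#   global:
#     clusterDomain: mycluster.k8s.somedomain.net
```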
🔄 Steps To Reproduce
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  namespace: kagent
  name: kagent-crds
spec:
  targetNamespace: kagent
  createNamespace: true
  version: "0.9.0" # https://github.com/kagent-dev/kagent/pkgs/container/kagent%2Fhelm%2Fkagent-crds
  repo: ghcr.io/kagent-dev
  chart: kagent/helm/kagent-crds
  valuesContent: |
    # n/a
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  namespace: kagent
  name: kagent
spec:
  targetNamespace: kagent
  createNamespace: true
  version: "0.9.0" # https://github.com/kagent-dev/kagent/pkgs/container/kagent%2Fhelm%2Fkagent
  repo: ghcr.io/kagent-dev
  chart: kagent/helm/kagent
  valuesContent: |
    providers:
      default: ollama
      # default: openAI
      openAI:
        apiKey: 12345
      # gemini, anthropic, azureOpenAI
    controller:
      env: {}
      # envFrom: [ { secretRef: { name: controller-secrets } } ]
    ipv6:
      enabled: true
```
Note: this is actually processed by an ArgoCD ApplicationSet, but applying it on k3s with `kubectl apply -f` should have a comparable result. As noted, k3s was initialized with a custom `cluster-domain`.
🤔 Expected Behavior
No response
📱 Actual Behavior
No response
💻 Environment
No response
🔧 CLI Bug Report
No response
🔍 Additional Context
No response
📋 Logs
📷 Screenshots
No response
🙋 Are you willing to contribute?