Bug Description
I'm not sure whether this is a bug exactly, but forward actions with weighted target groups appear to behave differently for ALB and NLB.
For ALB, I can reuse the same service names / service ports in the target group specifications of multiple load balancer Ingress definitions (e.g. internet-facing and internal). For NLB, reusing the same service names / service ports in the target group specifications of multiple load balancer Service definitions instead fails with `TargetGroupAssociationLimit: The following target groups cannot be associated with more than one load balancer`.
I haven't had the opportunity to investigate in depth yet, but wanted to check here as well. As far as I can see, each target group should be tagged with a load-balancer-based stack name, so referencing the same targets from several load balancers should create new target groups; perhaps the binding mechanism isn't differentiating between them and is attempting to bind a single target group to multiple load balancers. Or perhaps I'm misunderstanding something about NLB specifically and this simply isn't possible for another technical reason.
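If it helps triage, the hypothesis above can be checked from the AWS side by listing the target groups the controller created for the shared backend services and seeing which load balancer each one is attached to. This is only a generic sketch: the name prefix (`k8s-clusteri-`) and region are taken from the error output below, and `<target-group-arn>` is a placeholder.

```sh
# List controller-created target groups and show which load balancer
# (if any) each one is currently associated with.
aws elbv2 describe-target-groups \
  --region eu-west-2 \
  --query "TargetGroups[?starts_with(TargetGroupName, 'k8s-clusteri-')].[TargetGroupName,TargetGroupArn,LoadBalancerArns]" \
  --output table

# Inspect the controller's stack tags on a specific target group to see
# whether one target group per load balancer stack was actually created.
aws elbv2 describe-tags \
  --resource-arns <target-group-arn> \
  --query "TagDescriptions[].Tags"
```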
Steps to Reproduce
- create two ClusterIP services; in our case ingress-nginx-alb-controller and nginx-ingress-controller
- create two LoadBalancer services (NLB); in our case aws-load-balancers-nlb-external and aws-load-balancers-nlb-internal
- add the actions.TCP-443 annotation to both load balancer service definitions (a minimal manifest sketch follows this list), e.g.:
{
  "type": "forward",
  "forwardConfig": {
    "baseServiceWeight": 0,
    "targetGroups": [
      {
        "serviceName": "ingress-nginx-alb-controller",
        "servicePort": 443,
        "weight": 50
      },
      {
        "serviceName": "nginx-ingress-controller",
        "servicePort": 443,
        "weight": 50
      }
    ],
    "targetGroupStickinessConfig": {
      "enabled": true,
      "durationSeconds": 15
    }
  }
}
- reconciliation fails with the following error:
aws-load-balancer-controller-8cbf8cc89-6fnn5 aws-load-balancer-controller {"level":"error","ts":"2025-12-31T10:57:13Z","msg":"Reconciler error","controller":"service","namespace":"cluster-ingress","name":"aws-load-balancers-nlb-internal","reconcileID":"d979ae13-d2af-4b3a-8f96-52045c065fa4","error":"operation error Elastic Load Balancing v2: ModifyListener, https response error StatusCode: 400, RequestID: 89d757fb-003c-4262-8c4c-08ff669e6290, TargetGroupAssociationLimit: The following target groups cannot be associated with more than one load balancer: arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-nginxing-31876d1c06/95acc1de082ce953, arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-ingressn-d0c0eab272/75e99e51e02155f4"}
- the same approach seems to work fine for ALB
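For reference, here is a minimal sketch of one of the two Service manifests, trimmed to the fields relevant to the reproduction (names, namespace, selector, and annotation values are taken from the describe output further down; the real services also carry TCP-80, subnet, EIP, and attribute annotations):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: aws-load-balancers-nlb-external
  namespace: cluster-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/actions.TCP-443: >
      {"type":"forward","forwardConfig":{"baseServiceWeight":0,
      "targetGroups":[{"serviceName":"ingress-nginx-alb-controller","servicePort":443,"weight":50},
      {"serviceName":"nginx-ingress-controller","servicePort":443,"weight":50}],
      "targetGroupStickinessConfig":{"enabled":true,"durationSeconds":15}}}
spec:
  type: LoadBalancer
  selector:
    app: nginx-alb-ingress-controller
  ports:
    - name: tcp-443
      port: 443
      targetPort: 443
# The internal twin (aws-load-balancers-nlb-internal) is identical except for
# metadata.name and aws-load-balancer-scheme: internal.
```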
Expected Behavior
Separate target groups and target group bindings should be created for each NLB.
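Concretely, assuming the controller materializes one TargetGroupBinding per target group per load balancer stack, I would expect listing the bindings in the namespace to show two per backend service, each pointing at a distinct target group ARN (one per NLB). A sketch of the check:

```sh
# Expectation: two TargetGroupBindings per backend service, each with a
# different targetGroupARN (one per NLB stack).
kubectl -n cluster-ingress get targetgroupbindings \
  -o custom-columns=NAME:.metadata.name,SERVICE:.spec.serviceRef.name,TG:.spec.targetGroupARN
```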
Actual Behavior
The controller fails to reconcile the NLB service with the aforementioned error.
Regression
Was the functionality working correctly in a previous version? No; this functionality is new in 2.17.0.
Current Workarounds
None that I have been able to identify.
Environment
- AWS Load Balancer controller version: 2.17.0
- Kubernetes version: 1.33
- Using EKS (yes/no), if so version?: yes, 1.33
- Using Service or Ingress: service
- AWS region: eu-west-2
- How was the aws-load-balancer-controller installed:
- If helm was used then please show output of
helm ls -A | grep -i aws-load-balancer-controller
aws-load-balancer-controller aws-load-balancer-controller 28 2025-12-30 15:19:25.732666574 +0000 UTC deployed aws-load-balancer-controller-1.17.0 v2.17.0
- If helm was used then please show output of
helm -n <controllernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
clusterName: eks-dda01-ew2-d-digital
enableBackendSecurityGroup: false
image:
  repository: 999999999999.dkr.ecr.eu-west-2.amazonaws.com/amazon/aws-load-balancer-controller
keepTLSSecret: true
nodeSelector:
  WorkerType: private
podMutatorWebhookConfig:
  failurePolicy: Fail
region: eu-west-2
replicaCount: 1
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::999999999999:role/iamr-dda01-svc-d-aws-lb-controller
serviceMutatorWebhookConfig:
  failurePolicy: Fail
- If helm was not used, then copy/paste the exact command used to install the controller, including flags and options.
- Current state of the Controller configuration:
kubectl -n <controllernamespace> describe deployment aws-load-balancer-controller
Name: aws-load-balancer-controller
Namespace: aws-load-balancer-controller
CreationTimestamp: Thu, 14 Jan 2021 16:47:02 +0000
Labels: app.kubernetes.io/instance=aws-load-balancer-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=aws-load-balancer-controller
app.kubernetes.io/version=v2.17.0
helm.sh/chart=aws-load-balancer-controller-1.17.0
Annotations: chc.co.uk/initial-replicas-count: 1
deployment.kubernetes.io/revision: 15
meta.helm.sh/release-name: aws-load-balancer-controller
meta.helm.sh/release-namespace: aws-load-balancer-controller
Selector: app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/instance=aws-load-balancer-controller
app.kubernetes.io/name=aws-load-balancer-controller
Annotations: kubectl.kubernetes.io/restartedAt: 2024-10-16T14:18:11+01:00
prometheus.io/port: 8080
prometheus.io/scrape: true
Service Account: aws-load-balancer-controller
Containers:
aws-load-balancer-controller:
Image: 999999999999.dkr.ecr.eu-west-2.amazonaws.com/amazon/aws-load-balancer-controller:v2.17.0
Ports: 9443/TCP, 8080/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--cluster-name=XXXXXXXXXXXX
--ingress-class=alb
--aws-region=eu-west-2
--enable-backend-security-group=false
Liveness: http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
Readiness: http-get http://:61779/readyz delay=10s timeout=10s period=10s #success=1 #failure=2
Environment: <none>
Mounts:
/tmp/k8s-webhook-server/serving-certs from cert (ro)
Volumes:
cert:
Type: Secret (a volume populated by a Secret)
SecretName: aws-load-balancer-tls
Optional: false
Priority Class Name: system-cluster-critical
Node-Selectors: WorkerType=private
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: aws-load-balancer-controller-574965c99d (0/0 replicas created), aws-load-balancer-controller-c896b5465 (0/0 replicas created), aws-load-balancer-controller-74865768fd (0/0 replicas created), aws-load-balancer-controller-659f674d9d (0/0 replicas created), aws-load-balancer-controller-7f67cdb79b (0/0 replicas created), aws-load-balancer-controller-86dc9d494b (0/0 replicas created), aws-load-balancer-controller-74765d54bf (0/0 replicas created), aws-load-balancer-controller-c9f7f4759 (0/0 replicas created)
NewReplicaSet: aws-load-balancer-controller-8cbf8cc89 (1/1 replicas created)
Events: <none>
- Current state of the Ingress/Service configuration:
kubectl describe ingressclasses
Name: alb
Labels: app.kubernetes.io/instance=aws-load-balancer-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=aws-load-balancer-controller
app.kubernetes.io/version=v2.17.0
helm.sh/chart=aws-load-balancer-controller-1.17.0
Annotations: meta.helm.sh/release-name: aws-load-balancer-controller
meta.helm.sh/release-namespace: aws-load-balancer-controller
Controller: ingress.k8s.aws/alb
Events: <none>
Name: nginx
Labels: app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress
app.kubernetes.io/version=5.3.1
helm.sh/chart=nginx-ingress-2.4.1
Annotations: meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: cluster-ingress
Controller: nginx.org/ingress-controller
Events: <none>
Name: nginx-alb
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx-alb
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.14.0
helm.sh/chart=ingress-nginx-4.14.0
Annotations: meta.helm.sh/release-name: ingress-nginx-alb
meta.helm.sh/release-namespace: cluster-ingress
Controller: k8s.io/ingress-nginx
Events: <none>
kubectl -n <appnamespace> describe ingress <ingressname>
N/A
kubectl -n <appnamespace> describe svc <servicename>
Name: aws-load-balancers-nlb-external
Namespace: cluster-ingress
Labels: app.kubernetes.io/instance=aws-load-balancers
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=aws-load-balancers
helm.sh/chart=aws-load-balancers-1.0.0-beta
Annotations: meta.helm.sh/release-name: aws-load-balancers
meta.helm.sh/release-namespace: cluster-ingress
service.beta.kubernetes.io/actions.TCP-443:
{
  "type": "forward",
  "forwardConfig": {
    "baseServiceWeight": 0,
    "targetGroups": [
      {
        "serviceName": "ingress-nginx-alb-controller",
        "servicePort": 443,
        "weight": 50
      },
      {
        "serviceName": "nginx-ingress-controller",
        "servicePort": 443,
        "weight": 50
      }
    ],
    "targetGroupStickinessConfig": {
      "enabled": true,
      "durationSeconds": 15
    }
  }
}
service.beta.kubernetes.io/actions.TCP-80:
{
  "type": "forward",
  "forwardConfig": {
    "baseServiceWeight": 0,
    "targetGroups": [
      {
        "serviceName": "ingress-nginx-alb-controller",
        "servicePort": 80,
        "weight": 50
      },
      {
        "serviceName": "nginx-ingress-controller",
        "servicePort": 80,
        "weight": 50
      }
    ],
    "targetGroupStickinessConfig": {
      "enabled": true,
      "durationSeconds": 15
    }
  }
}
service.beta.kubernetes.io/aws-load-balancer-attributes:
deletion_protection.enabled=true
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-eip-allocations:
eipalloc-0fc81a9c7e85ef82c,eipalloc-07cdc467bbc931730,eipalloc-06b615cb533c263b8
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-04eeff8594f286e0e,subnet-00d2353c78064c98b,subnet-00ddfc2df2f225907
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
Selector: app=nginx-alb-ingress-controller
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.55.60
IPs: 172.20.55.60
LoadBalancer Ingress: k8s-clusteri-awsloadb-61dcacee84-319350a7c50b9ef0.elb.eu-west-2.amazonaws.com
Port: tcp-80 80/TCP
TargetPort: 80/TCP
NodePort: tcp-80 31341/TCP
Endpoints: 10.177.113.143:80
Port: tcp-443 443/TCP
TargetPort: 443/TCP
NodePort: tcp-443 32330/TCP
Endpoints: 10.177.113.143:443
Session Affinity: None
External Traffic Policy: Cluster
Internal Traffic Policy: Cluster
LoadBalancer Source Ranges: <redacted>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 17s (x19 over 21d) service-controller Ensuring load balancer
Warning FailedDeployModel 15s service Failed deploy model due to operation error Elastic Load Balancing v2: ModifyListener, https response error StatusCode: 400, RequestID: ac1e39dd-3e7e-4901-b06e-e7b423eeed0b, TargetGroupAssociationLimit: The following target groups cannot be associated with more than one load balancer: arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-nginxing-31876d1c06/0deaecbc2716963e, arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-ingressn-d0c0eab272/b4aaf8efb4a6aa29
...
Warning FailedDeployModel 1s service Failed deploy model due to operation error Elastic Load Balancing v2: ModifyListener, https response error StatusCode: 400, RequestID: 9b155ddc-e885-4362-b912-3791335abc12, TargetGroupAssociationLimit: The following target groups cannot be associated with more than one load balancer: arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-nginxing-31876d1c06/0deaecbc2716963e, arn:aws:elasticloadbalancing:eu-west-2:999999999999:targetgroup/k8s-clusteri-ingressn-d0c0eab272/b4aaf8efb4a6aa29
Name: aws-load-balancers-nlb-internal
Namespace: cluster-ingress
Labels: app.kubernetes.io/instance=aws-load-balancers
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=aws-load-balancers
helm.sh/chart=aws-load-balancers-1.0.0-beta
Annotations: meta.helm.sh/release-name: aws-load-balancers
meta.helm.sh/release-namespace: cluster-ingress
service.beta.kubernetes.io/actions.TCP-443:
{
  "type": "forward",
  "forwardConfig": {
    "baseServiceWeight": 0,
    "targetGroups": [
      {
        "serviceName": "ingress-nginx-alb-controller",
        "servicePort": 443,
        "weight": 50
      },
      {
        "serviceName": "nginx-ingress-controller",
        "servicePort": 443,
        "weight": 50
      }
    ],
    "targetGroupStickinessConfig": {
      "enabled": true,
      "durationSeconds": 15
    }
  }
}
service.beta.kubernetes.io/actions.TCP-80:
{
  "type": "forward",
  "forwardConfig": {
    "baseServiceWeight": 0,
    "targetGroups": [
      {
        "serviceName": "ingress-nginx-alb-controller",
        "servicePort": 80,
        "weight": 50
      },
      {
        "serviceName": "nginx-ingress-controller",
        "servicePort": 80,
        "weight": 50
      }
    ],
    "targetGroupStickinessConfig": {
      "enabled": true,
      "durationSeconds": 15
    }
  }
}
service.beta.kubernetes.io/aws-load-balancer-attributes:
deletion_protection.enabled=true
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-scheme: internal
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
Selector: app=nginx-alb-ingress-controller
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.51.163
IPs: 172.20.51.163
LoadBalancer Ingress: <redacted>
Port: tcp-80 80/TCP
TargetPort: 80/TCP
NodePort: tcp-80 30169/TCP
Endpoints: 10.177.113.143:80
Port: tcp-443 443/TCP
TargetPort: 443/TCP
NodePort: tcp-443 31436/TCP
Endpoints: 10.177.113.143:443
Session Affinity: None
External Traffic Policy: Cluster
Internal Traffic Policy: Cluster
LoadBalancer Source Ranges: <redacted>
Events: <none>
Possible Solution (Optional)
Contribution Intention (Optional)
- No, I cannot work on a PR at this time
Additional Context