kops: is not authorized to perform: elasticloadbalancing:ModifyListener
/kind bug
**1. What kops version are you running?** 1.22.2
**2. What Kubernetes version are you running?**

```
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:47:10Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
```
**3. What cloud provider are you using?** AWS
**4. What commands did you run? What is the simplest way to reproduce this issue?**
- Upgraded my cluster from 1.19 to 1.20
- Fixed everything related to ingress-nginx
**5. What happened after the commands executed?**
- Got an error on the ingress service due to the IAM policy:

```
Error updating load balancer with new hosts map[ip-xxx.xxx.xxx.xxx.xxx.compute.internal:{}]: error updating load balancer listener: "AccessDenied: User: arn:aws:sts::xxxxxxxxx:assumed-role/masters.rancher-mgmt.aquassay.net/i-xxxxxxxxx is not authorized to perform: elasticloadbalancing:ModifyListener on resource: arn:aws:elasticloadbalancing:eu-west-1:xxxxxxxxx:listener/net/xxxxxxxxx/xxxxxxxxx/xxxxxxxxx because no identity-based policy allows the elasticloadbalancing:ModifyListener action\n\tstatus code: 403, request id: xxxx-xxxx-xxxx-xxxx"
```
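One way to confirm whether any attached policy covers the call is to simulate it against the masters role. A sketch, assuming the AWS CLI is configured; `<account-id>` and the listener ARN suffix are placeholders for the values redacted above:

```sh
# Simulate the denied call against the masters instance role.
aws iam simulate-principal-policy \
  --policy-source-arn "arn:aws:iam::<account-id>:role/masters.rancher-mgmt.aquassay.net" \
  --action-names elasticloadbalancing:ModifyListener \
  --resource-arns "arn:aws:elasticloadbalancing:eu-west-1:<account-id>:listener/net/<redacted>"
```

If no statement matches, the evaluation decision comes back as `implicitDeny`.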
**6. What did you expect to happen?**
For all of this to keep working after the upgrade.
**7. Please provide your cluster manifest.** Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-05-29T15:16:16Z"
  generation: 7
  name: my-cluster.local
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://my-bucket/my-cluster.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.20.14
  masterInternalName: api.internal.my-cluster.local
  masterPublicName: api.my-cluster.local
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: eu-west-1a
    type: Public
    zone: eu-west-1a
  - cidr: 172.20.64.0/19
    name: eu-west-1b
    type: Public
    zone: eu-west-1b
  - cidr: 172.20.96.0/19
    name: eu-west-1c
    type: Public
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-29T15:16:17Z"
  labels:
    kops.k8s.io/cluster: my-cluster.local
  name: master-eu-west-1a
spec:
  image: ami-06ce3edf0cff21f07
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-05-29T15:16:17Z"
  labels:
    kops.k8s.io/cluster: my-cluster.local
  name: nodes
spec:
  image: ami-06ce3edf0cff21f07
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - eu-west-1a
  - eu-west-1b
  - eu-west-1c
```
**8. Please run the commands with most verbose logging by adding the `-v 10` flag.** Paste the logs into this report, or in a gist and provide the gist link here.
**9. Anything else we need to know?**
I don't know whether the problem comes from ingress-nginx or from kops, but I believe it lies in the master IAM role. The master role does have the required permissions; what seems to cause the error, I think, are the conditions attached to them:
```json
{
"Statement": [
{
"Action": "ec2:AttachVolume",
"Condition": {
"StringEquals": {
"aws:ResourceTag/KubernetesCluster": "my-cluster.local",
"aws:ResourceTag/k8s.io/role/master": "1"
}
},
"Effect": "Allow",
"Resource": [
"*"
]
},
{
"Action": [
"s3:Get*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-bucket/my-cluster.local/*"
},
{
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-bucket/my-cluster.local/backups/etcd/main/*"
},
{
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-bucket/my-cluster.local/backups/etcd/events/*"
},
{
"Action": [
"s3:GetBucketLocation",
"s3:GetEncryptionConfiguration",
"s3:ListBucket",
"s3:ListBucketVersions"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket"
]
},
{
"Action": [
"route53:ChangeResourceRecordSets",
"route53:ListResourceRecordSets",
"route53:GetHostedZone"
],
"Effect": "Allow",
"Resource": [
"arn:aws:route53:::hostedzone/XXXXXXXX"
]
},
{
"Action": [
"route53:GetChange"
],
"Effect": "Allow",
"Resource": [
"arn:aws:route53:::change/*"
]
},
{
"Action": [
"route53:ListHostedZones",
"route53:ListTagsForResource"
],
"Effect": "Allow",
"Resource": [
"*"
]
},
{
"Action": "ec2:CreateTags",
"Condition": {
"StringEquals": {
"ec2:CreateAction": [
"CreateVolume",
"CreateSnapshot"
]
}
},
"Effect": "Allow",
"Resource": [
"arn:aws:ec2:*:*:volume/*",
"arn:aws:ec2:*:*:snapshot/*"
]
},
{
"Action": "ec2:DeleteTags",
"Condition": {
"StringEquals": {
"aws:ResourceTag/KubernetesCluster": "my-cluster.local"
}
},
"Effect": "Allow",
"Resource": [
"arn:aws:ec2:*:*:volume/*",
"arn:aws:ec2:*:*:snapshot/*"
]
},
{
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:DescribeAccountAttributes",
"ec2:DescribeInstanceTypes",
"ec2:DescribeInstances",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DescribeRegions",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DescribeVolumesModifications",
"ec2:DescribeVpcs",
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:DescribeRepositories",
"ecr:GetAuthorizationToken",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:RegisterTargets",
"iam:GetServerCertificate",
"iam:ListServerCertificates",
"kms:DescribeKey",
"kms:GenerateRandom"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:DeleteSecurityGroup",
"ec2:DeleteVolume",
"ec2:DetachVolume",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyVolume",
"ec2:RevokeSecurityGroupIngress",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:CreateLoadBalancerListeners",
"elasticloadbalancing:CreateLoadBalancerPolicy",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:DetachLoadBalancerFromSubnets",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
"elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
],
"Condition": {
"StringEquals": {
"aws:ResourceTag/KubernetesCluster": "my-cluster.local"
}
},
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"ec2:CreateSecurityGroup",
"ec2:CreateVolume",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateTargetGroup"
],
"Condition": {
"StringEquals": {
"aws:RequestTag/KubernetesCluster": "my-cluster.local"
}
},
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "2012-10-17"
}
```
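If the conditions are indeed the cause, one quick thing to check is whether the listener resource actually carries the `KubernetesCluster` tag that the `aws:ResourceTag` condition matches on; if the tag is absent on the listener, the conditional `ModifyListener` statement above can never apply. A sketch, with the ARN again a placeholder for the redacted value:

```sh
# List the tags attached to the NLB listener that ModifyListener failed on.
aws elbv2 describe-tags \
  --resource-arns "arn:aws:elasticloadbalancing:eu-west-1:<account-id>:listener/net/<redacted>"
```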
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 30 (12 by maintainers)
@olemarkus can you please reference the commit that fixed this problem?
We had the same problem that @grem11n described: the TLS listeners on NLBs stopped registering new instances.
PS: The problem occurs with the aws-cloud-controller-manager, using kops 1.25.

This may be an issue with the legacy cloud controller manager. You may want to try the out-of-tree one. It is enabled by default when you use K8s 1.24, or you can enable it manually by setting `spec.cloudControllerManager: {}`.
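For clarity, a minimal sketch of that cluster-spec change, using the field name exactly as quoted above (not verified against a specific kops version):

```yaml
# Excerpt of the Cluster manifest; enables the out-of-tree
# cloud controller manager (field name as quoted in the comment).
spec:
  cloudControllerManager: {}
```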
Just had a nasty surprise due to this. @spohner thanks for the lifesaving snippet!
We encountered the same issue. Ended up adding to the additionalPolicies/master spec for now:
```yaml
additionalPolicies:
  master: |
    [
      {
        "Action": "elasticloadbalancing:ModifyListener",
        "Resource": ["*"],
        "Effect": "Allow"
      }
    ]
```

We were afraid that removing the condition manually would get reverted by a kops update, and it would also be less tight in terms of IAM rights.
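For anyone applying the same workaround, the change goes through the usual kops flow; a sketch, using the placeholder cluster name from the manifest above:

```sh
# Add the additionalPolicies block to the cluster spec, then apply it.
kops edit cluster my-cluster.local
kops update cluster my-cluster.local --yes
```

Since this only changes the IAM role policy, a rolling update of the nodes should not be required.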