kops: AWS LBC 2.2 -> 2.4 permission problems
/kind bug
1. What kops version are you running? The command kops version will display this information.
1.21.4 and 1.22.4
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
1.21.11
3. What cloud provider are you using? AWS
4. What commands did you run? What is the simplest way to reproduce this issue? Created a new cluster with kOps 1.21.4, with the AWS LBC addon using IRSA, then updated the cluster with kOps 1.22.4 while staying on Kubernetes 1.21.11.
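The upgrade flow, roughly (a sketch; the exact flags are not in the report, and the cluster name is taken from the manifest below):

# Cluster originally created with the kops 1.21.4 binary; then, with 1.22.4:
kops update cluster --name k8s.xxx --yes
kops rolling-update cluster --name k8s.xxx --yes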
5. What happened after the commands executed? The AWS LBC fails to update TargetGroupBindings and SGs, logging the following errors:
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"info","ts":1648535569.9157078,"msg":"registering targets","arn":"arn:aws:elasticloadbalancing:eu-north-1:72490225xxxx:targetgroup/k8s-default-echoserv-4da402dbf8/e653ea7e5ae5bc27","targets":[{"AvailabilityZone":null,"Id":"i-07cf74b2abdbc5761","Port":31814},{"AvailabilityZone":null,"Id":"i-08bdd4e1645d184b8","Port":31814},{"AvailabilityZone":null,"Id":"i-09f640f725755e459","Port":31814}]}
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"error","ts":1648535569.9278822,"logger":"controller-runtime.manager.controller.targetGroupBinding","msg":"Reconciler error","reconciler group":"elbv2.k8s.aws","reconciler kind":"TargetGroupBinding","name":"k8s-default-echoserv-4da402dbf8","namespace":"default","error":"AccessDenied: User: arn:aws:sts::72490225xxxx:assumed-role/aws-load-balancer-controller.kube-system.sa.k8s.xxx/1648535569479694102 is not authorized to perform: elasticloadbalancing:RegisterTargets on resource: arn:aws:elasticloadbalancing:eu-north-1:72490225xxxx:targetgroup/k8s-default-echoserv-4da402dbf8/e653ea7e5ae5bc27 because no identity-based policy allows the elasticloadbalancing:RegisterTargets action\n\tstatus code: 403, request id: 50b0ea7e-ffce-4c40-b5b2-9eb7f509fe15"}
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"info","ts":1648535570.1269672,"logger":"backend-sg-provider","msg":"creating securityGroup","name":"k8s-traffic-k8sxxx-f7e7bf6b76"}
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"info","ts":1648535570.1364439,"msg":"registering targets","arn":"arn:aws:elasticloadbalancing:eu-north-1:72490225xxxx:targetgroup/k8s-default-echoserv-4da402dbf8/e653ea7e5ae5bc27","targets":[{"AvailabilityZone":null,"Id":"i-07cf74b2abdbc5761","Port":31814},{"AvailabilityZone":null,"Id":"i-08bdd4e1645d184b8","Port":31814},{"AvailabilityZone":null,"Id":"i-09f640f725755e459","Port":31814}]}
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"error","ts":1648535570.274041,"logger":"backend-sg-provider","msg":"Failed to auto-create backend SG","error":"UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: qMU-16sa2Wi3Uvct_pPuKL3oebQiMy3Q7hdBWu335hD6jGeYT1w83Dxsl3cpN5Em3H0cdL6Eb98sxGpskZd4w9F94mKYeAFoGQMFdLVL2lz2_KHFg2YCq7MdGKwzUB16CaEem_r0Y5HwxJeq0JDTGPXFoDXlpTuwhSssGlDwXBmd8CCif2OjxsdEQIV6uVDuIqEJZzxTC6RLB7Vm6yZCDpnyyLPQN1naHJQ5G1uW2OcyZO89XFYnGHogqs6TLGX8d_to_Lbkvd00xF6ncQSEvqi5l47jBsdQSsNjSIx0qW3dx30MPqfX1WmXyJqMgeQQqkteX930mag0U9l_o7X3iGlgtgI5EpNGteNqFXNtlB1VRQHdhf7ypVrMkZfLXLKIYSgi6bVpMWTQ2MkMG0C9l4r48LpT8WoHAnXEs73DdRgvXZLGuauE92tWoLIWN75vMjep0AAJUOBzSTQu5CQVr1Bl74-DcwfhkqS4S5A3VrjNWI1NBsEZnB6IcsRq0ZeLr6b43WTyIewZYmBT_A3xAGK-5L1LzAchFqx_MzfvlvilkhAR2mM5Cy5phEU4VhJtVNMURVlY1N75BZIIgG5EMM6b9kiBnA9zWvwPbVz9vWvjWNB-pzldT54YAeiDsopI1bNFqp9QMytUVvdos_RJDQVw1ek\n\tstatus code: 403, request id: a4b7b722-84a9-4d2e-b6b6-bf5d75b3a748"}
aws-load-balancer-controller-7bf57d968b-5jbj9 controller {"level":"error","ts":1648535570.2744029,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"unbound","namespace":"","error":"UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: qMU-16sa2Wi3Uvct_pPuKL3oebQiMy3Q7hdBWu335hD6jGeYT1w83Dxsl3cpN5Em3H0cdL6Eb98sxGpskZd4w9F94mKYeAFoGQMFdLVL2lz2_KHFg2YCq7MdGKwzUB16CaEem_r0Y5HwxJeq0JDTGPXFoDXlpTuwhSssGlDwXBmd8CCif2OjxsdEQIV6uVDuIqEJZzxTC6RLB7Vm6yZCDpnyyLPQN1naHJQ5G1uW2OcyZO89XFYnGHogqs6TLGX8d_to_Lbkvd00xF6ncQSEvqi5l47jBsdQSsNjSIx0qW3dx30MPqfX1WmXyJqMgeQQqkteX930mag0U9l_o7X3iGlgtgI5EpNGteNqFXNtlB1VRQHdhf7ypVrMkZfLXLKIYSgi6bVpMWTQ2MkMG0C9l4r48LpT8WoHAnXEs73DdRgvXZLGuauE92tWoLIWN75vMjep0AAJUOBzSTQu5CQVr1Bl74-DcwfhkqS4S5A3VrjNWI1NBsEZnB6IcsRq0ZeLr6b43WTyIewZYmBT_A3xAGK-5L1LzAchFqx_MzfvlvilkhAR2mM5Cy5phEU4VhJtVNMURVlY1N75BZIIgG5EMM6b9kiBnA9zWvwPbVz9vWvjWNB-pzldT54YAeiDsopI1bNFqp9QMytUVvdos_RJDQVw1ek\n\tstatus code: 403, request id: a4b7b722-84a9-4d2e-b6b6-bf5d75b3a748"}
6. What did you expect to happen? The AWS LBC to keep functioning after the kOps upgrade.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 2
  name: k8s.xxx
spec:
  additionalPolicies:
    master: '[{"Action":["acm:ListCertificates","acm:DescribeCertificate"],"Effect":"Allow","Resource":"*"},{"Action":["s3:GetObject"],"Effect":"Allow","Resource":["arn:aws:s3:::72490225xxxx-eu-north-1-kops-storage/k8s.xxx-addons/*"]}]'
  addons:
  - manifest: s3://72490225xxxx-eu-north-1-kops-storage/k8s.xxx-addons/addon.yaml
  api:
    dns: {}
  authentication:
    aws: {}
  authorization:
    rbac: {}
  awsLoadBalancerController:
    enabled: true
  certManager:
    enabled: true
    managed: true
  channel: stable
  cloudProvider: aws
  clusterAutoscaler:
    enabled: true
    skipNodesWithLocalStorage: false
    skipNodesWithSystemPods: false
  configBase: s3://72490225xxxx-eu-north-1-kops-storage/k8s.xxx
  dnsZone: xxx
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-north-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-north-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-north-1c
      name: c
    name: main
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-north-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-north-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-north-1c
      name: c
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
    serviceAccountExternalPermissions:
    - aws:
        inlinePolicy: |2
          [
            {
              "Effect": "Allow",
              "Action": [
                "route53:ChangeResourceRecordSets"
              ],
              "Resource": [
                "arn:aws:route53:::hostedzone/*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ListHostedZones",
                "route53:ListResourceRecordSets"
              ],
              "Resource": [
                "*"
              ]
            }
          ]
      name: external-dns
      namespace: kube-system
    - aws:
        inlinePolicy: |2
          [
            {
              "Effect": "Allow",
              "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds",
                "secretsmanager:ListSecrets"
              ],
              "Resource": "*"
            }
          ]
      name: kubernetes-external-secrets
      namespace: kube-system
    useServiceAccountExternalPermissions: true
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.21.11
  masterPublicName: api.k8s.xxx
  metricsServer:
    enabled: true
    insecure: false
  networkCIDR: 172.20.0.0/16
  networkID: vpc-0b65dcbe6fefa08d8
  networking:
    calico: {}
  nodeTerminationHandler:
    enableRebalanceDraining: true
    enableRebalanceMonitoring: true
    enableSQSTerminationDraining: true
    enableSpotInterruptionDraining: false
    enabled: true
    managedASGTag: aws-node-termination-handler/managed
  nonMasqueradeCIDR: 100.64.0.0/10
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://k8s-xxx-irsa-issuer
    enableAWSOIDCProvider: true
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.0.0/19
    id: subnet-0f1dd4f9aee870b6e
    name: utility-eu-north-1a
    type: Utility
    zone: eu-north-1a
  - cidr: 172.20.32.0/19
    id: subnet-06b1f5443a3b808b4
    name: utility-eu-north-1b
    type: Utility
    zone: eu-north-1b
  - cidr: 172.20.64.0/19
    id: subnet-0b156b5df12cdad7b
    name: utility-eu-north-1c
    type: Utility
    zone: eu-north-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: master-eu-north-1a
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-north-1a
  role: Master
  subnets:
  - utility-eu-north-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: master-eu-north-1b
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-north-1b
  role: Master
  subnets:
  - utility-eu-north-1b
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: master-eu-north-1c
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-north-1c
  role: Master
  subnets:
  - utility-eu-north-1c
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: nodes-a
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "true"
    k8s.io/cluster-autoscaler/k8s.xxx: "true"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-a
  role: Node
  subnets:
  - utility-eu-north-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: nodes-b
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "true"
    k8s.io/cluster-autoscaler/k8s.xxx: "true"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-b
  role: Node
  subnets:
  - utility-eu-north-1b
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-03-29T04:29:45Z"
  generation: 2
  labels:
    kops.k8s.io/cluster: k8s.xxx
  name: nodes-c
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "true"
    k8s.io/cluster-autoscaler/k8s.xxx: "true"
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220308
  machineType: t3.medium
  maxPrice: "0.03"
  maxSize: 2
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-c
  role: Node
  subnets:
  - utility-eu-north-1c
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
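A possible interim workaround (an untested sketch, not from the original report): attach a supplementary inline policy to the IRSA role named in the errors. elasticloadbalancing:RegisterTargets appears verbatim in the first error; the denied EC2 action is assumed to be ec2:CreateSecurityGroup based on the "creating securityGroup" log line. Note that kOps manages this role, so a manually attached policy is a stopgap only:

# Hypothetical hotfix; role name copied from the AccessDenied error above.
cat > lbc-hotfix.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:RegisterTargets"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:CreateSecurityGroup"],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name aws-load-balancer-controller.kube-system.sa.k8s.xxx \
  --policy-name lbc-permissions-hotfix \
  --policy-document file://lbc-hotfix.json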
About this issue: closed, created 2 years ago, 35 comments (19 by maintainers).
This should have been backported. I’ll ensure it is in the next release if not
Aaah, of course, it's only the Kubernetes version that's recommended to be upgraded step-by-step, right? Misremembered. Thanks. Will report back shortly, as soon as my test cluster has finished rolling.