kops: Instance IAM role doesn't have access to the KMS key used to encrypt S3 State Store
1. What kops version are you running? The command `kops version` will display
this information.
Version 1.10.0-beta.1 (git-dc9154528)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:17:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
3. What cloud provider are you using?
aws
4. What commands did you run? What is the simplest way to reproduce this issue?
```bash
aws s3api create-bucket \
  --region "us-west-2" \
  --create-bucket-configuration LocationConstraint="us-west-2" \
  --bucket "<REDACTED>" \
  --acl "private"

aws s3api put-bucket-versioning \
  --region "us-west-2" \
  --bucket "<REDACTED>" \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-encryption \
  --region "us-west-2" \
  --bucket "<REDACTED>" \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:<REDACTED>"
      }
    }]
  }'
```
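As a quick sanity check (using the same redacted bucket name), the default-encryption configuration can be read back before creating the cluster:

```bash
# Should return the aws:kms rule with the KMS key ARN configured above.
aws s3api get-bucket-encryption \
  --region "us-west-2" \
  --bucket "<REDACTED>"
```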
5. What happened after the commands executed? kops-configuration.service was unable to access the state files stored in the S3 bucket:
```
systemctl status kops-configuration.service
● kops-configuration.service - Run kops bootstrap (nodeup)
   Loaded: loaded (/etc/systemd/system/kops-configuration.service; disabled; vendor preset: disabled)
   Active: activating (start) since Fri 2018-07-27 00:07:33 UTC; 3min 49s ago
     Docs: https://github.com/kubernetes/kops
 Main PID: 881 (nodeup)
    Tasks: 6 (limit: 32767)
   Memory: 274.8M
   CGroup: /system.slice/kops-configuration.service
           └─881 /var/cache/kubernetes-install/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8

Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.540408 881 assetstore.go:313] added asset "ptp" for &{"/var/cache/nodeup/extracted/sha1:REDACTED_htt>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.540429 881 assetstore.go:313] added asset "sample" for &{"/var/cache/nodeup/extracted/sha1:REDACTED_>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.540448 881 assetstore.go:313] added asset "tuning" for &{"/var/cache/nodeup/extracted/sha1:REDACTED_>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.540468 881 assetstore.go:313] added asset "vlan" for &{"/var/cache/nodeup/extracted/sha1:REDACTED_ht>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.541551 881 files.go:100] Hash matched for "/var/cache/nodeup/sha1:REDACTED_https___kubeupv2_s3_amazo>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.541573 881 assetstore.go:203] added asset "utils.tar.gz" for &{"/var/cache/nodeup/sha1:REDACTED>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.541663 881 assetstore.go:313] added asset "socat" for &{"/var/cache/nodeup/extracted/sha1:REDACTED>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: I0727 00:11:12.541694 881 s3fs.go:216] Reading file "s3://<REDACTED>/cluster.spec"
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: W0727 00:11:12.961693 881 main.go:142] got error running nodeup (will retry in 30s): error loading Cluster "<REDACTED>
Jul 27 00:11:12 ip-10-65-129-161.ec2.internal nodeup[881]: status code: 403, request id:
```
Manually granting the IAM roles created by kops access to the KMS key used to encrypt the S3 bucket allows kops-configuration.service to start and the cluster to boot.
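For reference, a minimal sketch of that manual workaround, assuming kops's default instance-role naming (`masters.<cluster-name>` and `nodes.<cluster-name>`) and a placeholder key ARN; the exact set of KMS actions could likely be narrowed:

```bash
# Attach an inline policy to both kops-managed instance roles granting
# use of the KMS key that encrypts the state-store bucket.
# Role names, policy name, and key ARN below are placeholders.
cat > kms-state-store.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:Encrypt",
        "kms:GenerateDataKey*"
      ],
      "Resource": "arn:aws:kms:us-west-2:<ACCOUNT_ID>:key/<KEY_ID>"
    }
  ]
}
EOF

for role in "masters.<CLUSTER_NAME>" "nodes.<CLUSTER_NAME>"; do
  aws iam put-role-policy \
    --role-name "$role" \
    --policy-name "kms-state-store-access" \
    --policy-document file://kms-state-store.json
done
```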
6. What did you expect to happen? It seems that when encryption is enabled on the S3 bucket used for KOPS_STATE_STORE, the nodes are not given access to the encryption key used by the bucket. I didn't encounter this problem with kops 1.9.1. The cluster was created using `--target=terraform`.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
About this issue
- State: closed
- Created 6 years ago
- Reactions: 7
- Comments: 17 (1 by maintainers)
Hit by this as well
For those looking for a quick fix to this issue, using @lukyanetsv's policy in your cluster configuration as follows will work. Ensure that you update the ARN for your KMS key:
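A sketch of the shape this takes in the cluster spec, using kops's `additionalPolicies` field (the region, account ID, and key ID below are placeholders to replace with your own):

```yaml
# Grants both masters and nodes use of the state-store KMS key;
# run `kops update cluster` after editing the spec.
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:Encrypt", "kms:GenerateDataKey*"],
          "Resource": ["arn:aws:kms:<REGION>:<ACCOUNT_ID>:key/<KEY_ID>"]
        }
      ]
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:Encrypt", "kms:GenerateDataKey*"],
          "Resource": ["arn:aws:kms:<REGION>:<ACCOUNT_ID>:key/<KEY_ID>"]
        }
      ]
```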
I just hit this issue. It doesn't seem like this should be closed (even though there is a workaround). It would be great if there were a `--kms-key-arn` or similar flag that would create the above workaround in the cluster spec for the user.

/remove-lifecycle stale
This issue is still occurring.