external-secrets: failed calling webhook "validate.externalsecret.external-secrets.io": Post "https://external-secrets-webhook.default.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s"
Hi,
I'm running the external-secrets Helm chart 0.5.0 on an EKS 1.22 cluster, and I'm encountering this error when I create an ExternalSecret manifest:
failed calling webhook "validate.externalsecret.external-secrets.io": Post "https://external-secrets-webhook.default.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s"
This is my ExternalSecret:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-secret
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: secrets-manager-store
    kind: ClusterSecretStore
  target:
    name: my-secret
    creationPolicy: Owner
  data:
    - secretKey: key
      remoteRef:
        key: my-secret-provider-key
And this is my ClusterSecretStore:
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: secrets-manager-store
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-central-1
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: my-aws-creds
            key: id
            namespace: default
          secretAccessKeySecretRef:
            name: my-aws-creds
            key: key
            namespace: default
Can anyone help? Best.
Yep, forgot about the private GKE firewall. Here is an example Terraform setup for it, if somebody needs that:
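(Sketch only: the resource name, network reference, node tag, and control plane CIDR below are placeholders to adapt to your cluster. The point is to allow TCP from the master CIDR to the nodes on the port the webhook listens on.)
resource "google_compute_firewall" "gke_master_to_webhook" {
  name    = "gke-master-to-webhook"
  network = google_compute_network.vpc.name

  allow {
    protocol = "tcp"
    # Port the external-secrets webhook pod listens on; match your chart's webhook port.
    ports = ["10250"]
  }

  # master_ipv4_cidr_block of the private GKE cluster (control plane CIDR).
  source_ranges = [var.master_ipv4_cidr_block]

  # Network tag carried by the GKE nodes.
  target_tags = ["gke-node"]
}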
That solved the issue.
@ZeroDeth @Ravelin
An EKS cluster created with terraform-aws-modules v18+ comes with some hidden security group rules between the control plane and the node groups. To override these rules you need to specify a rule (or rules) like this (it's quite loose):
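A sketch of what such a rule can look like with the v18 module (everything outside node_security_group_additional_rules is a placeholder; fill in your own cluster settings):
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... cluster_name, cluster_version, vpc_id, subnet_ids, node groups, etc.

  node_security_group_additional_rules = {
    ingress_cluster_all = {
      description                   = "Allow all traffic from the control plane to the nodes"
      protocol                      = "-1"
      from_port                     = 0
      to_port                       = 0
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }
}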
The source_cluster_security_group attribute indicates that the source of the communication is the control plane security group. You can find a similar issue here: https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1748
Yeah, I confirm that.
Adding an AWS security group rule (allowing all ingress traffic from the control plane to the node groups) solves the issue.
This is not a bug.
Ok, I found the values.yaml file in this repo: https://github.com/external-secrets/external-secrets/blob/b3f7b7ac947ab171852202fe684e4374cd2c81c6/deploy/charts/external-secrets/values.yaml (I suggest linking this file, or listing the available attributes in the docs; it's always a pain to find the Helm customization options 😄)
Then, I added a few lines to my values.yaml, and now it works smoothly.
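The exact lines aren't quoted here, but as an illustration only, assuming your chart version exposes webhook.port (check the linked values.yaml for your version), running the webhook on the kubelet port is one common way to get past private-cluster firewalls and security groups:
webhook:
  # Port the webhook pod listens on. Private GKE/EKS control planes can usually
  # already reach the nodes on 10250 (the kubelet port), so reusing it can avoid
  # extra firewall or security group rules. Verify this key in your chart version.
  port: 10250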
Glad to hear that! I think we should add a troubleshooting guide / FAQ to the docs. The information in this issue would be a good candidate to go there 😃
@Skullflow
Thanks, I worked it out about 20 minutes after I posted, but now my secrets aren't appearing in my namespace, so I think I need to adjust something. I approached it slightly differently and attached the rules to my EKS cluster using vpc_security_group_ids = [aws_security_group.additional.id], since I had some other rules set up there as well. Thank you for the help and for the other issue; that gives me some reading to do to work out which approach is better.
Hi, this is not a bug. I had the same problem the other day.
The problem happens with private GKE clusters on GCP. When you create a private cluster, the control plane is located in a different subnet than the nodes running the kubelet. When you try to create a secret, a webhook request is sent, but because traffic from the control plane to the webhook on the nodes is blocked by default, you get this error.
SOLUTION
Add a firewall rule that allows traffic between the control plane CIDR block and the subnet where your resources are located.
reference: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules