kops: x509: certificate signed by unknown authority when installing cluster with kops

Hi,

Getting this error when executing any kubectl command: Unable to connect to the server: x509: certificate signed by unknown authority

Did some digging around and found that it is caused by self-signed certificates. This can be worked around by adding --insecure-skip-tls-verify=true to every kubectl command, or (the preferred way) by adding:

--kubelet-certificate-authority=/srv/kubernetes/ca.crt \
--kubelet-client-certificate=/var/run/kubernetes/kubelet.crt \
--kubelet-client-key=/var/run/kubernetes/kubelet.key 

to the kube-apiserver startup shell script.

My question: how can I get these configuration options added automatically to the kube-apiserver startup script when I install the cluster with kops?

(Or is there another way of dealing with these certificates?)
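One possible answer to the question above: kops lets you set kube-apiserver flags declaratively through the cluster spec instead of editing the startup script by hand. A hedged sketch, not a definitive recipe — verify the field names against your kops version, and note that the cluster name and state store below are placeholders in the style of this thread:

```shell
# Open the cluster spec for editing (assumes your state store is configured):
kops edit cluster example.mycompany.com --state=s3://myproj-kubestate

# In the editor, add a kubeAPIServer section under spec:, for example:
#
#   spec:
#     kubeAPIServer:
#       kubeletCertificateAuthority: /srv/kubernetes/ca.crt
#
# Then apply the change and roll the control plane so the flag takes effect:
kops update cluster example.mycompany.com --state=s3://myproj-kubestate --yes
kops rolling-update cluster example.mycompany.com --state=s3://myproj-kubestate --yes
```

This keeps the flags in the kops-managed configuration, so they survive cluster re-creation rather than living in a hand-edited startup script.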

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 18
  • Comments: 22 (2 by maintainers)

Most upvoted comments

Update: Removing the embedded root certificate from ~/.kube/config and running this config command:

kubectl config set-cluster ${KUBE_CONTEXT} --insecure-skip-tls-verify=true \
  --server=${KUBE_SERVER}

(where ${KUBE_SERVER} is the cluster's API server URL — --server expects a URL, not a context name)

is the equivalent of adding --insecure-skip-tls-verify=true to every kubectl command.

This will happen if you recreate a cluster and you do not copy the new configuration to the regular user.

When you create a new cluster, it prints the following:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Be sure to execute the line sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config every time you recreate your cluster.

You really shouldn’t have to do this. The kubecfg configuration includes the (self-signed) CA certificate and this ensures that you aren’t being MITM-ed.

This sounds more like an installation problem when running kops. Were you doing anything unusual?

Finally found the source of this error: my .kube/config was getting lost, which caused the certificate failure. It was a bug in my scripts, outside of kops. Now that I have fixed it, I do not expect to see this error again.

If you are facing this error, try:

  kops export kubecfg --name $CLUSTER_NAME

Hopefully that should fix it.

Run:

  gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400

Here devops1-218400 is my project ID; replace it with your own.

This will happen if you are intentionally MITM-ing, e.g. if you are putting your cluster behind an external system that does SSL termination with a different CA than the cluster uses.

For example: the cluster is created with the kube-generated CA, but the UI and API need to sit behind an IT-issued CA. Not sure how to deal with this yet.

I have been able to reproduce this many times. I think it happens when I destroy a cluster and, within a few minutes, re-create the same cluster again.

If you’re using a self-signed certificate and --insecure-skip-tls-verify=true doesn’t work, there is a chance that your network doesn’t allow insecure self-signed certificates. Try doing it over a VPN.

/open

appending --insecure-skip-tls-verify=true to the end of kubectl get all did the trick…

thanks @mayank-dixit

I got this now on EKS

I noticed my problem was resolved by regenerating the certificate, which had expired/changed. It’s just a matter of regenerating the kube config file. I was using an AKS (Azure Kubernetes Service) cluster, so the below command regenerated the config file.

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

In my case, I got this error with “kubectl version”. I had installed minikube in my linux machine, and kubectl was configured to use the minikube. It got resolved when I added the minikube server (192.168.99.101 in the kube config file below) to the NO_PROXY env variable:

  cat ~/.kube/config
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority: /home/ssriram/.minikube/ca.crt
      server: https://192.168.99.101:8443
  …
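The NO_PROXY fix above can be sketched as follows. A minimal sketch: the address 192.168.99.101 comes from the kube config shown above, and both the upper- and lower-case proxy variable spellings are set because different tools honor different ones:

```shell
# Exclude the minikube API server from HTTP(S) proxying so kubectl talks
# to it directly, instead of through a proxy whose certificate would not
# match the cluster CA. The ${VAR:+...} form only prepends a comma when
# the variable already has a value.
export NO_PROXY="${NO_PROXY:+${NO_PROXY},}192.168.99.101"
export no_proxy="${no_proxy:+${no_proxy},}192.168.99.101"

# Show the resulting exclusion list; kubectl commands run in this shell
# (e.g. `kubectl version`) will now bypass the proxy for this host.
echo "$NO_PROXY"
```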

I resolved it by performing these steps on the /root/.kube directory:

  1. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  2. sudo chown $(id -u):$(id -g) $HOME/.kube/config

I had run kubeadm init many times but had not updated the /root/.kube directory with the new entries, so I performed these steps when I got the certificate error. Remember that after running the first command you must answer “yes” to the overwrite prompt; otherwise it will not work.

Getting the same issue on a fresh / blank install of kops 1.9.0.

$ kops create cluster --name=example.mycompany.com --state=s3://myproj-kubestate
I0511 12:22:20.648461   38921 create_cluster.go:1318] Using SSH public key: /Users/me/.ssh/id_rsa.pub

error reading cluster configuration "example.mycompany.com": error reading s3://myproj-kubestate/example.mycompany.com/config: error fetching s3://myproj-kubestate/example.mycompany.com/config: RequestError: send request failed
caused by: Get https://myproj-kubestate.s3.amazonaws.com/example.mycompany.com/config: x509: certificate signed by unknown authority

I’m not doing anything unusual as far as I’m aware, just following the tutorial / steps. What other info can I provide?
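When kops itself fails TLS verification against S3, as in the report above, something between the client and AWS may be re-signing the connection (e.g. a corporate proxy or VPN doing TLS interception). A hedged diagnostic sketch, not part of kops — the bucket hostname is the one from the error above — is to inspect the certificate issuer that S3 actually presents:

```shell
# If the issuer printed here is a corporate or VPN CA rather than one of
# Amazon's public CAs, a TLS-intercepting proxy is rewriting the
# connection, which would explain the x509 error from kops.
openssl s_client -connect myproj-kubestate.s3.amazonaws.com:443 \
  -servername myproj-kubestate.s3.amazonaws.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer
```

If interception is confirmed, the usual fix is to add the intercepting CA to the system trust store on the machine running kops.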

This should not happen. Closing pending further details, but if we get them, we should reopen.