argo-cd: "cluster add" on rancher cluster fails with "REST config invalid: the server has asked for the client to provide credentials"
We want to use ArgoCD with two clusters created using Rancher RKE (running on our own hardware).
ArgoCD is running on cluster “int01”. Deployments within the same cluster work fine.
When we try to add a second cluster “rz01” via the CLI, we receive an error:
```
pm$ argocd cluster add rz01 --server localhost:8080 --insecure --loglevel DEBUG
INFO[0000] ServiceAccount "argocd-manager" already exists
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" already exists
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
```
In our Rancher log files we see:
```
"2019-03-07T15:41:57.000Z","fluentd-hkn7l","fluentd-hkn7l Rancher: log:10.130.94.100 - [10.130.94.100] - - [07/Mar/2019:15:41:53 +0000] ""GET /k8s/clusters/c-bsjhn/version?timeout=32s HTTP/2.0"" 401 62 ""-"" ""argocd-server/v0.0.0 (linux/amd64)
```
And the argocd-server has this information regarding the failed request:
```
time="2019-03-07T15:46:51Z" level=info msg="received unary call /cluster.ClusterService/Create" grpc.method=Create grpc.request.claims="{\"iat\":1551972868,\"iss\":\"argocd\",\"nbf\":1551972868,\"sub\":\"admin\"}" grpc.request.content="cluster:<server:\"https://rancher.ariva.k8s/k8s/clusters/c-bsjhn\" name:\"rz01\" config:<username:\"\" password:\"\" bearerToken:\"eyJhbGciOiJSUz......... CUT .......oDOBb1khnGj91E8jWHeHcy2ZTGODIUYJY9H2Z9UBJkxsNGPH3dEiat1wQC6TRFY\\njVfj4bdBBM00gvkqcHSrsY+rL6D9dUahl0eyj2frl8HvdXYKkhG2X6Lnk9Yx2t5J\\n6rKtRG78bD7U0/tnYLtIPrvaaGc/CKcScH6IksT/yAIjWA==\\n-----END CERTIFICATE-----\" > > connectionState:<status:\"\" message:\"\" > > " grpc.service=cluster.ClusterService grpc.start_time="2019-03-07T15:46:51Z" span.kind=server system=grpc

time="2019-03-07T15:46:51Z" level=error msg="finished unary call with code Unknown" error="REST config invalid: the server has asked for the client to provide credentials" grpc.code=Unknown grpc.method=Create grpc.service=cluster.ClusterService grpc.start_time="2019-03-07T15:46:51Z" grpc.time_ms=14.475 span.kind=server system=grpc
```
For anyone still looking for a clean solution that doesn’t involve doing stuff with the local admin account (no pun intended 😄): https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80
To summarize:

- `argocd cluster add` creates a service account (plus token) in the target cluster and uses that as a bearer token credential while constructing the kubeconfig used to interact with the target.
- For `argocd cluster add` to work, the kubeconfig context must specify the downstream API endpoint in the `server` option.
- Otherwise the `argocd` CLI cannot be used, but the user can create the required secret resource directly using `kubectl` (see the sketch below).

Suggestion: It would be great if `argocd cluster add` supported specifying a bearer token that was created out of band, e.g. a service account token or Rancher API token.
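For reference, a minimal sketch of the secret the gist describes, assuming an Argo CD version that picks up declarative cluster secrets via the `argocd.argoproj.io/secret-type: cluster` label; the hostname, secret name, and token are placeholders (the cluster ID `c-bsjhn` is taken from the logs above):

```sh
kubectl apply -n argocd -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: cluster-rz01
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: rz01
  # Rancher's API proxy endpoint for the downstream cluster:
  server: https://rancher.example.com/k8s/clusters/c-bsjhn
  # A Rancher API token created out of band, NOT the argocd-manager SA token.
  # If Rancher's certificate is not publicly trusted, add a base64 "caData"
  # field to tlsClientConfig (or set "insecure": true while testing).
  config: |
    {
      "bearerToken": "<rancher-api-token>",
      "tlsClientConfig": {
        "insecure": false
      }
    }
EOF
```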
I’m interested in a real fix, not this strange alternative that bypasses Rancher to access the cluster directly.
A real fix, to me, is being able to just run `argocd cluster add <context>` and have Argo deal with whatever other complexity is involved in connecting to the cluster. Another fix could be a section in the documentation on how to proceed with Rancher. The authorized cluster endpoint is not enabled by default, which to me is the same as having no out-of-the-box support. Going that route, you need to create another load balancer pointing directly at the Kubernetes nodes with the API server role (the masters), you have a SPOF without a balancer, and it has to use the internal Kubernetes certificate, which is not the same one users use to connect to the cluster via Rancher. You said that is the recommended approach; I think it is a workaround.
Maybe such a documentation section could help Rancher users get going without having to dig around for the solution, but native Rancher support, where a single command adds the cluster, is what I would call a real fix.
Also per https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint “The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters. In other words, it only works in clusters where Rancher used RKE to provision the cluster. It is not available for clusters in a hosted Kubernetes provider, such as Amazon’s EKS.” So that doesn’t work with hosted clusters.
There is a workaround that doesn’t require enabling authorized endpoints. Use this script: https://gist.githubusercontent.com/superseb/f6cd637a7ad556124132ca39961789a4/raw/a833ce5548eded9b110f1b5d4dc1896562338975/get_kubeconfig_custom_cluster_rancher2.sh to get the local admin kubeconfig for the Rancher-managed custom cluster, then use the ArgoCD CLI to add the cluster using that kubeconfig. You will likely need to edit the cluster secret afterwards, though, as the cluster is created with the name “local”. You just need to run `echo -n name_you_want | base64` (`-n` so no trailing newline ends up in the encoded value) and replace the `name` value in the secret.
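Put together, the flow might look like the sketch below; the kubeconfig file name, the “local” context name, and `<cluster-secret-name>` are assumptions to adapt to your setup:

```sh
# Assumption: the gist's script has already produced a local admin kubeconfig
# for the downstream cluster (the file name here is made up).
KUBECONFIG=./kubeconfig_rz01 argocd cluster add local

# Rename the cluster afterwards: find the secret ArgoCD created, then
# overwrite its base64-encoded "name" field (echo -n keeps a trailing
# newline out of the encoded value).
kubectl -n argocd get secrets -o name | grep cluster
kubectl -n argocd patch secret <cluster-secret-name> --type merge \
  -p "{\"data\":{\"name\":\"$(echo -n rz01 | base64)\"}}"
```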