kubernetes: [Federation] kubefed init roles.rbac.authorization error

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version): kubernetes-1.6.0-beta.1

Environment:

  • Cloud provider or hardware configuration: hardware configuration
  • OS (e.g. from /etc/os-release): Ubuntu 14.04.1 LTS
  • Kernel (e.g. uname -a): 4.2.0-27-generic
  • Install tools: kubefed init
  • Others:

What happened:

I ran kubefed init to initialize a federation on kubernetes-1.6.0-beta.1:

kubefed init k8s-federation --host-cluster-context=cluster-cloud92 \
  --api-server-service-type='NodePort' --api-server-advertise-address='172.31.8.29' \
  --dns-provider='coredns' --dns-provider-config='/var/run/kubernetes/federation-codedns-config.conf' \
  --dns-zone-name='cluster-cloud92.com' --etcd-persistent-storage=false \
  --kubeconfig='/var/run/kubernetes/kubeconfig'

It failed with this error:

Error from server (Forbidden): roles.rbac.authorization.k8s.io "federation-system:federation-controller-manager" is forbidden: attempt to grant extra privileges: [{[get] [] [secrets] [] []} {[list] [] [secrets] [] []} {[watch] [] [secrets] [] []}] user=&{admin admin [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi

It created the federation-apiserver successfully, but failed to create the federation-controller-manager.
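
For context on why the grant fails: RBAC does not let a user create a Role containing rules that the user does not itself hold. A quick way to verify whether the credentials kubefed is using already have the secrets access it tries to grant to the controller manager is something like the following (a diagnostic sketch, assuming kubectl 1.6+ where kubectl auth can-i is available):

# RBAC only allows granting rules the caller already holds, so check whether the
# admin token from the kubeconfig can itself get/list/watch secrets in federation-system.
kubectl --kubeconfig=/var/run/kubernetes/kubeconfig auth can-i get secrets --namespace=federation-system
kubectl --kubeconfig=/var/run/kubernetes/kubeconfig auth can-i list secrets --namespace=federation-system
kubectl --kubeconfig=/var/run/kubernetes/kubeconfig auth can-i watch secrets --namespace=federation-system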

kubectl get pod --namespace=federation-system
NAME                                        READY     STATUS    RESTARTS   AGE
k8s-federation-apiserver-2603477136-7srv0   2/2       Running   0          1m

kubectl get secret --namespace=federation-system
NAME                                           TYPE                                  DATA      AGE
default-token-vwmcj                            kubernetes.io/service-account-token   2         2m
federation-controller-manager-token-vgsg8      kubernetes.io/service-account-token   2         2m
k8s-federation-apiserver-credentials           Opaque                                3         2m
k8s-federation-controller-manager-kubeconfig   Opaque                                1         2m

Here is my k8s cluster configuration:

cat /var/run/kubernetes/kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://172.31.8.29:443
  name: cluster-cloud92
contexts:
- context:
    cluster: cluster-cloud92
    user: cluster-cloud92
  name: cluster-cloud92
current-context: cluster-cloud92
users:
- name: cluster-cloud92
  user:
    token: tSjgxDGkhDe8iweWU2fWC7a64clc7MR8

docker@bjkjy-ite-cloud92:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://172.31.8.29:443
  name: cluster-cloud92
contexts:
- context:
    cluster: cluster-cloud92
    user: cluster-cloud92
  name: cluster-cloud92
current-context: cluster-cloud92
kind: Config
preferences: {}
users:
- name: cluster-cloud92
  user:
    token: tSjgxDGkhDe8iweWU2fWC7a64clc7MR8

cat /var/run/kubernetes/federation-codedns-config.conf
[Global]
etcd-endpoints=http://172.31.8.29:2379
zones=cluster-cloud92.com

I have enabled the ServiceAccount admission plugin in kube-apiserver and specified --service_account_private_key_file in kube-controller-manager.

I didn't use a CA, just token authentication for kube-apiserver: --secure-port=443 --token-auth-file=/etc/kubernetes/token_auth_file.
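
If the goal is to keep token authentication without a CA, one way to avoid the RBAC escalation check is to make the token's user a member of a group that the RBAC bootstrap already binds to cluster-admin. The static token file is a CSV of token,user,uid with an optional quoted groups column, and system:masters is bound to cluster-admin by default. A hypothetical line for the token file, reusing the token from the kubeconfig above, could look like this (the apiserver has to be restarted to pick it up):

# /etc/kubernetes/token_auth_file
# format: token,user,uid,"group1,group2"
# adding admin to system:masters gives it the bootstrap cluster-admin binding
tSjgxDGkhDe8iweWU2fWC7a64clc7MR8,admin,admin,"system:masters"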

ps -ef | grep kube

/hyperkube federation-apiserver --admission-control=NamespaceLifecycle --advertise-address=172.31.8.29 --bind-address=0.0.0.0 --client-ca-file=/etc/federation/apiserver/ca.crt --etcd-servers=http://localhost:2379 --secure-port=443 --tls-cert-file=/etc/federation/apiserver/server.crt --tls-private-key-file=/etc/federation/apiserver/server.key

/opt/bin/kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --secure-port=443 --token-auth-file=/etc/kubernetes/token_auth_file --etcd-servers=http://172.31.8.29:2379 --logtostderr=true --service-cluster-ip-range=172.31.96.0/22 --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota,DenyEscalatingExec,SecurityContextDeny,ServiceAccount --service-node-port-range=30000-32767 --v=4

/opt/bin/kube-controller-manager --master=127.0.0.1:8080 --service_account_private_key_file=/var/run/kubernetes/apiserver.key --leader-elect=true --pod-eviction-timeout=1m0s --logtostderr=true --v=2

/opt/bin/kube-scheduler --master=127.0.0.1:8080 --leader-elect=true --logtostderr=true --v=2

/opt/bin/kubelet --address=0.0.0.0 --port=10250 --pod-infra-container-image=registry-dev.baiwei.baidu.com/google_containers/pause:latest --api-servers=http://172.31.8.29:8080 --logtostderr=true --cluster-dns=172.31.96.100 --cluster-domain=cluster-cloud92.com --hostname-override=bjkjy-ite-cloud92.bjkjy --resolv-conf=

/opt/bin/kube-proxy --master=http://172.31.8.29:8080 --logtostderr=true

kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         3d

Question:

How can I track down and fix this issue? How should I configure authorization on the host cluster for kubefed init? Do I have to use a CA for the host cluster?

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 19 (9 by maintainers)

Most upvoted comments

For the benefit of others who might run into this issue, I worked around the problem by running:

$ gcloud config set container/use_client_certificate True
$ export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True

Before running $ gcloud container clusters get-credentials ...

I can confirm I was having the same issue today running on GKE with kops, v1.7.8.

I used the workaround of @dgpc.

One thing that I noticed is that kubefed does not clean up after itself on failure; it just crashes, and I then had to manually clean out the Deployments, namespace and service that kubefed had created before I could rerun kubefed init. We should probably add logic so that kubefed cleans up after itself on a crash.
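
Until then, a rough manual cleanup before retrying looks something like this (a sketch; in the original report the namespace is federation-system and the kubeconfig entries are named after the federation, k8s-federation, so adjust the names to your own setup):

# Deleting the namespace also removes the deployments, services and secrets kubefed created in it.
kubectl delete namespace federation-system
# kubefed init also writes a context/cluster/user entry for the federation into the kubeconfig; drop those too.
kubectl config delete-context k8s-federation
kubectl config delete-cluster k8s-federation
kubectl config unset users.k8s-federation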

@dgpc's solution works on GKE, but you can also do

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]

as described in https://cloud.google.com/container-engine/docs/role-based-access-control
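
For the setup in the original report the forbidden user in the error message is admin, so the equivalent binding (substitute whatever identity your own credentials map to) would be roughly:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=admin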