kubernetes: error: You must be logged in to the server - the server has asked for the client to provide credentials - "kubectl logs" command gives error

Is this a BUG REPORT or FEATURE REQUEST?: /kind bug

What happened: We set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside a pod (using kubectl exec) also gives the following error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

What you expected to happen:

1. kubectl logs will display the logs of the pods
2. kubectl exec will work for the pods

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.4.0
VERSION_ID=1576.4.0
BUILD_ID=2017-12-06-0449
PRETTY_NAME="Container Linux by CoreOS 1576.4.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):

Linux node1.example.com 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU L5640 @ 2.27GHz GenuineIntel GNU/Linux

  • Install tools:
  1. Kubelet
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
  2. KubeletConfig
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
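With those flags in place, one sanity check worth running is whether the apiserver's client certificate actually chains to the CA that the kubelet trusts via clientCAFile. A sketch, using the paths from the flags above (adjust for your layout):

```shell
# Verify that the apiserver's kubelet client cert is signed by the CA
# the kubelet trusts (the clientCAFile from the KubeletConfiguration).
openssl verify -CAfile /etc/kubernetes/ca.crt /etc/kubernetes/secrets/apiserver.crt

# Also inspect the identity and expiry the cert presents; the kubelet
# authenticates and authorizes requests based on this subject.
openssl x509 -in /etc/kubernetes/secrets/apiserver.crt -noout -subject -enddate
```

If the verify step fails, the kubelet will reject the apiserver's logs/exec connections even though the pods themselves run fine.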

So what are we missing here? Thanks in advance 😃

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 10
  • Comments: 27 (3 by maintainers)

Most upvoted comments

Same issue - how about telling us how you solved it?

Issue has been solved 😃

Check the kubelet logs; they will tell you which flags are deprecated. Just remove those flags and put the corresponding settings into the kubelet config file.
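For anyone looking for the same hints, a sketch of how to spot the deprecated flags (assuming the kubelet runs as a systemd unit):

```shell
# Scan the kubelet journal for deprecation warnings about command-line flags.
journalctl -u kubelet --no-pager | grep -i 'deprecated'

# Typical output looks like:
#   Flag --allow-privileged has been deprecated, will be removed in a future version
# Each flagged option should be moved into the KubeletConfiguration file
# referenced by --config, then the kubelet restarted.
```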

It solved my problems 😃

/close


For anyone who hasn’t solved this, I’ve been upgrading our clusters from 1.9 to 1.10, changing kubelet from command line flags to a configuration file.

The default Authentication and Authorization to Kubelet’s API differs between cli args and config files, so you should make sure to set the “legacy defaults” in the config file to preserve existing behaviour.

This is a snippet from my kubelet config that restores the old defaults:

# Restore default authentication and authorization modes from K8s < 1.9
authentication:
  anonymous:
    enabled: true # Defaults to false as of 1.10
  webhook:
    enabled: false # Defaults to true as of 1.10
authorization:
  mode: AlwaysAllow # Defaults to webhook as of 1.10
readOnlyPort: 10255 # Used by heapster. Defaults to 0 (disabled) as of 1.10. Needed for metrics.

^^ Constructed from: https://github.com/kubernetes/kubernetes/blob/b71966aceaa3c38040236bc0decc6fad36eeb762/cmd/kubelet/app/options/options.go#L279-L291

This is a relevant issue that led me to this discovery: https://github.com/kubernetes/kubernetes/pull/59666


@lenartj It turned out that deleting the kube-apiserver pod was not enough to restart the apiserver for some reason. Although it had been deleted and recreated successfully, the apiserver process / docker container remained untouched, so that it hadn’t picked up the new certificates, yet. Using docker stop on the apiserver instance successfully restarted it and authorization was successful afterwards. Thanks for your help.
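A sketch of that restart step, assuming a Docker container runtime and the usual k8s_ container naming convention (adjust the filter for your setup):

```shell
# Find the ID of the apiserver container that survived the pod deletion.
cid=$(docker ps --format '{{.ID}} {{.Names}}' | awk '/kube-apiserver/ {print $1; exit}')

# Stop it; the kubelet will recreate the static pod with the new certificates.
docker stop "$cid"
```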

We got the same issue today on our self-hosted cluster, and in our case we found that admin.conf and .kube/config did not match with respect to the client-certificate-data and client-key-data keys. Try the steps below:

kubectl get po --kubeconfig=~/.kube/config (not working)
kubectl get po --kubeconfig=/etc/kubernetes/admin.conf (working)

We copied the admin.conf client-certificate-data and client-key-data values into .kube/config and it started working. We didn't understand why they mismatched, since neither file had been touched on the day of the issue. Hope this helps.

PS: Whole Cluster is at the latest version 1.18 when the issue surfaced
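A quick way to spot the mismatch described above (paths as in the comment; adjust to your setup):

```shell
# Extract the embedded client credentials from both kubeconfigs and compare.
# Any diff output means the two files have drifted apart.
grep -E 'client-(certificate|key)-data' ~/.kube/config > /tmp/local.creds
grep -E 'client-(certificate|key)-data' /etc/kubernetes/admin.conf > /tmp/admin.creds
diff /tmp/local.creds /tmp/admin.creds && echo "credentials match"
```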

@pehlert Thanks for sharing; we met the same problem. I renewed the nearly expired certificate apiserver-kubelet-client.crt and deleted the static apiserver pod. Then I left the company and began my Lunar New Year holiday. After that, the old certificate expired silently while 2019-nCoV was sweeping across China. One day during those bad days, someone reported that kubectl log/exec did not work, and the kubelet log said: certificate has expired or is not yet valid. We checked all the certificates but found that every one of them was valid. It kept puzzling me until I found that the apiserver process had never restarted even though we deleted the pod. Killing the process with docker stop <container_id> perfectly solved this problem just now! Thank you again!

@CaoShuFeng, in one case I’ve tracked down this issue to an expired apiserver-kubelet-client.crt. Renewed the cert, restarted apiserver and it went back to normal.
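To check for that failure mode, inspect the certificate's expiry (the path shown is the kubeadm default; adjust for your layout):

```shell
# Print the expiry date and fail if the certificate has already expired
# (-checkend 0 exits non-zero when the cert is past its notAfter date).
openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  -noout -enddate -checkend 0 \
  || echo "certificate has expired - renew it and restart the apiserver"
```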

@mmack I deployed a cluster using kubeadm just now, and I found that kubeadm gives apiserver-kubelet-client.crt the 'system:masters' group, so I think the permissions should be OK.

@ronakpandya7 Same issue here - how did you check your kubelet logs? With systemctl status kubelet or journalctl -u kubelet -f? I didn't get any useful information.