talos: Migration from kubeadm error

Bug Report

Migrating from kubeadm by following the documentation does not work.

Description

My Kubernetes cluster (3 control planes, 3 workers) is built on Fedora Server nodes with kubeadm and is very much vanilla; kubeadm is also used for Kubernetes upgrades. I’d like to migrate to Talos, but the first Talos control plane I deploy by following the documentation is unable to talk to the other kube-apiservers because of a token problem. It is marked as “Ready” on the Talos node console, yet it never becomes Ready in the “kubectl get nodes” output.
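
As a cross-check (a minimal sketch, reusing the node IP and hostname from this report), the discrepancy can be observed from both sides:

  kubectl get nodes controlplane01 -o wide       # cluster view: the node stays NotReady
  talosctl -n 192.168.199.10 service kubelet     # Talos view: kubelet service health
  talosctl -n 192.168.199.10 dashboard           # node console, which reports "Ready"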

Logs

I’m at step 10 of the “step-by-step guide”.

[thomas@master01 talos]$ talosctl -n 192.168.199.10 logs -k kube-system/kube-apiserver-controlplane01:kube-apiserver:ef7619c82095                                                                                                                                                                    
192.168.199.10: 2023-07-09T07:22:58.792041905Z stderr F I0709 07:22:58.791745       1 server.go:565] external host was not specified, using 192.168.199.10                                                                                                                                           
192.168.199.10: 2023-07-09T07:22:58.792358228Z stderr F I0709 07:22:58.792331       1 server.go:162] Version: v1.25.11
[...]
192.168.199.10: 2023-07-09T07:23:04.490815114Z stderr F W0709 07:23:04.490716       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.199.1 192.168.199.10 192.168.199.2 192.168.199.3]
192.168.199.10: 2023-07-09T07:23:04.494527837Z stderr F I0709 07:23:04.494475       1 controller.go:616] quota admission added evaluator for: endpoints
192.168.199.10: 2023-07-09T07:23:04.505095015Z stderr F I0709 07:23:04.505011       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
192.168.199.10: 2023-07-09T07:23:30.836015202Z stderr F E0709 07:23:30.835920       1 authentication.go:70] "Unable to authenticate the request" err="invalid bearer token"
192.168.199.10: 2023-07-09T07:23:54.963785968Z stderr F E0709 07:23:54.963694       1 authentication.go:70] "Unable to authenticate the request" err="invalid bearer token"
192.168.199.10: 2023-07-09T07:23:59.975387151Z stderr F E0709 07:23:59.975290       1 authentication.go:70] "Unable to authenticate the request" err="invalid bearer token"
192.168.199.10: 2023-07-09T07:24:00.825680313Z stderr F E0709 07:24:00.825583       1 authentication.go:70] "Unable to authenticate the request" err="invalid bearer token"
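
These “invalid bearer token” errors typically mean the new kube-apiserver cannot validate the service-account tokens presented by existing workloads, which during a kubeadm migration usually points at a mismatched service-account signing key (one possible cause among others). A minimal sketch for comparing the two keys, assuming the kubeadm key is at its default path and that openssl and yq (v4) are available, with controlplane.yaml being the machine config generated for the Talos node:

  # Public key derived from the kubeadm service-account signing key (run on an existing master)
  sudo openssl rsa -in /etc/kubernetes/pki/sa.key -pubout

  # Public key derived from the key embedded in the Talos machine config
  yq '.cluster.serviceAccount.key' controlplane.yaml | base64 -d | openssl rsa -pubout

  # The two public keys should match; if they differ, tokens signed by the existing
  # control plane will be rejected by the new kube-apiserver.
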
[thomas@master01 talos]$ talosctl -n 192.168.199.10 containers --kubernetes                                                                                                                                                                   
                                                                                                                                                                                                                                              
NODE             NAMESPACE   ID                                                                                           IMAGE                                              PID    STATUS                                                    
192.168.199.10   k8s.io      kube-system/calico-node-nrhlz                                                                registry.k8s.io/pause:3.6                          2309   SANDBOX_READY                                             
192.168.199.10   k8s.io      └─ kube-system/calico-node-nrhlz:install-cni:1237de1fbe34                                    docker.io/calico/cni:v3.25.0                       0      CONTAINER_EXITED                                          
192.168.199.10   k8s.io      └─ kube-system/calico-node-nrhlz:upgrade-ipam:4db387cdabf6                                   docker.io/calico/cni:v3.25.0                       0      CONTAINER_EXITED                                          
192.168.199.10   k8s.io      kube-system/kube-apiserver-controlplane01                                                    registry.k8s.io/pause:3.6                          1959   SANDBOX_READY                                             
192.168.199.10   k8s.io      └─ kube-system/kube-apiserver-controlplane01:kube-apiserver:ef7619c82095                     registry.k8s.io/kube-apiserver:v1.25.11            2050   CONTAINER_RUNNING                                         
192.168.199.10   k8s.io      kube-system/kube-controller-manager-controlplane01                                           registry.k8s.io/pause:3.6                          1964   SANDBOX_READY                                             
192.168.199.10   k8s.io      └─ kube-system/kube-controller-manager-controlplane01:kube-controller-manager:4c75a1a1328a   registry.k8s.io/kube-controller-manager:v1.25.11   0      CONTAINER_EXITED                                          
192.168.199.10   k8s.io      └─ kube-system/kube-controller-manager-controlplane01:kube-controller-manager:d59a7ed19fd6   registry.k8s.io/kube-controller-manager:v1.25.11   2198   CONTAINER_RUNNING
192.168.199.10   k8s.io      kube-system/kube-proxy-7l846                                                                 registry.k8s.io/pause:3.6                          2349   SANDBOX_READY
192.168.199.10   k8s.io      └─ kube-system/kube-proxy-7l846:kube-proxy:0d851f958148                                      registry.k8s.io/kube-proxy:v1.25.11                2429   CONTAINER_RUNNING
192.168.199.10   k8s.io      kube-system/kube-scheduler-controlplane01                                                    registry.k8s.io/pause:3.6                          1957   SANDBOX_READY
192.168.199.10   k8s.io      └─ kube-system/kube-scheduler-controlplane01:kube-scheduler:ce98ad8e46d2                     registry.k8s.io/kube-scheduler:v1.25.11            0      CONTAINER_EXITED
192.168.199.10   k8s.io      └─ kube-system/kube-scheduler-controlplane01:kube-scheduler:cf144644fe5a                     registry.k8s.io/kube-scheduler:v1.25.11            2200   CONTAINER_RUNNING
192.168.199.10   k8s.io      monitoring/global-monitoring-prometheus-node-exporter-5v84m                                  registry.k8s.io/pause:3.6                          2265   SANDBOX_READY
192.168.199.10   k8s.io      └─ monitoring/global-monitoring-prometheus-node-exporter-5v84m:node-exporter:672a7f39c064    quay.io/prometheus/node-exporter:v1.5.0            2379   CONTAINER_RUNNING
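
To understand the CONTAINER_EXITED entries above, the same logs syntax used earlier applies; the container IDs below are taken from this listing (for init containers such as install-cni, a clean exit is expected):

  talosctl -n 192.168.199.10 logs -k kube-system/calico-node-nrhlz:install-cni:1237de1fbe34
  talosctl -n 192.168.199.10 logs -k kube-system/kube-controller-manager-controlplane01:kube-controller-manager:4c75a1a1328a
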
[thomas@master01 k8s]$ kubectl get nodes
NAME                         STATUS     ROLES           AGE     VERSION
controlplane01               NotReady   control-plane   94m     v1.25.11
master01.k8s.lemarchand.io   Ready      control-plane   3y66d   v1.25.11
master02.k8s.lemarchand.io   Ready      control-plane   3y66d   v1.25.11
master03.k8s.lemarchand.io   Ready      control-plane   3y66d   v1.25.11
worker01.k8s.lemarchand.io   Ready      <none>          3y67d   v1.25.11
worker02.k8s.lemarchand.io   Ready      <none>          3y66d   v1.25.11
worker03.k8s.lemarchand.io   Ready      <none>          3y66d   v1.25.11
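
To dig into why controlplane01 stays NotReady while Talos reports it healthy, a minimal next step (same hostname and IP as above) would be to look at the node conditions and the kubelet logs:

  kubectl describe node controlplane01          # check the Conditions and Events sections
  talosctl -n 192.168.199.10 logs kubelet       # kubelet service logs on the Talos node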

Environment

  • Talos version: v1.4.6
  • Kubernetes version: v1.25.11
  • Platform: Proxmox

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 19 (18 by maintainers)

Most upvoted comments

@janvotava I’ve postponed another project to make some time for my Talos migration. You, sir, are a genius! Everything is now working as intended. @smira I believe it’s another small oversight in the documentation; I can create a PR if you wish to document this.

Thank you @janvotava, I’ll try that! Not before a few months at least, but I’ll reply here as soon as I know whether it solves my problem.