rke: FATA[0079] [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck

RKE version: rke version v0.1.0

Docker version (docker version output):

Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:38:45 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:38:45 2017
 OS/Arch:      linux/amd64

Operating system and kernel (cat /etc/os-release output):

NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial

Type/provider of hosts: private cloud

cluster.yml file:

---
auth:
  strategy: x509
  options:
    foo: bar

authorization:
  mode: rbac
  options:

network:
  plugin: flannel
  options:
    flannel_image: 10.143.145.213/rancher/flannel:v0.9.1
    flannel_cni_image: 10.143.145.213/rancher/flannel-cni:v0.2.0

nodes:
  - address: 100.104.179.42
    user: root
    role: [controlplane, etcd, worker]

services:
  etcd:
    image: 10.143.145.213/rancher/etcd:v2.3.7-13
  kube-api:
    image: 10.143.145.213/rancher/k8s:v1.8.3-rancher2
    service_cluster_ip_range: 10.233.0.0/18
    pod_security_policy: false
  kube-controller:
    image: 10.143.145.213/rancher/k8s:v1.8.3-rancher2
    cluster_cidr: 10.233.64.0/18
    service_cluster_ip_range: 10.233.0.0/18
  scheduler:
    image: 10.143.145.213/rancher/k8s:v1.8.3-rancher2
  kubelet:
    image: 10.143.145.213/rancher/k8s:v1.8.3-rancher2
    cluster_domain: cluster.local
    extra_args: {"fail-swap-on":"false"}
    cluster_dns_server: 10.233.0.3
    infra_container_image: 10.143.145.213/google_containers/pause-amd64:3.0
  kubeproxy:
    image: 10.143.145.213/rancher/k8s:v1.8.3-rancher2

system_images:
  alpine: 10.143.145.213/rancher/alpine
  nginx_proxy: 10.143.145.213/rancher/rke-nginx-proxy:v0.1.1
  cert_downloader: 10.143.145.213/rancher/rke-cert-deployer:v0.1.1
  service_sidekick_image: 10.143.145.213/rancher/rke-service-sidekick:v0.1.0
  kubedns_image: 10.143.145.213/google_containers/k8s-dns-kube-dns-amd64:1.14.5
  dnsmasq_image: 10.143.145.213/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
  kubedns_sidecar_image: 10.143.145.213/google_containers/k8s-dns-sidecar-amd64:1.14.5
  kubedns_autoscaler_image: 10.143.145.213/google_containers/cluster-proportional-autoscaler-amd64:1.0.0
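
Every image above points at a private registry (10.143.145.213). A minimal sanity check, assuming that address is reachable and configured as an insecure registry in the Docker daemon, is to pull one of the referenced images on the node before running rke:

 # Registry address and tags are taken verbatim from the cluster.yml above.
 docker pull 10.143.145.213/rancher/k8s:v1.8.3-rancher2
 docker pull 10.143.145.213/rancher/etcd:v2.3.7-13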

Steps to Reproduce: rke up
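
As a hedged aside: rke accepts a global --debug flag, so re-running with verbose logging may show exactly which health-check request is failing:

 # Assumption: --debug is a global flag, so it goes before the subcommand.
 rke --debug up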

Results:

INFO[0000] Building Kubernetes cluster                  
INFO[0000] [dialer] Setup tunnel for host [100.104.179.42] 
INFO[0000] [network] Deploying port listener containers 
INFO[0000] [network] Successfully started [rke-etcd-port-listener] container on host [100.104.179.42] 
INFO[0000] [network] Successfully started [rke-cp-port-listener] container on host [100.104.179.42] 
INFO[0001] [network] Successfully started [rke-worker-port-listener] container on host [100.104.179.42] 
INFO[0001] [network] Port listener containers deployed successfully 
INFO[0001] [network] Running all -> etcd port checks    
INFO[0001] [network] Successfully started [rke-port-checker] container on host [100.104.179.42] 
INFO[0001] [network] Successfully started [rke-port-checker] container on host [100.104.179.42] 
INFO[0002] [network] Running control plane -> etcd port checks 
INFO[0002] [network] Successfully started [rke-port-checker] container on host [100.104.179.42] 
INFO[0002] [network] Running workers -> control plane port checks 
INFO[0003] [network] Successfully started [rke-port-checker] container on host [100.104.179.42] 
INFO[0003] [network] Checking KubeAPI port Control Plane hosts 
INFO[0003] [network] Removing port listener containers  
INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [100.104.179.42] 
INFO[0003] [remove/rke-cp-port-listener] Successfully removed container on host [100.104.179.42] 
INFO[0004] [remove/rke-worker-port-listener] Successfully removed container on host [100.104.179.42] 
INFO[0004] [network] Port listener containers removed successfully 
INFO[0004] [certificates] Attempting to recover certificates from backup on host [100.104.179.42] 
INFO[0004] [certificates] Successfully started [cert-fetcher] container on host [100.104.179.42] 
INFO[0004] [certificates] No Certificate backup found on host [100.104.179.42] 
INFO[0004] [certificates] Generating kubernetes certificates 
INFO[0004] [certificates] Generating CA kubernetes certificates 
INFO[0005] [certificates] Generating Kubernetes API server certificates 
INFO[0005] [certificates] Generating Kube Controller certificates 
INFO[0006] [certificates] Generating Kube Scheduler certificates 
INFO[0006] [certificates] Generating Kube Proxy certificates 
INFO[0006] [certificates] Generating Node certificate   
INFO[0006] [certificates] Generating admin certificates and kubeconfig 
INFO[0007] [certificates] Temporarily saving certs to etcd host [100.104.179.42] 
INFO[0012] [certificates] Saved certs to etcd host [100.104.179.42] 
INFO[0012] [reconcile] Reconciling cluster state        
INFO[0012] [reconcile] This is newly generated cluster  
INFO[0012] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0028] Successfully Deployed local admin kubeconfig at [./.kube_config_cluster.yml] 
INFO[0028] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0028] [etcd] Building up Etcd Plane..              
INFO[0028] [etcd] Successfully started [etcd] container on host [100.104.179.42] 
INFO[0028] [etcd] Successfully started Etcd Plane..     
INFO[0028] [controlplane] Building up Controller Plane.. 
INFO[0028] [controlplane] Successfully started [kube-api] container on host [100.104.179.42] 
INFO[0028] [healthcheck] Start Healthcheck on service [kube-api] on host [100.104.179.42] 
FATA[0079] [controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: service [kube-api] is not healthy response code: [403], response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
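
The 403 shows the health check reached /healthz as system:anonymous, i.e. without presenting a client certificate the API server accepts. A hedged way to reproduce the probe by hand, assuming RKE's default certificate layout under /etc/kubernetes/ssl (file names may differ between versions):

 # A healthy, correctly authenticated API server answers "ok" rather than 403.
 curl --cacert /etc/kubernetes/ssl/kube-ca.pem \
      --cert /etc/kubernetes/ssl/kube-node.pem \
      --key /etc/kubernetes/ssl/kube-node-key.pem \
      https://127.0.0.1:6443/healthz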

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 4
  • Comments: 49 (11 by maintainers)

Most upvoted comments

@gtirloni Do you have SELinux enabled?
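
A quick way to answer that on the affected host (Ubuntu 16.04 normally ships AppArmor rather than SELinux, so these commands may simply report SELinux as absent or not installed):

 getenforce   # prints Enforcing / Permissive / Disabled when SELinux is present
 sestatus     # fuller report; provided by the selinux-utils/policycoreutils packages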