rancher: Error in importing existing cluster: "Failed to connect to proxy" error="websocket: bad handshake"

What kind of request is this (question/bug/enhancement/feature request): bug

Steps to reproduce (least amount of steps as possible): I start Rancher as a container using a bash script:

#!/bin/bash

if [ -n "$1" ];then
  HTTP_HOST_PORT=$1
else
  HTTP_HOST_PORT=8080
fi
if [ -n "$2" ];then
  HTTPS_HOST_PORT=$2
else
  HTTPS_HOST_PORT=8443
fi
printf 'Attention: using %s for HTTP port and %s for HTTPS port on the host\n' $HTTP_HOST_PORT $HTTPS_HOST_PORT

docker run --name rancher -d --restart=unless-stopped -p $HTTP_HOST_PORT:80 -p $HTTPS_HOST_PORT:443 -e HTTP_PROXY="http://<PROXY HOST>:<PROXY PORT>" -e HTTPS_PROXY="<PROXY HOST>:<PROXY PORT>" -e NO_PROXY=".internal.epo.org,luu154d,localhost,127.0.0.1,0.0.0.0,10.14.1.150,10.14.0.0/20,192.168.10.0/24" rancher/rancher:stable

if [ $? -eq 0 ];then
  HOST=`hostname`
  printf  '\n\nplease proceed to https://%s:%s on your web-browser in a minute or so...\n\n' $HOST $HTTPS_HOST_PORT
else
  docker rm rancher;
fi
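For reference, the script takes the host HTTP and HTTPS ports as optional arguments (the script name below is just a placeholder):

./start-rancher.sh 8080 8443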

Then I try to import an existing vanilla (bootstrapped by kubeadm) single-master, two-worker k8s cluster.
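For the import I use the registration command generated by the Rancher UI, which looks roughly like this (the registration token is elided; the curl --insecure variant is the one offered because the server uses a self-signed certificate):

curl --insecure -sfL https://luu154d:8443/v3/import/<REGISTRATION TOKEN>.yaml | kubectl apply -f -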

Result: Rancher starts fine; I can access the UI and reach the import-cluster page without any issues. However, once I apply the YAML file for the import, the cattle-node-agent pods log the following error and the state of the imported cluster is stuck at "Waiting":

time="2019-03-12T14:50:08Z" level=info msg="Connecting to wss://luu154d:8443/v3/connect with token 9rq9jk44tlfh62qbscnfxwb6zvts6bhv2nqmxqm67crzxfvtcpjp6q"
time="2019-03-12T14:50:08Z" level=info msg="Connecting to proxy" url="wss://luu154d:8443/v3/connect"
time="2019-03-12T14:50:08Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:50:08Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"

How can I increase the log level of the cattle-node-agent?
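The only thing I could think of trying (untested, and I am not sure CATTLE_DEBUG is actually honored by the agent) is to set a debug environment variable on the cattle-node-agent DaemonSet in the cattle-system namespace:

kubectl -n cattle-system set env daemonset/cattle-node-agent CATTLE_DEBUG=true
kubectl -n cattle-system rollout status daemonset/cattle-node-agent
kubectl -n cattle-system logs daemonset/cattle-node-agent -f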

The full log of the node agent:

INFO: Environment: CATTLE_ADDRESS=10.14.10.117 CATTLE_AGENT_CONNECT=true CATTLE_CA_CHECKSUM=52f2f34be065f70568c1ac39d1f786710e414c526282d911d9ff2a0dc0bba81f CATTLE_CLUSTER=false CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cored02 CATTLE_SERVER=https://luu154d:8443
INFO: Using resolv.conf: nameserver 10.0.10.11 nameserver 10.20.10.11 search internal.epo.org
INFO: https://luu154d:8443/ping is accessible
INFO: Value from https://luu154d:8443/v3/settings/cacerts is an x509 certificate
time="2019-03-12T14:49:48Z" level=info msg="Rancher agent version v2.1.7 is starting"
time="2019-03-12T14:49:48Z" level=info msg="Option worker=false"
time="2019-03-12T14:49:48Z" level=info msg="Option requestedHostname=cored02"
time="2019-03-12T14:49:48Z" level=info msg="Option customConfig=map[address:10.14.10.117 internalAddress: roles:[] label:map[]]"
time="2019-03-12T14:49:48Z" level=info msg="Option etcd=false"
time="2019-03-12T14:49:48Z" level=info msg="Option controlPlane=false"
time="2019-03-12T14:49:48Z" level=info msg="Listening on /tmp/log.sock"
time="2019-03-12T14:49:48Z" level=info msg="Connecting to wss://luu154d:8443/v3/connect with token 9rq9jk44tlfh62qbscnfxwb6zvts6bhv2nqmxqm67crzxfvtcpjp6q"
time="2019-03-12T14:49:48Z" level=info msg="Connecting to proxy" url="wss://luu154d:8443/v3/connect"
time="2019-03-12T14:49:48Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:49:48Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:49:58Z" level=info msg="Connecting to wss://luu154d:8443/v3/connect with token 9rq9jk44tlfh62qbscnfxwb6zvts6bhv2nqmxqm67crzxfvtcpjp6q"
time="2019-03-12T14:49:58Z" level=info msg="Connecting to proxy" url="wss://luu154d:8443/v3/connect"
time="2019-03-12T14:49:58Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:49:58Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:50:08Z" level=info msg="Connecting to wss://luu154d:8443/v3/connect with token 9rq9jk44tlfh62qbscnfxwb6zvts6bhv2nqmxqm67crzxfvtcpjp6q"
time="2019-03-12T14:50:08Z" level=info msg="Connecting to proxy" url="wss://luu154d:8443/v3/connect"
time="2019-03-12T14:50:08Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-03-12T14:50:08Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"

The log from the cattle-cluster-agent:

INFO: Environment: CATTLE_ADDRESS=172.17.129.13 CATTLE_CA_CHECKSUM=52f2f34be065f70568c1ac39d1f786710e414c526282d911d9ff2a0dc0bba81f CATTLE_CLUSTER=true CATTLE_INTERNAL_ADDRESS= CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7866788dfb-lwxrp CATTLE_SERVER=https://luu154d:8443
INFO: Using resolv.conf: nameserver 172.17.224.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local internal.epo.org options ndots:5
INFO: https://luu154d:8443/ping is accessible
INFO: Value from https://luu154d:8443/v3/settings/cacerts is an x509 certificate
time="2019-03-12T14:49:48Z" level=info msg="Rancher agent version v2.1.7 is starting"
time="2019-03-12T14:49:48Z" level=info msg="Listening on /tmp/log.sock"
time="2019-03-12T14:49:48Z" level=info msg="Connecting to wss://luu154d:8443/v3/connect/register with token 9rq9jk44tlfh62qbscnfxwb6zvts6bhv2nqmxqm67crzxfvtcpjp6q"
time="2019-03-12T14:49:48Z" level=info msg="Connecting to proxy" url="wss://luu154d:8443/v3/connect/register"

In the meantime, the log from the Rancher container:

2019/03/12 13:52:46 [INFO] Rancher version v2.1.7 is starting
2019/03/12 13:52:46 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2019/03/12 13:52:46 [INFO] Listening on /tmp/log.sock
2019/03/12 13:52:46 [INFO] [certificates] Generating CA kubernetes certificates
2019/03/12 13:52:47 [INFO] [certificates] Generating Kubernetes API server certificates
2019/03/12 13:52:47 [INFO] [certificates] Generating Kube Controller certificates
2019/03/12 13:52:48 [INFO] [certificates] Generating Kube Scheduler certificates
2019/03/12 13:52:49 [INFO] [certificates] Generating Kube Proxy certificates
2019/03/12 13:52:49 [INFO] [certificates] Generating Node certificate
2019/03/12 13:52:49 [INFO] [certificates] Generating admin certificates and kubeconfig
2019/03/12 13:52:50 [INFO] [certificates] Generating etcd-127.0.0.1 certificate and key
2019/03/12 13:52:51 [INFO] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
2019/03/12 13:52:51 [INFO] [certificates] Generating Kubernetes API server proxy client certificates
2019/03/12 13:52:52 [INFO] Running etcd --peer-client-cert-auth --client-cert-auth --initial-cluster-token=etcd-cluster-1 --initial-advertise-peer-urls=https://127.0.0.1:2380 --initial-cluster=etcd-master=https://127.0.0.1:2380 --initial-cluster-state=new --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --name=etcd-master --advertise-client-urls=https://127.0.0.1:2379,https://127.0.0.1:4001 --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --heartbeat-interval=500 --data-dir=/var/lib/rancher/etcd/ --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --listen-client-urls=https://0.0.0.0:2379 --cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --election-timeout=5000
2019-03-12 13:52:52.237069 I | etcdmain: etcd Version: 3.2.13
2019-03-12 13:52:52.237103 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2019-03-12 13:52:52.237117 I | etcdmain: Go Version: go1.11
2019-03-12 13:52:52.237123 I | etcdmain: Go OS/Arch: linux/amd64
2019-03-12 13:52:52.237129 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-03-12 13:52:52.238640 I | embed: peerTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-03-12 13:52:52.244279 I | embed: listening for peers on https://0.0.0.0:2380
2019-03-12 13:52:52.244354 I | embed: listening for client requests on 0.0.0.0:2379
2019-03-12 13:52:52.258768 I | etcdserver: name = etcd-master
2019-03-12 13:52:52.258794 I | etcdserver: data dir = /var/lib/rancher/etcd/
2019-03-12 13:52:52.258803 I | etcdserver: member dir = /var/lib/rancher/etcd/member
2019-03-12 13:52:52.258809 I | etcdserver: heartbeat = 500ms
2019-03-12 13:52:52.258815 I | etcdserver: election = 5000ms
2019-03-12 13:52:52.258821 I | etcdserver: snapshot count = 100000
2019-03-12 13:52:52.258834 I | etcdserver: advertise client URLs = https://127.0.0.1:2379,https://127.0.0.1:4001
2019-03-12 13:52:52.258850 I | etcdserver: initial advertise peer URLs = https://127.0.0.1:2380
2019-03-12 13:52:52.258862 I | etcdserver: initial cluster = etcd-master=https://127.0.0.1:2380
2019-03-12 13:52:52.262083 I | etcdserver: starting member e92d66acd89ecf29 in cluster 7581d6eb2d25405b
2019-03-12 13:52:52.262124 I | raft: e92d66acd89ecf29 became follower at term 0
2019-03-12 13:52:52.262136 I | raft: newRaft e92d66acd89ecf29 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-03-12 13:52:52.262151 I | raft: e92d66acd89ecf29 became follower at term 1
2019-03-12 13:52:52.289617 W | auth: simple token is not cryptographically signed
2019-03-12 13:52:52.299987 I | etcdserver: starting server... [version: 3.2.13, cluster version: to_be_decided]
2019-03-12 13:52:52.301394 I | embed: ClientTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-03-12 13:52:52.304324 I | etcdserver/membership: added member e92d66acd89ecf29 [https://127.0.0.1:2380] to cluster 7581d6eb2d25405b
2019-03-12 13:52:52.762853 I | raft: e92d66acd89ecf29 is starting a new election at term 1
2019-03-12 13:52:52.763039 I | raft: e92d66acd89ecf29 became candidate at term 2
2019-03-12 13:52:52.763088 I | raft: e92d66acd89ecf29 received MsgVoteResp from e92d66acd89ecf29 at term 2
2019-03-12 13:52:52.763574 I | raft: e92d66acd89ecf29 became leader at term 2
2019-03-12 13:52:52.763599 I | raft: raft.node: e92d66acd89ecf29 elected leader e92d66acd89ecf29 at term 2
2019-03-12 13:52:52.764455 I | etcdserver: setting up the initial cluster version to 3.2
2019-03-12 13:52:52.765562 N | etcdserver/membership: set the initial cluster version to 3.2
2019-03-12 13:52:52.765642 I | etcdserver/api: enabled capabilities for version 3.2
2019-03-12 13:52:52.765696 I | etcdserver: published {Name:etcd-master ClientURLs:[https://127.0.0.1:2379 https://127.0.0.1:4001]} to cluster 7581d6eb2d25405b
2019-03-12 13:52:52.765758 I | embed: ready to serve client requests
2019-03-12 13:52:52.766218 I | embed: serving client requests on [::]:2379
2019/03/12 13:52:52 [INFO] Running kube-apiserver --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --storage-backend=etcd3 --requestheader-username-headers= --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-client-ca-file= --proxy-client-key-file= --requestheader-extra-headers-prefix= --proxy-client-cert-file= --authorization-mode=Node,RBAC --requestheader-allowed-names= --service-node-port-range=30000-32767 --etcd-prefix=/registry --requestheader-group-headers= --allow-privileged=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --endpoint-reconciler-type=lease --insecure-bind-address=127.0.0.1 --secure-port=6443 --service-cluster-ip-range=10.43.0.0/16 --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --etcd-servers=https://127.0.0.1:2379 --advertise-address=10.43.0.1 --insecure-port=0 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --bind-address=127.0.0.1 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem -v=1 --logtostderr=false --alsologtostderr=false
2019/03/12 13:52:52 [INFO] Activating driver import
2019/03/12 13:52:52 [INFO] Activating driver import done
2019/03/12 13:52:52 [INFO] Activating driver rke
2019/03/12 13:52:52 [INFO] Activating driver rke done
2019/03/12 13:52:52 [INFO] Activating driver gke
2019/03/12 13:52:52 [INFO] Activating driver gke done
2019/03/12 13:52:52 [INFO] Activating driver aks
2019/03/12 13:52:52 [INFO] Activating driver aks done
2019/03/12 13:52:52 [INFO] Activating driver eks
2019/03/12 13:52:52 [INFO] Activating driver eks done
[restful] 2019/03/12 13:52:58 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
[restful] 2019/03/12 13:52:58 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/03/12 13:53:01 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
[restful] 2019/03/12 13:53:01 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
2019/03/12 13:53:02 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version?timeout=32s: net/http: TLS handshake timeout
2019/03/12 13:53:02 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version?timeout=30s: net/http: TLS handshake timeout
2019/03/12 13:53:09 [INFO] Running kube-controller-manager --leader-elect=true --enable-hostpath-provisioner=false --v=2 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-account-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --configure-cloud-routes=false --allocate-node-cidrs=true --allow-untagged-cloud=true --node-monitor-grace-period=40s --cluster-cidr=10.42.0.0/16 --service-cluster-ip-range=10.43.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --address=0.0.0.0 --cloud-provider= --pod-eviction-timeout=5m0s --use-service-account-credentials=true -v=1 --logtostderr=false --alsologtostderr=false --controllers * --controllers -resourcequota --controllers -service
2019/03/12 13:53:09 [INFO] Running in single server mode, will not peer connections
2019/03/12 13:53:09 [INFO] Creating CRD authconfigs.management.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD apps.project.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD apprevisions.project.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD catalogs.management.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD pipelineexecutions.project.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD clusteralerts.management.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD clusterevents.management.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD pipelinesettings.project.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD pipelines.project.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD clusterloggings.management.cattle.io
2019/03/12 13:53:09 [INFO] Creating CRD sourcecodecredentials.project.cattle.io
E0312 13:53:09.951511       6 autoregister_controller.go:190] v3.project.cattle.io failed with : apiservices.apiregistration.k8s.io "v3.project.cattle.io" already exists
E0312 13:53:09.951570       6 autoregister_controller.go:190] v3.management.cattle.io failed with : apiservices.apiregistration.k8s.io "v3.management.cattle.io" already exists
2019/03/12 13:53:10 [INFO] Creating CRD clusterregistrationtokens.management.cattle.io
2019/03/12 13:53:10 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io
2019/03/12 13:53:10 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
E0312 13:53:10.569975       6 leaderelection.go:270] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
2019/03/12 13:53:10 [INFO] Creating CRD sourcecoderepositories.project.cattle.io
2019/03/12 13:53:10 [INFO] Creating CRD clusters.management.cattle.io
2019/03/12 13:53:11 [INFO] Creating CRD composeconfigs.management.cattle.io
2019/03/12 13:53:11 [INFO] Creating CRD dynamicschemas.management.cattle.io
2019/03/12 13:53:11 [INFO] Creating CRD globalrolebindings.management.cattle.io
2019/03/12 13:53:12 [INFO] Creating CRD globalroles.management.cattle.io
2019/03/12 13:53:12 [INFO] Creating CRD groupmembers.management.cattle.io
2019/03/12 13:53:12 [INFO] Creating CRD groups.management.cattle.io
2019/03/12 13:53:12 [INFO] Creating CRD listenconfigs.management.cattle.io
2019/03/12 13:53:12 [INFO] Creating CRD nodedrivers.management.cattle.io
2019/03/12 13:53:13 [INFO] Creating CRD nodepools.management.cattle.io
2019/03/12 13:53:13 [INFO] Creating CRD nodetemplates.management.cattle.io
2019/03/12 13:53:13 [INFO] Creating CRD nodes.management.cattle.io
2019/03/12 13:53:13 [INFO] Creating CRD notifiers.management.cattle.io
2019/03/12 13:53:13 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2019/03/12 13:53:14 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2019/03/12 13:53:14 [INFO] Creating CRD preferences.management.cattle.io
2019/03/12 13:53:14 [INFO] Creating CRD projectalerts.management.cattle.io
2019/03/12 13:53:14 [INFO] Creating CRD projectloggings.management.cattle.io
2019/03/12 13:53:14 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2019/03/12 13:53:15 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2019/03/12 13:53:15 [INFO] Creating CRD projects.management.cattle.io
2019/03/12 13:53:15 [INFO] Creating CRD roletemplates.management.cattle.io
2019/03/12 13:53:15 [INFO] Creating CRD settings.management.cattle.io
2019/03/12 13:53:15 [INFO] Creating CRD templatecontents.management.cattle.io
2019/03/12 13:53:16 [INFO] Creating CRD templates.management.cattle.io
2019/03/12 13:53:16 [INFO] Creating CRD templateversions.management.cattle.io
2019/03/12 13:53:16 [INFO] Creating CRD tokens.management.cattle.io
2019/03/12 13:53:16 [INFO] Creating CRD userattributes.management.cattle.io
2019/03/12 13:53:16 [INFO] Creating CRD users.management.cattle.io
2019/03/12 13:53:17 [INFO] Starting API controllers
2019/03/12 13:53:18 [INFO] Starting catalog controller
2019/03/12 13:53:18 [INFO] Starting management controllers
2019/03/12 13:53:18 [INFO] Reconciling GlobalRoles
2019/03/12 13:53:18 [INFO] Listening on :443
2019/03/12 13:53:18 [INFO] Listening on :80
2019/03/12 13:53:18 [INFO] Creating admin
2019/03/12 13:53:18 [INFO] Creating user
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-admin for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating catalogs-use
2019/03/12 13:53:18 [INFO] Creating users-manage
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-user for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-catalogs-use for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating roles-manage
2019/03/12 13:53:18 [INFO] Creating authn-manage
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-users-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating podsecuritypolicytemplates-manage
2019/03/12 13:53:18 [INFO] Creating user-base
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-authn-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-roles-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-podsecuritypolicytemplates-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating clusters-create
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-user-base for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating nodedrivers-manage
2019/03/12 13:53:18 [INFO] Creating catalogs-manage
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-clusters-create for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating settings-manage
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-nodedrivers-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-catalogs-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Reconciling RoleTemplates
2019/03/12 13:53:18 [INFO] Creating clusterroletemplatebindings-view
2019/03/12 13:53:18 [INFO] Creating services-view
2019/03/12 13:53:18 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-settings-manage for corresponding GlobalRole
2019/03/12 13:53:18 [INFO] Creating configmaps-view
2019/03/12 13:53:18 [INFO] Creating project-owner
2019/03/12 13:53:18 [INFO] Creating services-manage
2019/03/12 13:53:18 [INFO] Creating serviceaccounts-view
2019/03/12 13:53:18 [INFO] Creating edit
2019/03/12 13:53:18 [INFO] Creating cluster-member
2019/03/12 13:53:18 [INFO] Creating workloads-manage
2019/03/12 13:53:18 [INFO] Creating persistentvolumeclaims-manage
2019/03/12 13:53:18 [INFO] Creating projects-create
2019/03/12 13:53:18 [INFO] Creating clusterroletemplatebindings-manage
2019/03/12 13:53:18 [INFO] Creating workloads-view
2019/03/12 13:53:18 [INFO] Creating secrets-manage
2019/03/12 13:53:18 [INFO] Creating cluster-admin
2019/03/12 13:53:18 [INFO] Creating admin
2019/03/12 13:53:18 [INFO] Creating project-member
2019/03/12 13:53:18 [INFO] Creating ingress-view
2019/03/12 13:53:18 [INFO] Creating configmaps-manage
2019/03/12 13:53:18 [INFO] Creating persistentvolumeclaims-view
2019/03/12 13:53:18 [INFO] Creating serviceaccounts-manage
2019/03/12 13:53:18 [INFO] Creating view
2019/03/12 13:53:18 [INFO] Creating cluster-owner
2019/03/12 13:53:18 [INFO] Creating projects-view
2019/03/12 13:53:18 [INFO] Creating nodes-view
2019/03/12 13:53:18 [INFO] Creating create-ns
2019/03/12 13:53:18 [INFO] Creating projectroletemplatebindings-manage
2019/03/12 13:53:18 [INFO] Creating nodes-manage
2019/03/12 13:53:18 [INFO] Creating projectroletemplatebindings-view
2019/03/12 13:53:18 [INFO] Creating storage-manage
2019/03/12 13:53:18 [INFO] Creating read-only
2019/03/12 13:53:18 [INFO] Creating ingress-manage
2019/03/12 13:53:18 [INFO] Creating secrets-view
2019/03/12 13:53:19 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding globalrolebinding-nxst8
2019/03/12 13:53:19 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding globalrolebinding-nxst8 for user user-wh8jg with role cattle-globalrole-admin
2019/03/12 13:53:19 [INFO] Creating node driver amazonec2
2019/03/12 13:53:19 [INFO] Creating node driver azure
2019/03/12 13:53:19 [INFO] Creating node driver digitalocean
2019/03/12 13:53:19 [INFO] Creating node driver exoscale
2019/03/12 13:53:19 [INFO] Creating node driver openstack
2019/03/12 13:53:19 [INFO] Creating node driver otc
2019/03/12 13:53:19 [INFO] Creating node driver packet
2019/03/12 13:53:19 [INFO] Creating node driver rackspace
2019/03/12 13:53:19 [INFO] Creating node driver softlayer
2019/03/12 13:53:19 [INFO] Creating node driver aliyunecs
2019/03/12 13:53:19 [INFO] Creating node driver vmwarevsphere
2019/03/12 13:53:19 [INFO] Rancher startup complete
2019/03/12 13:53:20 [INFO] uploading digitaloceanConfig to node schema
2019/03/12 13:53:20 [INFO] uploading digitaloceanConfig to node schema
2019/03/12 13:53:20 [INFO] uploading vmwarevsphereConfig to node schema
2019/03/12 13:53:20 [INFO] uploading vmwarevsphereConfig to node schema
2019/03/12 13:53:20 [INFO] uploading azureConfig to node schema
2019/03/12 13:53:20 [INFO] uploading azureConfig to node schema
2019/03/12 13:53:20 [INFO] uploading amazonec2Config to node schema
2019/03/12 13:53:20 [INFO] uploading amazonec2Config to node schema
2019/03/12 13:53:24 [INFO] Updating catalog library
2019/03/12 13:53:43 [INFO] Catalog sync done. 30 templates created, 0 templates updated, 0 templates deleted
2019/03/12 13:53:43 [ERROR] CatalogController library [catalog] failed with : failed to sync templates. Resetting commit. Multiple error occurred: [Error in HTTP GET of [file://../logo.png], error: Get file://../logo.png: unsupported protocol scheme "file"]
2019/03/12 13:53:46 [INFO] Updating catalog library
2019/03/12 13:53:46 [INFO] Catalog sync done. 0 templates created, 0 templates updated, 0 templates deleted
2019/03/12 13:56:08 [INFO] Creating token for user user-wh8jg
time="2019-03-12 13:56:08" level=info msg="Telemetry Client v0.5.1"
time="2019-03-12 13:56:08" level=info msg="Listening on 0.0.0.0:8114"
2019-03-12 14:02:54.463941 I | mvcc: store.index: compact 2287
2019-03-12 14:02:54.538828 I | mvcc: finished scheduled compaction at 2287 (took 70.660391ms)
E0312 14:07:28.665611       6 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
2019-03-12 14:07:54.476136 I | mvcc: store.index: compact 2618
2019-03-12 14:07:54.479496 I | mvcc: finished scheduled compaction at 2618 (took 1.733203ms)
2019-03-12 14:12:54.485603 I | mvcc: store.index: compact 2948
2019-03-12 14:12:54.490816 I | mvcc: finished scheduled compaction at 2948 (took 2.924353ms)
E0312 14:16:21.766168       6 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
2019-03-12 14:17:54.496335 I | mvcc: store.index: compact 3278
2019-03-12 14:17:54.499541 I | mvcc: finished scheduled compaction at 3278 (took 1.733445ms)
2019-03-12 14:22:54.503617 I | mvcc: store.index: compact 3608
2019-03-12 14:22:54.508755 I | mvcc: finished scheduled compaction at 3608 (took 3.296617ms)
E0312 14:24:49.866194       6 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
2019-03-12 14:27:54.510300 I | mvcc: store.index: compact 3938
2019-03-12 14:27:54.515050 I | mvcc: finished scheduled compaction at 3938 (took 2.297792ms)
2019-03-12 14:32:54.519819 I | mvcc: store.index: compact 4269
2019-03-12 14:32:54.523311 I | mvcc: finished scheduled compaction at 4269 (took 1.938135ms)
E0312 14:33:22.932669       6 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
2019-03-12 14:37:54.526065 I | mvcc: store.index: compact 4598
2019-03-12 14:37:54.531256 I | mvcc: finished scheduled compaction at 4598 (took 2.972723ms)
2019-03-12 14:42:54.534727 I | mvcc: store.index: compact 4929
2019-03-12 14:42:54.539116 I | mvcc: finished scheduled compaction at 4929 (took 2.545068ms)
E0312 14:46:57.011772       6 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
2019-03-12 14:47:54.541416 I | mvcc: store.index: compact 5259
2019-03-12 14:47:54.546836 I | mvcc: finished scheduled compaction at 5259 (took 3.263732ms)
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Creating namespace c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Creating Default project for cluster c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Creating namespace p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Creating creator projectRoleTemplateBinding for user user-wh8jg for project p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Creating System project for cluster c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on project p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Creating namespace p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-d5tlm-projectowner
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Creating creator clusterRoleTemplateBinding for user user-wh8jg for cluster c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Creating creator projectRoleTemplateBinding for user user-wh8jg for project p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for membership in project p-d5tlm for subject user-wh8jg
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating clusterRole c-4ks5j-clusterowner
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on project p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole c-4ks5j-clustermember
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on cluster
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster c-4ks5j for subject user-wh8jg
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-9pkqp-projectowner
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating clusterRoleBinding for membership in cluster c-4ks5j for subject user-wh8jg
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for membership in project p-9pkqp for subject user-wh8jg
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace c-4ks5j
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Updating clusterRoleBinding clusterrolebinding-vrtwh for cluster membership in cluster c-4ks5j for subject user-wh8jg
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role project-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-wh8jg with role cluster-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role project-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating role admin in namespace p-d5tlm
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-wh8jg with role cluster-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating role admin in namespace p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-project-rbac-create] Updating project p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role project-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role admin in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role admin in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace p-9pkqp
2019/03/12 14:49:26 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-wh8jg with role project-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-wh8jg with role cluster-owner in namespace
2019/03/12 14:49:26 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-4ks5j
2019/03/12 14:49:48 [INFO] 2019/03/12 14:49:48 http: multiple response.WriteHeader calls
2019/03/12 14:49:48 [INFO] 2019/03/12 14:49:48 http: multiple response.WriteHeader calls
2019/03/12 14:49:48 [INFO] Handling backend connection request [c-4ks5j]
2019/03/12 14:49:58 [INFO] 2019/03/12 14:49:58 http: multiple response.WriteHeader calls
2019/03/12 14:49:58 [INFO] 2019/03/12 14:49:58 http: multiple response.WriteHeader calls
2019/03/12 14:50:08 [INFO] 2019/03/12 14:50:08 http: multiple response.WriteHeader calls
2019/03/12 14:50:08 [INFO] 2019/03/12 14:50:08 http: multiple response.WriteHeader calls

Other details that may be helpful: the kube-apiserver startup command from the Rancher container is already shown in the log above.

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown bottom left in the UI): v2.1.7
  • Installation option (single install/HA): Single install.

This is about importing an existing cluster. Both Rancher and the existing cluster run on VMs.

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Imported
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): The cluster to be imported runs on VMs, each with 2 CPUs and 16 GB of memory. The Rancher container runs on another VM with 2 CPUs and 3 GB of memory.
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version (use docker version):
Client:
 Version:      17.09.1-ce
 API version:  1.32
 Go version:   go1.8.5
 Git commit:   f4ffd2511ce9
 Built:        Mon Feb  5 09:44:15 2018
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.1-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.5
 Git commit:   f4ffd2511ce9
 Built:        Sat Oct 21 17:27:07 2017
 OS/Arch:      linux/amd64
 Experimental: false

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 18 (9 by maintainers)

Most upvoted comments

I had the same issue and resorted to dropping Calico for Flannel to make it work. Did anyone manage to make it work using Calico?