rke: "rke up" got "ssh: rejected: administratively prohibited (open failed)"

rke version: rke version v0.0.7-dev

problem: when running “./rke -d up” with the file “cluster.yml” in the same folder, I got:

INFO[0000] [certificates] Generating kubernetes certificates
INFO[0000] [certificates] Generating CA kubernetes certificates
…
INFO[0003] [certificates] Deploying kubernetes certificates to Cluster nodes
DEBU[0003] [certificates] Pulling Certificate downloader Image on host [node1]
FATA[0008] Can’t pull Docker image rancher/rke-cert-deployer:0.1.0 for host [node1]: error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=rancher%2Frke-cert-deployer&tag=0.1.0: Error connecting to Docker socket on host [node1]: ssh: rejected: administratively prohibited (open failed)
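For context, the error comes from rke forwarding the node’s Docker socket over SSH. A minimal sketch of the same mechanism outside rke (the hostname, user, and local socket path are placeholders; forwarding Unix sockets requires OpenSSH 6.7+ on both ends):

```shell
# rke talks to the Docker API on each node by forwarding the remote
# /var/run/docker.sock over SSH. Reproduce that forward manually:
ssh -nNT -L /tmp/node1-docker.sock:/var/run/docker.sock yourusername@node1 &

# Talk to the remote daemon through the forwarded socket. If the node's
# sshd has AllowTcpForwarding disabled, this fails with the same
# "administratively prohibited (open failed)" error rke reports.
docker -H unix:///tmp/node1-docker.sock version
```

If the manual forward fails the same way, the problem is in the node’s sshd configuration rather than in rke itself.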

I tried to fix it by setting “AllowTcpForwarding yes” in the ssh server config, but that did not help.
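For reference, the sshd-side settings the tunnel needs (edit on each node, then restart sshd, since sshd only reads the file at startup; “PermitTunnel yes” is the extra setting suggested in the answers below):

```
# /etc/ssh/sshd_config on each node
AllowTcpForwarding yes
PermitTunnel yes
```

Also check for a “Match” block further down in sshd_config that might override these values for the connecting user.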

Any suggestions?

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 23 (10 by maintainers)

Most upvoted comments

If you run rke on CentOS 7, you should not use the root user to open the SSH tunnel. You can try the following steps on all nodes:

  1. Update OpenSSH to 7.4 and Docker to v1.12.6.
  2. Add “AllowTcpForwarding yes” and “PermitTunnel yes” to /etc/ssh/sshd_config, then restart the sshd service.
  3. Make sure the host that runs rke can ssh to all nodes without a password.
  4. Run “groupadd docker” to create the docker group, if it does not already exist.
  5. Run “useradd -g docker yourusername” to create the yourusername user with docker as its group.
  6. Set MountFlags=shared in docker.service (vi /xxx/xxx/docker.service).
  7. Run “su yourusername” to switch users, then restart the Docker service; in the yourusername session, docker.sock will then be created at /var/run/docker.sock.
  8. In cluster.yml, set the ssh user to yourusername (in the nodes section):
  nodes:
  - address: x.x.x.x
    ...
    user: yourusername
  - address: x.x.x.x
    ...
    user: yourusername
  9. In cluster.yml, configure the kubelet to use the systemd cgroup driver (in the services section):
  services:
    kubelet:
      image: rancher/k8s:v1.8.3-rancher2
      extra_args: {"cgroup-driver":"systemd","fail-swap-on":"false"}
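The MountFlags change in step 6 can also be made with a systemd drop-in instead of editing the unit file in place (the drop-in path below is the standard systemd convention, not from the original answer):

```
# /etc/systemd/system/docker.service.d/mount-flags.conf
[Service]
MountFlags=shared
```

After creating the file, run “systemctl daemon-reload” and then restart Docker; a drop-in survives package upgrades, whereas edits to the shipped docker.service may be overwritten.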

Now you can run “rke -d up” to set up your k8s cluster. If you hit “Failed to Save Kubernetes certificates: Timeout waiting for K8s to be ready” while running rke, see #121.

Is this on CentOS/RHEL by any chance?