openshift-ansible: Wait for control plane pods to appear fails

Description

I am unable to complete my deployment with the deploy_cluster.yml playbook. It fails at the "Wait for control plane pods to appear" task:

TASK [openshift_control_plane : Wait for control plane pods to appear] **********************************************
Sunday 10 March 2019  07:51:12 -0400 (0:00:00.044)       0:01:37.098 ********** 
FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).



Version

  • ansible --version:

[root@oc-master-node-1 ~]# ansible --version
ansible 2.7.8
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

  • git describe output (running from a git clone):
[root@oc-master-node-1 openshift-ansible]# git describe
openshift-ansible-3.11.93-1-2-gcac5fb1

Steps To Reproduce

  1. ansible-playbook -i /etc/ansible/inventory.download playbooks/deploy_cluster.yml

Expected Results

I expect the cluster to be deployed without any issues.

Observed Results

FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).
failed: [192.168.1.180] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-oc-master-node-1.cluster.local -o json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to the server oc-master-node-1:8443 was refused - did you specify the right host or port?\n", "stdout": ""}}
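
The refused connection on :8443 means the master API static pod never came up. A few commands that can help narrow down why, on a containerized master (a diagnostic sketch; it assumes an origin 3.11 master where the node service is named origin-node — adjust names to your setup):

docker ps -a | grep k8s_                            # were the control plane containers created at all?
journalctl -u origin-node --no-pager | tail -50     # node/kubelet logs usually say why they were not
ls /etc/origin/node/pods/                           # static pod manifests the installer laid down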

Additional Information

[root@oc-master-node-1 openshift-ansible]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[root@oc-master-node-1 openshift-ansible]# cat /etc/ansible/inventory.download 
[OSEv3:children]
masters
nodes
etcd


[masters]
192.168.1.180 openshift_ip=192.168.1.180 openshift_schedulable=true 

[etcd]
192.168.1.180 openshift_ip=192.168.1.180

[nodes]
192.168.1.180 openshift_ip=192.168.1.180 openshift_node_group_name='node-config-master'
192.168.1.182 openshift_ip=192.168.1.182 openshift_node_group_name='node-config-compute'
192.168.1.183 openshift_ip=192.168.1.183 openshift_node_group_name='node-config-compute'
192.168.1.182 openshift_ip=192.168.1.182 openshift_node_group_name='node-config-infra'

[OSEv3:vars]
openshift_additional_repos=[{'id': 'centos-paas', 'name': 'centos-paas', 'baseurl' :'https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311', 'gpgcheck' :'0', 'enabled' :'1'}]

openshift_set_node_ip=true
ansible_ssh_user=root
enable_excluders=False
enable_docker_excluder=False
ansible_service_broker_install=False

containerized=True
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

deployment_type=origin
openshift_deployment_type=origin

template_service_broker_selector={"region":"infra"}
openshift_metrics_image_version="v3.11"
openshift_logging_image_version="v3.11"
openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
logging_elasticsearch_rollout_override=false
osm_use_cockpit=true

openshift_metrics_install_metrics=True 
openshift_logging_install_logging=True

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_file='/etc/origin/master/htpasswd'

openshift_public_hostname=console.mwimp.local
openshift_master_default_subdomain=apps.mwimp.local

openshift_master_api_port=8443
openshift_master_console_port=8443
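
Note that 192.168.1.182 is listed twice under [nodes] (node-config-compute and node-config-infra). Ansible merges duplicate host entries in a group, so only one openshift_node_group_name should survive. One way to check what the host actually ends up with (a quick check, not part of the original report):

ansible-inventory -i /etc/ansible/inventory.download --host 192.168.1.182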

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 18 (2 by maintainers)

Most upvoted comments

I was able to resolve this by downgrading Ansible. I've attached my hosts and inventory files; my ansible version is below.

inventory.txt hosts.txt

[root@oc-master-node-1 openshift-ansible]# ansible --version
ansible 2.6.9
  config file = /root/openshift-ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
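
For reference, one way to pin Ansible back to 2.6.x on CentOS 7 (a sketch; it assumes the 2.6.9 rpm is still available in an enabled repo, otherwise pip works):

yum downgrade ansible-2.6.9      # rpm route, if the older package is still in the repos
pip install 'ansible==2.6.9'     # alternative: install from PyPI instead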