openshift-ansible: FAILED - RETRYING: Verify API Server
Version
# oc version
oc v3.7.1+c2ce2c0-1
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO
Steps To Reproduce
Run the playbook like this:
sudo ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml
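If it helps to narrow down where this happens, the same run can be repeated with more verbosity (a standard ansible-playbook flag, using the same inventory path as above):
sudo ansible-playbook -vvv -i /etc/ansible/hosts playbooks/deploy_cluster.yml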
Observed Results
FAILED - RETRYING: Verify API Server (103 retries left). Result was: {
    "attempts": 18,
    "changed": false,
    "cmd": [
        "curl",
        "--silent",
        "--tlsv1.2",
        "--cacert",
        "/etc/origin/master/ca-bundle.crt",
        "https://master01.ex.xxx.es:8443/healthz/ready"
    ],
    "delta": "0:00:00.034341",
    "end": "2018-03-21 17:01:12.216737",
    "invocation": {
        "module_args": {
            "_raw_params": "curl --silent --tlsv1.2 --cacert /etc/origin/master/ca-bundle.crt https://master01.ex.xxx.es:8443/healthz/ready",
            "_uses_shell": false,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": false
        }
    },
    "msg": "non-zero return code",
    "rc": 6,
    "retries": 121,
    "start": "2018-03-21 17:01:12.182396",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": []
}
Additional Information
That's my inventory hosts file:
[OSEv3:children]
masters
etcd
nodes
[OSEv3:vars]
openshift_master_default_subdomain=apps.ex.xxx.es
ansible_ssh_user=root
ansible_become=yes
openshift_master_cluster_method=native
openshift_master_cluster_hostname=master01.ex.xxx.es
openshift_master_cluster_public_hostname=master01.ex.xxx.es
deployment_type=origin
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
openshift_master_api_port=8443
openshift_master_console_port=8443
openshift_disable_check=memory_availability,disk_availability,package_version
[nodes]
master openshift_schedulable=True ansible_connection=local ansible_become=yes
[masters]
master ansible_connection=local ansible_become=yes
[etcd]
master ansible_connection=local ansible_become=yes
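A quick sanity check for this kind of inventory (the hostname is the one set in openshift_master_cluster_hostname above) is to confirm on the master that the FQDN resolves to the machine's routable address and not to the loopback interface:
# hostname -f
# getent hosts master01.ex.xxx.es
# grep -n master01 /etc/hosts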
That's what happens if I run:
# curl --tlsv1.2 --cacert /etc/origin/master/ca-bundle.crt https://master01.ex.xxx.es:8443/healthz/ready
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
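One way to see which names the served certificate actually covers is to dump its Subject Alternative Name entries with standard openssl tooling (same host and port as the curl call above):
# echo | openssl s_client -connect master01.ex.xxx.es:8443 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
If the name being contacted is missing from that list, curl fails with exactly this error (51).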
About this issue
- State: closed
- Created 6 years ago
- Comments: 21 (1 by maintainers)
@taegyunum Thank you! It wasn't a multiple-interface issue but rather the Vagrant issue that has also been discovered.
Essentially it’s as follows:
My /etc/hosts file had 127.0.0.1 master-1-openshift.isc.local master-1-openshift as the first line. Whenever etcd would try to reach out to master-1-openshift.isc.local, etcd would send the request to 127.0.0.1 because of the hosts file. This caused it to not work on single-master clusters because it couldn't establish trust. The certificate being returned was for master-1-openshift.isc.local instead of for 127.0.0.1. This turned into an issue of DNS mixed with SSL intermingling and not letting things continue. Thanks all for the help.
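In other words, the fix amounts to making the FQDN resolve to the host's routable address instead of 127.0.0.1. A minimal sketch of the /etc/hosts change (192.168.121.10 is only a placeholder for the real host IP):
# grep master-1-openshift /etc/hosts        <- before the fix
127.0.0.1   master-1-openshift.isc.local master-1-openshift
# grep master-1-openshift /etc/hosts        <- after the fix
192.168.121.10   master-1-openshift.isc.local master-1-openshift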