openshift-ansible: openshift/origin-node:v3.9 not found

Description

Using an inventory based on the provided inventory/hosts.example fails during deployment: the `openshift_node : Check status of node image pre-pull` task fails, complaining that the image openshift/origin-node:v3.9 was not found.

This is being deployed to CentOS atomic hosts.

Version
$ ansible --version
ansible 2.5.3
  config file = /root/openshift-ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

$ git describe
openshift-ansible-3.10.0-0.56.0-6-g2722c90
Steps To Reproduce
  1. Spin up CentOS atomic hosts
  2. Use inventory provided below.
  3. Run the playbook with:
$ ansible-playbook -i inventory/doug.inventory playbooks/prerequisites.yml playbooks/deploy_cluster.yml 
Expected Results

Playbook completes without failure.

Observed Results

Full log available in this paste.

Running with this command:

$ ansible-playbook -i inventory/doug.inventory playbooks/prerequisites.yml playbooks/deploy_cluster.yml 

Errors out with:

TASK [openshift_node : Check status of node image pre-pull] *********************************************************************************************************************************************************************************
Friday 01 June 2018  11:31:00 +0000 (0:00:01.582)       0:01:53.209 *********** 
fatal: [ose3-master.test.example.com]: FAILED! => {"ansible_job_id": "554524704637.16931", "attempts": 1, "changed": false, "finished": 1, "msg": "Error pulling docker.io/openshift/origin-node - code: None message: manifest for docker.io/openshift/origin-node:v3.9 not found"}
fatal: [ose3-node1.test.example.com]: FAILED! => {"ansible_job_id": "286554451804.16763", "attempts": 1, "changed": false, "finished": 1, "msg": "Error pulling docker.io/openshift/origin-node - code: None message: manifest for docker.io/openshift/origin-node:v3.9 not found"}
fatal: [ose3-infra.test.example.com]: FAILED! => {"ansible_job_id": "838496782697.16866", "attempts": 1, "changed": false, "finished": 1, "msg": "Error pulling docker.io/openshift/origin-node - code: None message: manifest for docker.io/openshift/origin-node:v3.9 not found"}

PLAY RECAP **********************************************************************************************************************************************************************************************************************************
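For context, the failing task is pulling a tag derived from openshift_release. A simplified sketch of that derivation (this is NOT the actual openshift-ansible code; the "prefix the release with v" convention is my assumption based on the error above):

```shell
# Simplified sketch (not the real installer logic): the release "3.9"
# appears to be turned into the image tag "v3.9" by prefixing a "v".
release="3.9"
case "$release" in
  v*) tag="$release" ;;
  *)  tag="v$release" ;;
esac
image="docker.io/openshift/origin-node:${tag}"
echo "$image"
```

If Docker Hub carries tags like v3.9.0 but no plain v3.9 for openshift/origin-node (which the "manifest ... not found" message suggests was the case at the time), any derivation of this shape would request a nonexistent tag.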
Inventory file

Inventory file based on the ./inventory/hosts.example inventory as provided in this repo.

ose3-master.test.example.com ansible_host=192.168.1.247
ose3-infra.test.example.com ansible_host=192.168.1.121
ose3-node1.test.example.com ansible_host=192.168.1.56
ose3-lb.test.example.com ansible_host=192.168.1.150

[masters]
ose3-master.test.example.com

[etcd]
ose3-master.test.example.com

[nodes]
ose3-master.test.example.com
ose3-infra.test.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
ose3-node1.test.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"

[nfs]
ose3-master.test.example.com

[lb]
ose3-lb.test.example.com

[OSEv3:children]
masters
nodes
etcd
lb
nfs

[OSEv3:vars]
openshift_deployment_type=origin
openshift_release="3.9"
openshift_master_default_subdomain=apps.test.example.com
openshift_master_cluster_hostname=ose3-lb.test.example.com
openshift_disable_check=disk_availability,memory_availability #,docker_image_availability
debug_level=2
ansible_ssh_user=centos
ansible_become=yes
ansible_ssh_private_key_file=/path/to/id_vm_rsa

Notably, including or excluding docker_image_availability in openshift_disable_check didn’t appear to make a difference.
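One possible workaround (untested here, and the exact tag is an assumption) would be to pin the image tag explicitly under [OSEv3:vars], so the installer pulls a tag that actually exists on Docker Hub instead of deriving v3.9 from openshift_release:

```ini
[OSEv3:vars]
# Hypothetical pin: openshift_image_tag overrides the tag derived from
# openshift_release; v3.9.0 is assumed to exist on Docker Hub.
openshift_image_tag=v3.9.0
```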

Additional Information


  • Your operating system and version, e.g. RHEL 7.2, Fedora 23 ($ cat /etc/redhat-release)
[centos@ose3-master ~]$ cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 
[centos@ose3-master ~]$ uname -a
Linux ose3-master.test.example.com 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 17 (7 by maintainers)

Most upvoted comments

I get the same issue on the latest release-3.9, even though an earlier checkout of the release-3.9 branch previously worked. This shouldn’t be closed.

For what it’s worth, this is a recently introduced issue. A friend of mine has been having successful deploys from a forked openshift-ansible; I took the base commit of that fork, and rolling back to that commit gives me a successful deploy with the same inventory. It’s fairly old (late April), but this is the version I used for a successful deploy:

$ git describe
openshift-ansible-3.9.24-1-22-g56fdecf
$ git rev-parse HEAD
56fdecfbf6fe411455b91789bec0bab0abd91845

The master branch should reflect the correct version in the hosts.example inventory file: it is configured for v3.9 (instead of v3.10), so apparently it doesn’t work out of the box.

I got it working by simply switching to the release-3.9 branch and running the appropriate playbooks, e.g. prerequisites.yml and deploy_cluster.yml.
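As a transcript sketch, that workaround amounts to the following (branch name from this thread; the inventory path is the one used in the original report):

$ cd openshift-ansible
$ git checkout release-3.9
$ ansible-playbook -i inventory/doug.inventory playbooks/prerequisites.yml playbooks/deploy_cluster.yml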

A similar issue was closed too quickly without any reason given: https://github.com/openshift/openshift-ansible/issues/8582. I am using clean RHEL 7.5 AMIs. I really cannot understand all these commits on release-3.9 without real testing of the changes; even the examples aren’t working.