origin: Access fails with 'oc cluster up --public-hostname=' and redirects to 127.0.0.1

Version

oc v3.10.0+dd10d17

Steps To Reproduce

oc cluster up --public-hostname='<public ip>'

Current Result
oc v3.10.0+dd10d17
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
openshift v3.10.0+a5e4ac9-10
kubernetes v1.10.0+b81c8f8

When I access https://<public ip>:8443, it redirects to https://127.0.0.1:8443.

Expected Result

When I access https://<public ip>:8443, I should be able to reach the server normally, without being redirected to 127.0.0.1.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 9
  • Comments: 28 (9 by maintainers)

Most upvoted comments

  1. oc cluster down
  2. reboot the host
  3. rm -rf openshift.local.clusterup
  4. rm -rf .kube/
  5. oc cluster up --public-hostname=10.71.33.193
  6. oc cluster status
  7. https://10.71.33.193:8443/console/
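A minimal sketch of the same reset as plain shell commands, assuming the default openshift.local.clusterup location and the example IP 10.71.33.193 used above (substitute your own public IP):

oc cluster down
# reboot the host, then clear the state left over from the previous local-only cluster
rm -rf openshift.local.clusterup
rm -rf ~/.kube/
oc cluster up --public-hostname=10.71.33.193
oc cluster status
# the console should now answer at https://10.71.33.193:8443/console/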

Hello,

Yes, this does happen if you first start it up as a local cluster and then realise you need to access it via a private/public IP address.

To fix it, delete the openshift.local.clusterup directory, or better still, delete the entire top-level directory openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.

Then run tar -zxvf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz again.

And start up the cluster using: oc cluster up --public-hostname=<Public IP>

You should now be able to access it via: https://<Public IP>:8443/console
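Putting that together, a rough sketch of the re-extract-and-restart sequence (the PATH line is my assumption about where the extracted oc binary lives; adjust for your layout):

rm -rf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit
tar -zxvf openshift-origin-server-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd openshift-origin-server-v3.11.0-0cbc58b-linux-64bit
export PATH="$PWD:$PATH"   # assumption: oc sits at the top of the extracted directory
oc cluster up --public-hostname=<Public IP>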

Hi all, in the end I accessed the console successfully through https://<public-ip>:8443/console/.

Even after you have run oc cluster up --public-hostname=<server-ip> --routing-suffix=<server-ip>.nip.io, you will need to access the console as https://<public-ip>:8443/console/ and not https://<public-ip>:8443.

The default route https://<public-ip>:8443/ will still bring you to 127.0.0.1.

The route https://<public-ip>:8443/console/ will bring you to the dashboard.
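One way to confirm this behaviour is to inspect the redirect with curl (illustrative commands; -k skips verification of the self-signed certificate, and the Accept header mimics a browser, since the redirect may only be issued to HTML clients):

curl -kI -H 'Accept: text/html' https://<public-ip>:8443/          # Location: header points at 127.0.0.1:8443/console/
curl -kI -H 'Accept: text/html' https://<public-ip>:8443/console/  # served directly, no redirect to 127.0.0.1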

Issue still present. Working on:

[root@localhost ~]# hostnamectl
   Static hostname: localhost.localdomain
         Icon name: computer-vm
           Chassis: vm
        Machine ID: e0f9f4d274fa4ef2b4e2b1670dafa645
           Boot ID: ef53e7fd0e984ea198d4878310678bc8
    Virtualization: microsoft
  Operating System: Fedora 29.20181210.0 (Atomic Host)
       CPE OS Name: cpe:/o:fedoraproject:fedora:29
            Kernel: Linux 4.19.6-300.fc29.x86_64
      Architecture: x86-64

After:

oc cluster up --public-hostname='okd'

the web console is still bound to 127.0.0.1 (https://okd:8443 is redirected to https://127.0.0.1:8443):

[root@localhost ~]# oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
kubernetes v1.11.0+d4cacc0

After then running:

fgrep -RIl 127.0.0.1:8443 openshift.local.clusterup/ | xargs sed -i 's/127.0.0.1:8443/okd:8443/g'

the web console remains bound to 127.0.0.1 (instead of okd):

[root@localhost ~]# oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
kubernetes v1.11.0+d4cacc0

https://okd:8443 still gets redirected to https://127.0.0.1:8443

Any hint?

Thanks, @jinalshah.

Going to https://<Public IP>:8443/console now works.

Though https://<Public IP>:8443/ is still redirected to https://127.0.0.1:8443/console

So, somehow it’s the /console that matters.

It seems to be a leftover from a previous attempt: if I delete the whole ~/openshift.local.clusterup and retry, it works as expected.

Reproduction steps:

  1. oc cluster up
  2. oc cluster down
  3. oc cluster up --public-hostname=<public_hostname>
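And a minimal sketch of the clean retry described here (the path and the hostname placeholder are assumptions; adjust to your environment):

oc cluster down
rm -rf ~/openshift.local.clusterup   # remove the leftover state from the earlier local-only run
oc cluster up --public-hostname=<public_hostname>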