origin: openshift start with config: Failed to get supported resources from server

In a devenv, when I write the config and then try to start with it, it fails.

Version
$ oc version
oc v1.3.0-alpha.0-211-gf29f072
kubernetes v1.3.0-alpha.1-331-g0522e63
Steps To Reproduce
  1. openshift start --write-config=/etc/origin
  2. openshift start --master-config=/etc/origin/master/master-config.yaml --node-config=/etc/origin/node-ip-172-18-11-169.ec2.internal/node-config.yaml
Current Result
W0429 14:04:21.569311    3266 nodecontroller.go:671] Missing timestamp for Node ip-172-18-11-169.ec2.internal. Assuming now as a timestamp.
I0429 14:04:21.570197    3266 event.go:216] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-18-11-169.ec2.internal", UID:"ip-172-18-11-169.ec2.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ip-172-18-11-169.ec2.internal event: Registered Node ip-172-18-11-169.ec2.internal in NodeController
I0429 14:04:21.722832    3266 trace.go:57] Trace "etcdHelper::Create *api.Event" (started 2016-04-29 14:04:21.405469265 +0000 UTC):
[46.11µs] [46.11µs] Object encoded
[47.76µs] [1.65µs] Version checked
[317.227055ms] [317.179295ms] Object created
[317.323785ms] [96.73µs] END
I0429 14:04:21.722958    3266 trace.go:57] Trace "Create /api/v1/namespaces/default/events" (started 2016-04-29 14:04:21.384655935 +0000 UTC):
[21.474µs] [21.474µs] About to convert to expected version
[164.153µs] [142.679µs] Conversion done
[185.893µs] [21.74µs] About to store object in database
[338.204129ms] [338.018236ms] Object stored in database
[338.21553ms] [11.401µs] Self-link added
[338.280201ms] [64.671µs] END
I0429 14:04:21.869623    3266 trace.go:57] Trace "Get /api/v1/namespaces/default/services/docker-registry" (started 2016-04-29 14:04:21.567915062 +0000 UTC):
[301.688028ms] [301.688028ms] END
I0429 14:04:21.918322    3266 trace.go:57] Trace "etcdHelper::Create *api.Event" (started 2016-04-29 14:04:21.654379985 +0000 UTC):
[52.176µs] [52.176µs] Object encoded
[54.011µs] [1.835µs] Version checked
[263.816876ms] [263.762865ms] Object created
[263.911827ms] [94.951µs] END
I0429 14:04:21.918432    3266 trace.go:57] Trace "Create /api/v1/namespaces/default/events" (started 2016-04-29 14:04:21.643835663 +0000 UTC):
[22.39µs] [22.39µs] About to convert to expected version
[195.588µs] [173.198µs] Conversion done
[211.724µs] [16.136µs] About to store object in database
[274.511501ms] [274.299777ms] Object stored in database
[274.522982ms] [11.481µs] Self-link added
[274.576988ms] [54.006µs] END
F0429 14:04:21.942789    3266 master.go:92] Failed to get supported resources from server: the server has asked for the client to provide credentials

… and it halts.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 3
  • Comments: 17 (9 by maintainers)

Most upvoted comments

You’re likely running against an existing etcd with existing service account tokens, but with a new config whose service-account token-verifying keys are new. Some of the controllers use those tokens and will exit fatally if they are rejected. In general, the token controller should probably ensure tokens are valid against the current config, and the controller setup should work harder to give each controller a token it can use on startup.
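One way to check for the mismatch described above is to compare the RSA moduli of the token-verifying public keys from the old and new config. This is a sketch: the demo keys generated here stand in for the `serviceaccounts.public.key` files under each config's master directory, whose exact paths depend on your `--write-config` layout.

```shell
# Demo stand-ins for <old-config>/master/serviceaccounts.public.key and
# <new-config>/master/serviceaccounts.public.key.
openssl genrsa -out old.key 2048 2>/dev/null
openssl genrsa -out new.key 2048 2>/dev/null
openssl rsa -in old.key -pubout -out old-serviceaccounts.public.key 2>/dev/null
openssl rsa -in new.key -pubout -out new-serviceaccounts.public.key 2>/dev/null

# If the two Modulus= lines differ, tokens already stored in etcd were signed
# against a different key and the master will reject them with
# "the server has asked for the client to provide credentials".
openssl rsa -pubin -in old-serviceaccounts.public.key -modulus -noout
openssl rsa -pubin -in new-serviceaccounts.public.key -modulus -noout
```

Identical output lines mean the keys match and existing tokens should still verify; any difference means they cannot.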

We would accomplish this via the API, not by talking directly to etcd. The main question is whether we want a general solution in the token-generating controller, a specific solution in the startup steps that create clients for the other controllers, or both.

Went on a field trip with this today: if you are restoring a cluster by restoring an etcd backup, you’ll hit this issue, with origin-master-controllers failing to start up. Perhaps something needs to be documented.
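For the restore scenario above, one mitigation is to capture the token-signing key files alongside the etcd snapshot, since restoring the data without the matching keys reproduces this failure. The sketch below uses placeholder files and illustrative paths (the real ones come from your master's config directory), so it only demonstrates the bundling step:

```shell
# Demo setup: stand-ins for the real snapshot and key files.
mkdir -p demo/master demo/backup
echo "placeholder snapshot" > demo/etcd-snapshot.db
echo "placeholder priv key" > demo/master/serviceaccounts.private.key
echo "placeholder pub key"  > demo/master/serviceaccounts.public.key

# The actual idea: bundle the etcd snapshot WITH the serviceaccounts signing
# keys it depends on, so a later restore keeps tokens and keys in sync.
cp demo/etcd-snapshot.db                   demo/backup/
cp demo/master/serviceaccounts.private.key demo/backup/
cp demo/master/serviceaccounts.public.key  demo/backup/
tar czf demo/master-backup.tar.gz -C demo backup
```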

I had this issue on a new install of Origin after I started it once, then tweaked the master config to change console hostnames and named certs, then started it again.

The solution was to delete the openshift directory completely (I had installed from the tar.gz binaries on the GitHub releases page) and install it again fresh. I had kept my custom master config from before, though, and started it up the first time with that custom config.
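A gentler alternative to a full reinstall, suggested by the key-mismatch explanation above, is to preserve the original signing keys across a config regeneration. This is only a sketch: the directory layout and file names are demo stand-ins modeled on what `openshift start --write-config` produces, and step 2 simulates the regeneration rather than running it.

```shell
# Demo stand-in for the config directory written by --write-config.
mkdir -p origin/master
echo "original signing key" > origin/master/serviceaccounts.private.key

# 1. Back up the token-signing key before regenerating the config.
cp origin/master/serviceaccounts.private.key sa.private.key.bak

# 2. Regenerating the config (e.g. `openshift start --write-config=...`)
#    mints a fresh key; simulated here:
echo "freshly generated key" > origin/master/serviceaccounts.private.key

# 3. Restore the original key so tokens already in etcd keep verifying.
cp sa.private.key.bak origin/master/serviceaccounts.private.key
```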