kubernetes: kube-proxy downgrade job applies the 1.8 `ADMISSION_CONTROL` env to the 1.7 master, causing the apiserver to crash-loop

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: From the kube-proxy downgrade test suite: https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-downgrade-kube-proxy-ds&width=20.

After the downgrade, the test consistently fails at “Waiting for new master to respond to API requests”.

Checking the apiserver’s log shows the following error (sample):

failed to initialize admission: Unknown admission plugin: Priority
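
For context, the apiserver refuses to start when its `--admission-control` flag names a plugin it does not register, and the Priority plugin only exists from 1.8 onward. A minimal illustration (the plugin list here is abbreviated and illustrative, not the exact kube-env value):

```bash
# Illustrative only: a 1.7 kube-apiserver handed a 1.8-era admission list.
# "Priority" is not registered in the 1.7 binary, so startup fails and the
# static pod crash-loops.
kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,Priority
# => failed to initialize admission: Unknown admission plugin: Priority
```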

What you expected to happen:

How to reproduce it (as minimally and precisely as possible): Downgrade the master from 1.8+ to 1.7.x.

@kubernetes/sig-api-machinery-bugs (Hoping I’m pinging the right group.)

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 16 (16 by maintainers)

Most upvoted comments

Is that even supported? I wouldn’t expect it to be.

In a real scenario, the metadata would have stayed locked to the vars set for 1.7, and the Priority admission plugin would not have been enabled in 1.8

Seems like the e2e downgrade tests should resemble reality more closely

Does the e2e test framework install 1.7, then upgrade, then downgrade? Or does it install 1.8, then downgrade?

Do admission plugins get saved in a metadata var and persisted?

Did some digging: this flag is generated from the `ADMISSION_CONTROL` env var, which is stored on the GCE metadata server as part of kube-env: https://github.com/kubernetes/kubernetes/blob/6737f505de1926fea162557fd5e38e3186e4e3e1/cluster/gce/gci/configure-helper.sh#L1341-L1342
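
Paraphrasing the linked lines, the helper simply forwards whatever kube-env contains (a sketch of the relevant logic, not a verbatim copy of the script):

```bash
# Sketch of cluster/gce/gci/configure-helper.sh behavior: kube-env (fetched
# from the GCE metadata server and sourced earlier during node startup)
# provides ADMISSION_CONTROL, which is passed through to the apiserver
# unmodified.
if [[ -n "${ADMISSION_CONTROL:-}" ]]; then
  params+=" --admission-control=${ADMISSION_CONTROL}"
fi
```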

Since we don’t explicitly change `ADMISSION_CONTROL` during the downgrade, the same set of admission controllers is passed to the 1.7 apiserver, which causes this failure.

I think we should fix the e2e downgrade framework then…
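
One possible shape for such a fix, as a hypothetical sketch (the helper name and the hard-coded plugin filter are mine; the framework would need the equivalent wherever it rewrites master metadata for the downgrade):

```bash
# Hypothetical: rewrite ADMISSION_CONTROL before handing kube-env to the
# 1.7 master, dropping plugins the 1.7 apiserver does not register.
strip_18_only_plugins() {
  local list="$1"
  # Split on commas, drop 1.8-only entries (Priority here), rejoin.
  echo "${list}" | tr ',' '\n' | grep -v '^Priority$' | paste -sd, -
}

ADMISSION_CONTROL="$(strip_18_only_plugins "${ADMISSION_CONTROL}")"
```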