kubernetes: [Failing Test] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] (ci-cluster-api-provider-gcp-make-conformance-v1alpha3-k8s-ci-artifacts)
Which jobs are failing:
ci-cluster-api-provider-gcp-make-conformance-v1alpha3-k8s-ci-artifacts
Which test(s) are failing:
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
There is another issue #96055 for the other failing tests
Since when has it been failing: Nov 12, 7 AM PST. The repository diff over that window is https://github.com/kubernetes-sigs/cluster-api-provider-gcp/compare/7a53ab5f1...2f98888b6, but those changes seem unrelated.
Testgrid link: https://testgrid.k8s.io/sig-release-master-informing#capg-conformance-v1alpha3-k8s-master
Reason for failure:
Looking at this particular test, it looks like #96369 introduced the `"a":null` value in crd_publish_openapi.go that this test is struggling with.
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Nov 12 21:01:53.236: failed to create random CR {"kind":"E2e-test-crd-publish-openapi-1604-crd","apiVersion":"crd-publish-openapi-test-unknown-in-nested.example.com/v1","metadata":{"name":"test-cr"},"spec":{"a":null,"b":[{"c":"d"}]}} for CRD that allows unknown properties in a nested object: error running /home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://34.120.129.82:443 --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8616 --namespace=crd-publish-openapi-8616 create -f -:
Command stdout:
stderr:
error: error validating "STDIN": error validating data: unknown object type "nil" in E2e-test-crd-publish-openapi-1604-crd.spec.a; if you choose to ignore these errors, turn validation off with --validate=false
error:
exit status 1
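The `unknown object type "nil"` error comes from kubectl's pre-1.20 client-side validation rejecting the null value for `spec.a`. The snippet below is a minimal, illustrative sketch of that behavior — it is NOT the real kubectl validator, and `validateNoNulls` is a hypothetical name — it walks the decoded CR spec from the log above and reports the first null it finds, mirroring the error message.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateNoNulls is an illustrative stand-in (not the actual kubectl
// code) for the pre-1.20 client-side schema check that reports
// `unknown object type "nil"` when it encounters a JSON null.
func validateNoNulls(path string, v interface{}) error {
	switch t := v.(type) {
	case nil:
		return fmt.Errorf("unknown object type \"nil\" in %s", path)
	case map[string]interface{}:
		for k, child := range t {
			if err := validateNoNulls(path+"."+k, child); err != nil {
				return err
			}
		}
	case []interface{}:
		for i, child := range t {
			if err := validateNoNulls(fmt.Sprintf("%s[%d]", path, i), child); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	// The spec portion of the random CR from the failure log above.
	raw := `{"a":null,"b":[{"c":"d"}]}`
	var spec map[string]interface{}
	_ = json.Unmarshal([]byte(raw), &spec)
	fmt.Println(validateNoNulls("spec", spec)) // unknown object type "nil" in spec.a
}
```

A server running 1.20's structural-schema validation accepts this CR, which is why the test only fails when newer tests run against older clusters.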
Anything else we need to know: Example spyglass links:
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-cluster-api-provider-gcp-make-conformance-v1alpha3-k8s-ci-artifacts/1326976422375329792
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-cluster-api-provider-gcp-make-conformance-v1alpha3-k8s-ci-artifacts/1327037072438988800
- https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-cluster-api-provider-gcp-make-conformance-v1alpha3-k8s-ci-artifacts/1327340295993430016
/sig api-machinery
/area provider-gcp
/priority important-soon
/cc @kubernetes/ci-signal @gautierdelorme
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 19 (19 by maintainers)
Yeah, this test is actually expected to fail on pre-1.20 server versions. I would recommend running the tests from the branch matching the server version.
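Acting on that recommendation means deriving the test branch from the server's reported version. A minimal sketch of that mapping, assuming a semver-style `gitVersion` string as input (`branchForServer` is a hypothetical helper, not part of any existing script):

```go
package main

import (
	"fmt"
	"regexp"
)

// branchForServer derives the kubernetes/kubernetes release branch whose
// e2e tests should run against a given server version, per the advice
// above. Falls back to "master" if the version string is unparseable.
func branchForServer(gitVersion string) string {
	m := regexp.MustCompile(`^v(\d+)\.(\d+)\.`).FindStringSubmatch(gitVersion)
	if m == nil {
		return "master"
	}
	return fmt.Sprintf("release-%s.%s", m[1], m[2])
}

func main() {
	// A 1.19.x cluster should be exercised with release-1.19 tests,
	// not master tests (which assume 1.20 server behavior).
	fmt.Println(branchForServer("v1.19.4")) // release-1.19
}
```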
/sig cluster-lifecycle
The last time I looked at these jobs, they were running tests from `master` against old server versions. That is unlikely to work well. cc @detiber
What I can tell is that the script is downloading the conformance tests itself, even though the job configuration explicitly passes in the `master` branch of `kubernetes/kubernetes`.
I can't speak to this job specifically, but it seems like there's a lot of confusion as to where and how the different sources of truth are pieced together. I'd suggest making the scripts either download or accept a pre-provided directory, so it's clear what is in use…
@amwat @cheftako I think this repo is intended to track the most recent stable branch going forward and probably shouldn't be in this dashboard, but looking for confirmation.
It was recently updated to use the `1.19` version marker instead of `latest`:
- commit: https://github.com/kubernetes-sigs/cluster-api-provider-gcp/commit/52bea00a992e5c6b630913b05fee254b9ef597ce
- PR where this happened: https://github.com/kubernetes-sigs/cluster-api-provider-gcp/pull/324
From the Slack discussion linked, it looks like it was using `1.17`…? before.
I think it may be in this script? https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/hack/ci/e2e-conformance.sh#L27
According to the docs, that script is how the Conformance tests are run.
from the log of a failed run:
that indicates it is running tests against a 1.19.x cluster