serving: Stuck in RevisionMissing and "Unknown" state
In what area(s)?
/area networking
What version of Knative?
v0.11.0
Expected Behavior
Service is created successfully
Actual Behavior
Resources stuck in various bad states:
$ k get all
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/knative-service-hl52c-deployment-76bd8bfbdb-vphtq    2/2     Running   0          6m7s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/knative-service-hl52c-deployment    1/1     1            1           6m7s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/knative-service-hl52c-deployment-76bd8bfbdb   1         1         1       6m7s

NAME                                          URL                                                  LATESTCREATED           LATESTREADY   READY     REASON
service.serving.knative.dev/knative-service   http://knative-service.knative-1-8760.example.com   knative-service-hl52c                 Unknown   RevisionMissing

NAME                                                 CONFIG NAME       K8S SERVICE NAME   GENERATION   READY     REASON
revision.serving.knative.dev/knative-service-hl52c   knative-service                      1            Unknown

NAME                                                 LATESTCREATED           LATESTREADY   READY     REASON
configuration.serving.knative.dev/knative-service    knative-service-hl52c                 Unknown

NAME                                        URL                                                  READY     REASON
route.serving.knative.dev/knative-service   http://knative-service.knative-1-8760.example.com   Unknown   RevisionMissing
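The Route and the Service both report RevisionMissing, but k get all does not show the Knative-internal resources (PodAutoscaler, ServerlessService) or the serving controllers, so the checks below should surface more detail. This is a sketch; it assumes the default knative-serving install namespace and the kpa/sks short names registered by the Knative Serving CRDs:

$ k get kpa,sks -n knative-1-8760
$ k describe revision knative-service-hl52c -n knative-1-8760
$ k logs -n knative-serving deploy/controller
$ k logs -n knative-serving deploy/autoscaler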
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1
  kind: Revision
  metadata:
    creationTimestamp: "2019-12-19T23:19:27Z"
    generateName: knative-service-
    generation: 1
    labels:
      serving.knative.dev/configuration: knative-service
      serving.knative.dev/configurationGeneration: "1"
      serving.knative.dev/service: knative-service
    name: knative-service-hl52c
    namespace: knative-1-8760
    ownerReferences:
    - apiVersion: serving.knative.dev/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Configuration
      name: knative-service
      uid: 3f1f7137-89b6-45bc-88b0-babccd4d607a
    resourceVersion: "1065"
    selfLink: /apis/serving.knative.dev/v1/namespaces/knative-1-8760/revisions/knative-service-hl52c
    uid: 4677f591-05b9-447b-8b29-4abb8d4ff663
  spec:
    containerConcurrency: 0
    containers:
    - image: gcr.io/istio-testing/app:latest
      name: user-container
      readinessProbe:
        successThreshold: 1
        tcpSocket:
          port: 0
      resources: {}
    timeoutSeconds: 300
  status:
    conditions:
    - lastTransitionTime: "2019-12-19T23:19:28Z"
      reason: Deploying
      severity: Info
      status: Unknown
      type: Active
    - lastTransitionTime: "2019-12-19T23:19:28Z"
      status: Unknown
      type: ContainerHealthy
    - lastTransitionTime: "2019-12-19T23:19:28Z"
      status: Unknown
      type: Ready
    - lastTransitionTime: "2019-12-19T23:19:28Z"
      status: "True"
      type: ResourcesAvailable
    imageDigest: gcr.io/istio-testing/app@sha256:80790eb8ab6a4453e3fa8c2bc84e2d6b0cef095c4ceec79bc17cbf01fceb72cc
    logUrl: http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'4677f591-05b9-447b-8b29-4abb8d4ff663',type:phrase))))
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1
  kind: Configuration
  metadata:
    annotations:
      serving.knative.dev/forceUpgrade: "true"
    creationTimestamp: "2019-12-19T23:19:27Z"
    generation: 1
    labels:
      serving.knative.dev/route: knative-service
      serving.knative.dev/service: knative-service
    name: knative-service
    namespace: knative-1-8760
    ownerReferences:
    - apiVersion: serving.knative.dev/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: knative-service
      uid: 0d7555d6-d266-4886-884a-440e789f1a49
    resourceVersion: "1042"
    selfLink: /apis/serving.knative.dev/v1/namespaces/knative-1-8760/configurations/knative-service
    uid: 3f1f7137-89b6-45bc-88b0-babccd4d607a
  spec:
    template:
      metadata:
        creationTimestamp: null
      spec:
        containerConcurrency: 0
        containers:
        - image: gcr.io/istio-testing/app:latest
          name: user-container
          readinessProbe:
            successThreshold: 1
            tcpSocket:
              port: 0
          resources: {}
        timeoutSeconds: 300
  status:
    conditions:
    - lastTransitionTime: "2019-12-19T23:19:27Z"
      status: Unknown
      type: Ready
    latestCreatedRevisionName: knative-service-hl52c
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Basically everything is Unknown, with no real indication of what is going wrong.
Steps to Reproduce the Problem
I am trying to get some Knative smoke tests integrated into Istio's tests so we don't break things accidentally; see PR https://github.com/istio/istio/pull/19675. It seems fairly reproducible on a fresh cluster running those steps.
I am probably doing something wrong, but none of the status messages or logs are pointing me in the right direction.
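For reference, the Service the test applies boils down to roughly the manifest below. This is a sketch reconstructed from the Revision and Configuration dumps above rather than copied from the PR, and it leaves the defaulted fields (containerConcurrency, timeoutSeconds, the probe's port) to the webhook:

$ cat <<EOF | k apply -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-service
  namespace: knative-1-8760
spec:
  template:
    spec:
      containers:
      - image: gcr.io/istio-testing/app:latest
        readinessProbe:
          tcpSocket: {}
EOF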
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 3
- Comments: 43 (18 by maintainers)
Same problem on v0.12.0, kubeadm (v1.17.0), CentOS 7.7.1908. Pod info, sample app info, and Revision describe output attached below.
Same problem on v0.11.0, k8s v1.15.4 (binary setup), CentOS 7.7.1908, but kubeadm (v1.17.0) works fine.
Same problem on v0.12.0 - totally a showstopper