kubernetes: kubectl get events doesn't sort events by last seen time.

Here is one example (see below):

LASTSEEN   FIRSTSEEN   COUNT     NAME                                KIND         SUBOBJECT                         TYPE      REASON              SOURCE                                   MESSAGE
13m        13m         1         nginx-deployment-3392909933-1tfwp   Pod                                            Normal    Scheduled           {default-scheduler }                     Successfully assigned nginx-deployment-3392909933-1tfwp to kubernetes-minion-group-kcbf
13m        13m         1         nginx-deployment-3392909933-1tfwp   Pod          spec.containers{nginx}            Normal    Pulling             {kubelet kubernetes-minion-group-kcbf}   pulling image "nginx"
13m        13m         1         nginx-deployment-3392909933-1tfwp   Pod          spec.containers{nginx}            Normal    Pulled              {kubelet kubernetes-minion-group-kcbf}   Successfully pulled image "nginx"
13m        13m         1         nginx-deployment-3392909933-1tfwp   Pod          spec.containers{nginx}            Normal    Created             {kubelet kubernetes-minion-group-kcbf}   Created container with docker id 0f5fc6ae789f
13m        13m         1         nginx-deployment-3392909933-1tfwp   Pod          spec.containers{nginx}            Normal    Started             {kubelet kubernetes-minion-group-kcbf}   Started container with docker id 0f5fc6ae789f
13m        13m         1         nginx-deployment-3392909933-3ct1t   Pod                                            Normal    Scheduled           {default-scheduler }                     Successfully assigned nginx-deployment-3392909933-3ct1t to kubernetes-minion-group-9q61
13m        13m         1         nginx-deployment-3392909933-3ct1t   Pod          spec.containers{nginx}            Normal    Pulling             {kubelet kubernetes-minion-group-9q61}   pulling image "nginx"
13m        13m         1         nginx-deployment-3392909933-3ct1t   Pod          spec.containers{nginx}            Normal    Pulled              {kubelet kubernetes-minion-group-9q61}   Successfully pulled image "nginx"
13m        13m         1         nginx-deployment-3392909933-3ct1t   Pod          spec.containers{nginx}            Normal    Created             {kubelet kubernetes-minion-group-9q61}   Created container with docker id 291683894adc
13m        13m         1         nginx-deployment-3392909933-3ct1t   Pod          spec.containers{nginx}            Normal    Started             {kubelet kubernetes-minion-group-9q61}   Started container with docker id 291683894adc
13m        13m         1         nginx-deployment-3392909933-d70zp   Pod                                            Normal    Scheduled           {default-scheduler }                     Successfully assigned nginx-deployment-3392909933-d70zp to kubernetes-minion-group-5bdr
13m        13m         1         nginx-deployment-3392909933-d70zp   Pod          spec.containers{nginx}            Normal    Pulling             {kubelet kubernetes-minion-group-5bdr}   pulling image "nginx"
13m        13m         1         nginx-deployment-3392909933-d70zp   Pod          spec.containers{nginx}            Normal    Pulled              {kubelet kubernetes-minion-group-5bdr}   Successfully pulled image "nginx"
13m        13m         1         nginx-deployment-3392909933-d70zp   Pod          spec.containers{nginx}            Normal    Created             {kubelet kubernetes-minion-group-5bdr}   Created container with docker id 823f95040680
13m        13m         1         nginx-deployment-3392909933-d70zp   Pod          spec.containers{nginx}            Normal    Started             {kubelet kubernetes-minion-group-5bdr}   Started container with docker id 823f95040680
13m        13m         1         nginx-deployment-3392909933         ReplicaSet                                     Normal    SuccessfulCreate    {replicaset-controller }                 Created pod: nginx-deployment-3392909933-1tfwp
13m        13m         1         nginx-deployment-3392909933         ReplicaSet                                     Normal    SuccessfulCreate    {replicaset-controller }                 Created pod: nginx-deployment-3392909933-3ct1t
13m        13m         1         nginx-deployment-3392909933         ReplicaSet                                     Normal    SuccessfulCreate    {replicaset-controller }                 Created pod: nginx-deployment-3392909933-d70zp
13m        13m         1         nginx-deployment                    Deployment                                     Normal    ScalingReplicaSet   {deployment-controller }                 Scaled up replica set nginx-deployment-3392909933 to 3
8m         8m          1         secret-env-pod                      Pod                                            Normal    Scheduled           {default-scheduler }                     Successfully assigned secret-env-pod to kubernetes-minion-group-5bdr
8m         8m          1         secret-env-pod                      Pod          spec.containers{test-container}   Normal    Pulling             {kubelet kubernetes-minion-group-5bdr}   pulling image "gcr.io/google_containers/busybox"
8m         8m          1         secret-env-pod                      Pod          spec.containers{test-container}   Normal    Pulled              {kubelet kubernetes-minion-group-5bdr}   Successfully pulled image "gcr.io/google_containers/busybox"
8m         8m          1         secret-env-pod                      Pod          spec.containers{test-container}   Normal    Created             {kubelet kubernetes-minion-group-5bdr}   Created container with docker id 06390f89067f
8m         8m          1         secret-env-pod                      Pod          spec.containers{test-container}   Normal    Started             {kubelet kubernetes-minion-group-5bdr}   Started container with docker id 06390f89067f
10m        10m         1         secret-test-pod                     Pod                                            Normal    Scheduled           {default-scheduler }                     Successfully assigned secret-test-pod to kubernetes-minion-group-5bdr
10m        10m         1         secret-test-pod                     Pod          spec.containers{test-container}   Normal    Pulling             {kubelet kubernetes-minion-group-5bdr}   pulling image "kubernetes/mounttest:0.1"
10m        10m         1         secret-test-pod                     Pod          spec.containers{test-container}   Normal    Pulled              {kubelet kubernetes-minion-group-5bdr}   Successfully pulled image "kubernetes/mounttest:0.1"
10m        10m         1         secret-test-pod                     Pod          spec.containers{test-container}   Normal    Created             {kubelet kubernetes-minion-group-5bdr}   Created container with docker id c865b528a5ea
10m        10m         1         secret-test-pod                     Pod          spec.containers{test-container}   Normal    Started             {kubelet kubernetes-minion-group-5bdr}   Started container with docker id c865b528a5ea

After 13m, it should be 10m and then 8m. This feature would allow us to see what's wrong easily. I have had other examples where it's all mixed up and it's not easy to see where your just-deployed pod is failing. Tested on 1.3.0.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 90
  • Comments: 62 (17 by maintainers)

Most upvoted comments

Newbie k8s user here, was wondering why running kubectl get events doesn’t sort events by time, definitely confused me a bit at first.

Looks like kubectl get events --sort-by='.metadata.creationTimestamp' does the trick though (posting this for others who may run into this issue)—unless I’m missing something, sorta seems that should be the default though 😃

This is still happening; is there any timeline for a fix?

Related: https://github.com/kubernetes/kubernetes/issues/36304#issuecomment-581864033

Sometimes (quite often) there’s an event with "lastTimestamp": null (see kubectl get event -o json).

This causes kubectl get event --sort-by=lastTimestamp to fail: sorter.go:354] Field {.lastTimestamp} in [][][]reflect.Value is an unsortable type: interface, err: unsortable interface: interface

Still an issue and still confusing to the user. I’d love to see a fix for this!

kubectl get events --sort-by='.lastTimestamp' is doing the trick for me.

Sort by lastTimestamp works on my cluster, but why isn't it the default? Isn't it the most meaningful order?

This is still driving me nuts every time I'm investigating a failing pod, so I'd like to keep this open.

Seriously?

Looking at a history with a random order of the event isn’t an issue for you @llech ?

@PaulCapestany I think you should sort by .lastTimestamp rather than .metadata.creationTimestamp in case there was a compaction (duplicate events):

kubectl get events --sort-by='{.lastTimestamp}' --field-selector involvedObject.kind=Pod,involvedObject.name=mypod

Agree that this is very confusing.

@AdoHe @bgrant0607 @lavalamp @pwittrock Here are my findings so far using printf debugging:

1: sorting of events across different resources can be achieved using the command kubectl get events --sort-by='{.lastTimestamp}'; otherwise they are all mixed up.

2: doing kubectl get events shows events in mixed order across all resources (tested with pods). This should show events in sorted order by default, since when events from different resources are shown, the user is only interested in the latest events or events in chronological order. Per-resource events are shown in kubectl describe anyway.

3: k8s.io/kubernetes/pkg/kubectl/resource_printer.go has handlers for different kinds of objects:

func (h *HumanReadablePrinter) addDefaultHandlers() {
    ....
    h.Handler(eventColumns, printEvent)
    h.Handler(eventColumns, printEventList)

but when I tried to print the stack trace, I only saw printEvent being called; printEventList is not used at all, even though I can see kubectl getting a response with an api.EventList object.

Here is a sample stack trace for kubectl get events:

goroutine 1 [running]:
runtime/debug.Stack(0x6, 0xc4201e1c00, 0x252a720)
    /usr/local/go/src/runtime/debug/stack.go:24 +0x79
runtime/debug.PrintStack()
    /usr/local/go/src/runtime/debug/stack.go:16 +0x22
k8s.io/kubernetes/pkg/kubectl.printEvent(0xc4205eb5e8, 0x24e7e80, 0xc4204d8800, 0x0, 0x0, 0x0, 0x25575d8, 0x0, 0x0, 0x0, ...)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/pkg/kubectl/resource_printer.go:1594 +0x42
reflect.Value.call(0x171f0c0, 0x1a43480, 0x13, 0x1945ebb, 0x4, 0xc4205c3658, 0x3, 0x3, 0x18aee00, 0x18aee00, ...)
    /usr/local/go/src/reflect/value.go:434 +0x5c8
reflect.Value.Call(0x171f0c0, 0x1a43480, 0x13, 0xc4205c3658, 0x3, 0x3, 0x0, 0xc4200b0f18, 0x1744a40)
    /usr/local/go/src/reflect/value.go:302 +0xa4
k8s.io/kubernetes/pkg/kubectl.(*HumanReadablePrinter).PrintObj(0xc420274a00, 0x24de8c0, 0xc4205eb5e8, 0x24e7e80, 0xc4204d8800, 0x0, 0x0)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/pkg/kubectl/resource_printer.go:2237 +0x323
k8s.io/kubernetes/pkg/kubectl/cmd.RunGet(0xc4203f8360, 0x24e7a80, 0xc420086008, 0x24e7a80, 0xc420086010, 0xc4200a98c0, 0xc42037c8e0, 0x1, 0x1, 0xc420160e40, ...)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/pkg/kubectl/cmd/get.go:416 +0x131a
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdGet.func1(0xc4200a98c0, 0xc42037c8e0, 0x1, 0x1)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/pkg/kubectl/cmd/get.go:102 +0x89
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4200a98c0, 0xc42037c820, 0x1, 0x1, 0xc4200a98c0, 0xc42037c820)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x439
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420406b40, 0xc42037c380, 0x1, 0x1)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x367
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420406b40, 0x24e7a40, 0xc420086000)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
    /Users/mayank.kumar/src/golang/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:38 +0xcb
main.main()

Here is the corresponding kubectl HTTP response with -v=10 debugging:

I0904 01:48:36.314251   86778 request.go:908] Response Body: {"kind":"EventList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/events","resourceVersion":"2593"},"items":[{"metadata":{"name":"my-nginx-2494149703-ce2vy.147111955b698a53","namespace":"default","selfLink":"/api/v1/namespaces/default/events/my-nginx-2494149703-ce2vy.147111955b698a53","uid":"f842f93b-7279-11e6-8686-42010a800002","resourceVersion":"2588","creationTimestamp":"2016-09-04T08:31:26Z"},"involvedObject":{"kind":"Pod","namespace":"default","name":"my-nginx-2494149703-ce2vy","uid":"f83d7533-7279-11e6-8686-42010a800002","apiVersion":"v1","resourceVersion":"2489739"},"reason":"Scheduled","message":"Successfully assigned my-nginx-2494149703-ce2vy to kubernetes-minion-group-5bdr","source":{"component":"default-scheduler"},"firstTimestamp":"2016-09-04T08:31:26Z","lastTimestamp":"2016-09-04T08:31:26Z","count":1,"type":"Normal"},{"metadata":{"name":"my-nginx-2494149703-ce2vy.147111959345d5d9","namespace":"default","selfLink":"/api/v1/namespaces/default/events/my-nginx-2494149703-ce2vy.147111959345d5d9","uid":"f8d19330-7279-11e6-8686-42010a800002","resourceVersion":"2590","creationTimestamp":"2016-09-04T08:31:27Z"},"involvedObject":{"kind":"Pod","namespace":"default","name":"my-nginx-2494149703-ce2vy","uid":"f83d7533-7279-11e6-8686-42010a800002","apiVersion":"v1","resourceVersion":"2489743","fieldPath":"spec.containers{my-nginx}"},"reason":"Pulli

4: I saw similar behavior with kubectl get pods: although there is a handler called printPodList, it is never called.

5: the printEventList handler is never called, and that is where the sorting was added as part of this PR: https://github.com/kubernetes/kubernetes/issues/2948. I think that PR only solved it for kubectl describe pods, but did not solve it for kubectl get events, or I am still missing something. (A sketch of the SortableEvents type this function relies on follows these findings.)

// Sorts and prints the EventList in a human-friendly format.
func printEventList(list *api.EventList, w io.Writer, options PrintOptions) error {
    sort.Sort(SortableEvents(list.Items))
    for i := range list.Items {
        if err := printEvent(&list.Items[i], w, options); err != nil {
            return err
        }
    }
    return nil
}

6: Someone more familiar with the framework can confirm how these printXXXList functions are supposed to be used. But in the kubectl get events path, printEventList is definitely not being called.

7: k8s.io/kubernetes/pkg/kubectl/cmd/get.go has logic which checks whether the object is a ListType, but only for watches. In the regular flow it breaks the response down into individual objects and then calls PrintObj on each object (rather than calling it on the outer EventList object), which ensures that the handlers for api.EventList will never be called even though the response was an api.EventList.
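
For context, here is a minimal sketch of what the SortableEvents type used by printEventList amounts to: a sort.Interface over an api.Event slice keyed on LastTimestamp. This is a paraphrase for illustration, not the exact upstream source.

// SortableEvents wraps a slice of events so sort.Sort can order them
// chronologically, oldest first.
type SortableEvents []api.Event

func (list SortableEvents) Len() int      { return len(list) }
func (list SortableEvents) Swap(i, j int) { list[i], list[j] = list[j], list[i] }

// Less compares the underlying time.Time values of the LastTimestamp fields.
func (list SortableEvents) Less(i, j int) bool {
    return list[i].LastTimestamp.Time.Before(list[j].LastTimestamp.Time)
}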

The new command kubectl alpha events is now out in Kubernetes 1.23 - give it a try! If you have any feedback or thoughts on how it could evolve, please raise a new issue and tag me.

Hey all! Just so you know, the aforementioned command kubectl get events --sort-by='.lastTimestamp' really works, but only without the --field-selector involvedObject.name= selector.

Once the involvedObject is added to the command, the sorting goes crazy again =(.

There’s still some weird behavior that happens with this.

It will work initially, but every so often it will print an event that occurred days ago.

Upon further research, this works:

$ kubectl get event --sort-by .lastTimestamp

However, this is not sorted, and old events are randomly printed as if they just happened:

$ kubectl get event --sort-by .lastTimestamp -w

@zedtux

Seriously?

Looking at a history with a random order of the event isn’t an issue for you @llech ?

I think you misread @llech’s comment - they are arguing in favor of events being time ordered.

Yes, imo this is a bugfix and not a feature request.

One that has been open for multiple years 😦

--sort-by='{.lastTimestamp}' doesn’t seem to sort correctly:

$ kubectl get events --field-selector type=Warning --namespace=logging --sort-by='{.lastTimestamp}'
58m         Warning   BackOff                           Pod                   Back-off restarting failed container
58m         Warning   BackOff                           Pod                   Back-off restarting failed container
55m         Warning   FailedScheduling                  Pod                   0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
55m         Warning   CalculateExpectedPodCountFailed   PodDisruptionBudget   Failed to calculate the number of expected pods: found no controllers for pod "elasticsearch-master-0"
55m         Warning   NoControllers                     PodDisruptionBudget   found no controllers for pod "elasticsearch-master-0"
53m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
53m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
55m         Warning   FailedScheduling                  Pod                   skip schedule deleting pod: logging/elasticsearch-master-1
53m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
52m         Warning   FailedMount                       Pod                   MountVolume.SetUp failed for volume "pvc-b366c968-290a-11e9-9714-6e4e5abdddf2" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume replicapool/pvc-b366c968-290a-11e9-9714-6e4e5abdddf2: failed to map image replicapool/pvc-b366c968-290a-11e9-9714-6e4e5abdddf2 cluster rook-ceph. failed to map image replicapool/pvc-b366c968-290a-11e9-9714-6e4e5abdddf2: Failed to complete 'rbd': signal: interrupt. . output:
20m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
42m         Warning   BackOff                           Pod                   Back-off restarting failed container
37m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
22m         Warning   BackOff                           Pod                   Back-off restarting failed container
38m         Warning   FailedScheduling                  Pod                   skip schedule deleting pod: logging/elasticsearch-master-2
37m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
21m         Warning   FailedScheduling                  Pod                   0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
4m57s       Warning   BackOff                           Pod                   Back-off restarting failed container
22m         Warning   BackOff                           Pod                   Back-off restarting failed container
38m         Warning   FailedScheduling                  Pod                   0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
37m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
38m         Warning   NoControllers                     PodDisruptionBudget   found no controllers for pod "elasticsearch-master-2"
43m         Warning   BackOff                           Pod                   Back-off restarting failed container
38m         Warning   CalculateExpectedPodCountFailed   PodDisruptionBudget   Failed to calculate the number of expected pods: found no controllers for pod "elasticsearch-master-2"
38m         Warning   NoControllers                     PodDisruptionBudget   found no controllers for pod "elasticsearch-master-1"
38m         Warning   CalculateExpectedPodCountFailed   PodDisruptionBudget   Failed to calculate the number of expected pods: found no controllers for pod "elasticsearch-master-0"
38m         Warning   CalculateExpectedPodCountFailed   PodDisruptionBudget   Failed to calculate the number of expected pods: found no controllers for pod "elasticsearch-master-1"
21m         Warning   NoControllers                     PodDisruptionBudget   found no controllers for pod "elasticsearch-master-0"
38m         Warning   NoControllers                     PodDisruptionBudget   found no controllers for pod "elasticsearch-master-0"
20m         Warning   FailedScheduling                  Pod                   0/3 nodes are available: 1 Insufficient cpu, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
21m         Warning   FailedScheduling                  Pod                   skip schedule deleting pod: logging/elasticsearch-master-1
20m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
1s          Warning   BackOff                           Pod                   Back-off restarting failed container
20m         Warning   FailedScheduling                  Pod                   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
21m         Warning   CalculateExpectedPodCountFailed   PodDisruptionBudget   Failed to calculate the number of expected pods: found no controllers for pod "elasticsearch-master-0"

You need to first do a kubectl get events -o=json to see what fields you can use for it. In some cases lastTimestamp, for example, is included in metadata and is empty or duplicated (I think Helm includes it in some “managedFields”). You need to find the proper time field that can be used for sorting and use that one. As an example, in some projects I use get event --sort-by='.lastTimestamp' and in others I use get event --sort-by='.metadata.creationTimestamp'.

While waiting for the new oc events command, I came up with the following:

function oc-events {
	{
		echo $'TIME\tNAMESPACE\tTYPE\tREASON\tOBJECT\tSOURCE\tMESSAGE';
		oc get events -o json "$@" \
			| jq -r  '.items | map(. + {t: (.eventTime//.lastTimestamp)}) | sort_by(.t)[] | [.t, .metadata.namespace, .type, .reason, .involvedObject.kind + "/" + .involvedObject.name, .source.component + "," + (.source.host//"-"), .message] | @tsv';
	} \
		| column -s $'\t' -t \
		| less -S
}

You can use it like: oc-events -A, oc-events -n foo, etc.

As of Kubernetes 1.18 all new objects have metadata for server-side apply, which gives us a new way to sort events:

kubectl get events --sort-by=".metadata.managedFields[0].time"

Kudos to @stefanprodan who told me about this.

Sometimes (quite often) there’s an event with “lastTimestamp”: null

I’ve seen this with events from kube-scheduler - they have eventTime set but not firstTimestamp or lastTimestamp.

And there doesn’t seem to be a way to tell jsonpath to take one field or fall back to another.

kubectl get event --sort-by .lastTimestamp
F0911 11:43:49.391713   15640 sorter.go:354] Field {.lastTimestamp} in [][][]reflect.Value is an unsortable type: interface, err: unsortable type: <nil>
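
Since jsonpath cannot express a fallback from one field to another, one workaround is to sort client-side. Here is a minimal, hypothetical sketch using client-go that mirrors the eventTime // lastTimestamp fallback from the jq function above (the kubeconfig path, the "default" namespace, and the bestTime helper are illustrative assumptions, not from this thread):

package main

import (
    "context"
    "fmt"
    "sort"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// bestTime picks eventTime when set, then lastTimestamp, then
// metadata.creationTimestamp, so events with a null lastTimestamp
// no longer break the sort.
func bestTime(e corev1.Event) time.Time {
    if !e.EventTime.IsZero() {
        return e.EventTime.Time
    }
    if !e.LastTimestamp.IsZero() {
        return e.LastTimestamp.Time
    }
    return e.ObjectMeta.CreationTimestamp.Time
}

func main() {
    // Assumes a kubeconfig at the default location (~/.kube/config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    events, err := client.CoreV1().Events("default").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }

    // Sort oldest first on the best available timestamp.
    sort.Slice(events.Items, func(i, j int) bool {
        return bestTime(events.Items[i]).Before(bestTime(events.Items[j]))
    })
    for _, e := range events.Items {
        fmt.Printf("%s\t%s\t%s\t%s\n",
            bestTime(e).Format(time.RFC3339), e.Type, e.Reason, e.Message)
    }
}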

The kubectl get event --sort-by .lastTimestamp fails for me but this command works:

kubectl get events --sort-by='.metadata.creationTimestamp'

Does not work for me on GKE with version v1.18.15-gke.1500 😦

 kubectl get events --sort-by=".metadata.managedFields[0].time"
LAST SEEN   TYPE      REASON                                                                                               OBJECT                                                       MESSAGE
4m          Normal    UpToDate                                                                                             configconnector/configconnector.core.cnrm.cloud.google.com   ConfigConnector is up to date
37m         Normal    NodeNotReady                                                                                         node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 status is now: NodeNotReady
37m         Normal    Deleting node gke-playground-default-c33b0f13-w0z8 because it does not exist in the cloud provider   node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 event: DeletingNode
37m         Normal    RemovingNode                                                                                         node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 event: Removing Node gke-playground-default-c33b0f13-w0z8 from Controller
36m         Normal    Starting                                                                                             node/gke-playground-default-c33b0f13-w0z8                    Starting kubelet.
36m         Normal    NodeHasSufficientMemory                                                                              node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 status is now: NodeHasSufficientMemory
36m         Normal    NodeHasSufficientPID                                                                                 node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 status is now: NodeHasSufficientPID
36m         Normal    NodeHasNoDiskPressure                                                                                node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 status is now: NodeHasNoDiskPressure
36m         Normal    NodeAllocatableEnforced                                                                              node/gke-playground-default-c33b0f13-w0z8                    Updated Node Allocatable limit across pods
36m         Normal    RegisteredNode                                                                                       node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 event: Registered Node gke-playground-default-c33b0f13-w0z8 in Controller
36m         Normal    Starting                                                                                             node/gke-playground-default-c33b0f13-w0z8                    Starting kube-proxy.
36m         Warning   ContainerdStart                                                                                      node/gke-playground-default-c33b0f13-w0z8                    Starting containerd container runtime...
36m         Warning   DockerStart                                                                                          node/gke-playground-default-c33b0f13-w0z8                    Starting Docker Application Container Engine...
36m         Warning   KubeletStart                                                                                         node/gke-playground-default-c33b0f13-w0z8                    Started Kubernetes kubelet.
36m         Warning   NodeSysctlChange                                                                                     node/gke-playground-default-c33b0f13-w0z8                    
36m         Normal    NodeReady                                                                                            node/gke-playground-default-c33b0f13-w0z8                    Node gke-playground-default-c33b0f13-w0z8 status is now: NodeReady
13m         Warning   NodeSysctlChange                                                                                     node/gke-playground-default-c33b0f13-j3s6                    
3m47s       Warning   NodeSysctlChange                                                                                     node/gke-playground-default-c33b0f13-sq03                    
40s         Normal    NodeNotReady                                                                                         node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 status is now: NodeNotReady
2m13s       Warning   KubeletStart                                                                                         node/gke-playground-default-c33b0f13-sq03                    Started Kubernetes kubelet.
2m13s       Warning   NodeSysctlChange                                                                                     node/gke-playground-default-c33b0f13-sq03                    
2m13s       Warning   ContainerdStart                                                                                      node/gke-playground-default-c33b0f13-sq03                    Starting containerd container runtime...
2m13s       Warning   DockerStart                                                                                          node/gke-playground-default-c33b0f13-sq03                    Starting Docker Application Container Engine...
2m12s       Normal    NodeAllocatableEnforced                                                                              node/gke-playground-default-c33b0f13-sq03                    Updated Node Allocatable limit across pods
2m12s       Normal    Starting                                                                                             node/gke-playground-default-c33b0f13-sq03                    Starting kubelet.
2m12s       Normal    NodeHasSufficientMemory                                                                              node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 status is now: NodeHasSufficientMemory
2m12s       Normal    NodeHasNoDiskPressure                                                                                node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 status is now: NodeHasNoDiskPressure
2m12s       Normal    NodeHasSufficientPID                                                                                 node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 status is now: NodeHasSufficientPID
2m12s       Warning   Rebooted                                                                                             node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 has been rebooted, boot id: 695d4e5c-1639-4a0b-8d08-7d72e8b955f0
2m12s       Normal    NodeNotReady                                                                                         node/gke-playground-default-c33b0f13-sq03                    Node gke-playground-default-c33b0f13-sq03 status is now: NodeNotReady
2m9s        Normal    Starting                                                                                             node/gke-playground-default-c33b0f13-sq03                    Starting kube-proxy.

/lifecycle frozen

@krmayankk This issue is now assigned to you.