kubernetes: The apiserver keeps crashing and cannot be reached

What happened: The apiserver keeps restarting, and I can’t reach it with any kubectl command. When I check the kubelet’s status, journalctl -b -u kubelet shows:

Nov 11 20:24:43 frodez-virtual-machine kubelet[945]: E1111 20:24:43.894880     945 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://192.168.75.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: W1111 20:24:45.278487     945 status_manager.go:550] Failed to get status for pod "kube-apiserver-frodez-virtual-machine_kube-system(70469b823b8145dec932497bc2353a4d)": Get "https://192.168.75.137:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-frodez-virtual-machine": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: W1111 20:24:45.278876     945 status_manager.go:550] Failed to get status for pod "kube-controller-manager-frodez-virtual-machine_kube-system(929bd0d134ed517e06910955791c4170)": Get "https://192.168.75.137:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-frodez-virtual-machine": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.985820     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?resourceVersion=0&timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.986583     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.987152     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.987505     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.987707     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:45 frodez-virtual-machine kubelet[945]: E1111 20:24:45.987728     945 kubelet_node_status.go:429] Unable to update node status: update node status exceeds retry count
Nov 11 20:24:50 frodez-virtual-machine kubelet[945]: E1111 20:24:50.895985     945 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://192.168.75.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:51 frodez-virtual-machine kubelet[945]: I1111 20:24:51.276942     945 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3e497699f9958d8e302104325fd268dc9d9e8eba204b72460975f777e6ba90b6
Nov 11 20:24:51 frodez-virtual-machine kubelet[945]: E1111 20:24:51.278011     945 pod_workers.go:191] Error syncing pod 70469b823b8145dec932497bc2353a4d ("kube-apiserver-frodez-virtual-machine_kube-system(70469b823b8145dec932497bc2353a4d)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-frodez-virtual-machine_kube-system(70469b823b8145dec932497bc2353a4d)"
Nov 11 20:24:53 frodez-virtual-machine kubelet[945]: E1111 20:24:53.228685     945 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.75.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dfrodez-virtual-machine&resourceVersion=629": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: W1111 20:24:55.278868     945 status_manager.go:550] Failed to get status for pod "kube-apiserver-frodez-virtual-machine_kube-system(70469b823b8145dec932497bc2353a4d)": Get "https://192.168.75.137:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-frodez-virtual-machine": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: W1111 20:24:55.279379     945 status_manager.go:550] Failed to get status for pod "kube-controller-manager-frodez-virtual-machine_kube-system(929bd0d134ed517e06910955791c4170)": Get "https://192.168.75.137:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-frodez-virtual-machine": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.989181     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?resourceVersion=0&timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.989802     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.990120     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.990310     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.990387     945 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "frodez-virtual-machine": Get "https://192.168.75.137:6443/api/v1/nodes/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:24:55 frodez-virtual-machine kubelet[945]: E1111 20:24:55.990393     945 kubelet_node_status.go:429] Unable to update node status: update node status exceeds retry count
Nov 11 20:24:57 frodez-virtual-machine kubelet[945]: E1111 20:24:57.897147     945 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://192.168.75.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/frodez-virtual-machine?timeout=10s": dial tcp 192.168.75.137:6443: connect: connection refused
Nov 11 20:25:01 frodez-virtual-machine kubelet[945]: E1111 20:25:01.028023     945 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.75.137:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=626": dial tcp 192.168.75.137:6443: connect: connection refused

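For reference, the container status and crash log below can be collected with standard Docker commands along these lines (a sketch added for context; the k8s_kube-apiserver name filter matches the kubelet's container naming visible in the JSON below, and <container-id> is node-specific):

docker ps -a --filter "name=k8s_kube-apiserver"   # locate the exited apiserver container
docker inspect <container-id>                     # produces the status JSON below
docker logs <container-id>                        # produces the crash log further down
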
And then I checked the apiserver container’s status and log. Status:

[
    {
        "Id": "03fb1a8316983f3b465122cf564f74c65eaf511cb1f7410409cbf26e9c16aa9d",
        "Created": "2020-11-11T12:09:21.292711661Z",
        "Path": "kube-apiserver",
        "Args": [
            "--advertise-address=192.168.75.137",
            "--allow-privileged=true",
            "--authorization-mode=Node,RBAC",
            "--client-ca-file=/etc/kubernetes/pki/ca.crt",
            "--enable-admission-plugins=NodeRestriction",
            "--enable-bootstrap-token-auth=true",
            "--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt",
            "--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt",
            "--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key",
            "--etcd-servers=https://127.0.0.1:2379",
            "--insecure-port=0",
            "--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt",
            "--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key",
            "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
            "--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt",
            "--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key",
            "--requestheader-allowed-names=front-proxy-client",
            "--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt",
            "--requestheader-extra-headers-prefix=X-Remote-Extra-",
            "--requestheader-group-headers=X-Remote-Group",
            "--requestheader-username-headers=X-Remote-User",
            "--secure-port=6443",
            "--service-account-key-file=/etc/kubernetes/pki/sa.pub",
            "--service-cluster-ip-range=10.96.0.0/12",
            "--tls-cert-file=/etc/kubernetes/pki/apiserver.crt",
            "--tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 255,
            "Error": "",
            "StartedAt": "2020-11-11T12:09:21.401649027Z",
            "FinishedAt": "2020-11-11T12:09:43.756951315Z"
        },
        "Image": "sha256:a301be0cd44bb11162da49b9c55fc5d137f493bdefcf80226378204be403fa41",
        "ResolvConfPath": "/var/lib/docker/containers/5afa6e9d43885ef0d2c920fb9d247aff12c0b4ab4bae4b6f8e825c2ec36e8c9e/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/5afa6e9d43885ef0d2c920fb9d247aff12c0b4ab4bae4b6f8e825c2ec36e8c9e/hostname",
        "HostsPath": "/var/lib/kubelet/pods/70469b823b8145dec932497bc2353a4d/etc-hosts",
        "LogPath": "/var/lib/docker/containers/03fb1a8316983f3b465122cf564f74c65eaf511cb1f7410409cbf26e9c16aa9d/03fb1a8316983f3b465122cf564f74c65eaf511cb1f7410409cbf26e9c16aa9d-json.log",
        "Name": "/k8s_kube-apiserver_kube-apiserver-frodez-virtual-machine_kube-system_70469b823b8145dec932497bc2353a4d_14",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/etc/ssl/certs:/etc/ssl/certs:ro",
                "/etc/ca-certificates:/etc/ca-certificates:ro",
                "/etc/pki:/etc/pki:ro",
                "/etc/kubernetes/pki:/etc/kubernetes/pki:ro",
                "/usr/local/share/ca-certificates:/usr/local/share/ca-certificates:ro",
                "/usr/share/ca-certificates:/usr/share/ca-certificates:ro",
                "/var/lib/kubelet/pods/70469b823b8145dec932497bc2353a4d/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/70469b823b8145dec932497bc2353a4d/containers/kube-apiserver/1f23d1b8:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "container:5afa6e9d43885ef0d2c920fb9d247aff12c0b4ab4bae4b6f8e825c2ec36e8c9e",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "container:5afa6e9d43885ef0d2c920fb9d247aff12c0b4ab4bae4b6f8e825c2ec36e8c9e",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": -998,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "UTSMode": "host",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 256,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "/kubepods/burstable/pod70469b823b8145dec932497bc2353a4d",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/asound",
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/dc476b8fcff11c2a445efb5ee301d8c2ab1ae885ae2c1785569f15c797657a95-init/diff:/var/lib/docker/overlay2/ac1ab8a98e5352f5f31dc2aacd6a8235bfe8fbd1ed09f7ce1b9c90bcbc77119c/diff:/var/lib/docker/overlay2/e65c854bdc5fbb2b71775069e3bf1d34b1f9798a30e83890e555aa05a823e4f2/diff:/var/lib/docker/overlay2/4e3115f096c2ab425a6e30c7ebb81855f1780931d456f9355ef652fd4ef8d745/diff",
                "MergedDir": "/var/lib/docker/overlay2/dc476b8fcff11c2a445efb5ee301d8c2ab1ae885ae2c1785569f15c797657a95/merged",
                "UpperDir": "/var/lib/docker/overlay2/dc476b8fcff11c2a445efb5ee301d8c2ab1ae885ae2c1785569f15c797657a95/diff",
                "WorkDir": "/var/lib/docker/overlay2/dc476b8fcff11c2a445efb5ee301d8c2ab1ae885ae2c1785569f15c797657a95/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/usr/local/share/ca-certificates",
                "Destination": "/usr/local/share/ca-certificates",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/usr/share/ca-certificates",
                "Destination": "/usr/share/ca-certificates",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/70469b823b8145dec932497bc2353a4d/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/70469b823b8145dec932497bc2353a4d/containers/kube-apiserver/1f23d1b8",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/ssl/certs",
                "Destination": "/etc/ssl/certs",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/ca-certificates",
                "Destination": "/etc/ca-certificates",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/pki",
                "Destination": "/etc/pki",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/kubernetes/pki",
                "Destination": "/etc/kubernetes/pki",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "frodez-virtual-machine",
            "Domainname": "",
            "User": "0",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "KUBE_DNS_PORT_53_UDP=udp://10.96.0.10:53",
                "KUBE_DNS_PORT_53_UDP_PROTO=udp",
                "KUBERNETES_PORT=tcp://10.96.0.1:443",
                "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
                "KUBE_DNS_SERVICE_PORT=53",
                "KUBE_DNS_PORT_9153_TCP_ADDR=10.96.0.10",
                "KUBERNETES_SERVICE_HOST=10.96.0.1",
                "KUBERNETES_SERVICE_PORT=443",
                "KUBE_DNS_SERVICE_PORT_METRICS=9153",
                "KUBE_DNS_PORT_53_TCP=tcp://10.96.0.10:53",
                "KUBE_DNS_PORT_53_TCP_ADDR=10.96.0.10",
                "KUBE_DNS_PORT_9153_TCP_PORT=9153",
                "KUBE_DNS_PORT_9153_TCP=tcp://10.96.0.10:9153",
                "KUBE_DNS_SERVICE_HOST=10.96.0.10",
                "KUBE_DNS_SERVICE_PORT_DNS=53",
                "KUBE_DNS_SERVICE_PORT_DNS_TCP=53",
                "KUBE_DNS_PORT_53_UDP_PORT=53",
                "KUBE_DNS_PORT_53_UDP_ADDR=10.96.0.10",
                "KUBE_DNS_PORT_53_TCP_PROTO=tcp",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBE_DNS_PORT=udp://10.96.0.10:53",
                "KUBE_DNS_PORT_53_TCP_PORT=53",
                "KUBE_DNS_PORT_9153_TCP_PROTO=tcp",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt"
            ],
            "Cmd": null,
            "Healthcheck": {
                "Test": [
                    "NONE"
                ]
            },
            "Image": "sha256:a301be0cd44bb11162da49b9c55fc5d137f493bdefcf80226378204be403fa41",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": [
                "kube-apiserver",
                "--advertise-address=192.168.75.137",
                "--allow-privileged=true",
                "--authorization-mode=Node,RBAC",
                "--client-ca-file=/etc/kubernetes/pki/ca.crt",
                "--enable-admission-plugins=NodeRestriction",
                "--enable-bootstrap-token-auth=true",
                "--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt",
                "--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt",
                "--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key",
                "--etcd-servers=https://127.0.0.1:2379",
                "--insecure-port=0",
                "--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt",
                "--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key",
                "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
                "--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt",
                "--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key",
                "--requestheader-allowed-names=front-proxy-client",
                "--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt",
                "--requestheader-extra-headers-prefix=X-Remote-Extra-",
                "--requestheader-group-headers=X-Remote-Group",
                "--requestheader-username-headers=X-Remote-User",
                "--secure-port=6443",
                "--service-account-key-file=/etc/kubernetes/pki/sa.pub",
                "--service-cluster-ip-range=10.96.0.0/12",
                "--tls-cert-file=/etc/kubernetes/pki/apiserver.crt",
                "--tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
            ],
            "OnBuild": null,
            "Labels": {
                "annotation.io.kubernetes.container.hash": "ffd23a59",
                "annotation.io.kubernetes.container.restartCount": "14",
                "annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "annotation.io.kubernetes.container.terminationMessagePolicy": "File",
                "annotation.io.kubernetes.pod.terminationGracePeriod": "30",
                "description": "go based runner for distroless scenarios",
                "io.kubernetes.container.logpath": "/var/log/pods/kube-system_kube-apiserver-frodez-virtual-machine_70469b823b8145dec932497bc2353a4d/kube-apiserver/14.log",
                "io.kubernetes.container.name": "kube-apiserver",
                "io.kubernetes.docker.type": "container",
                "io.kubernetes.pod.name": "kube-apiserver-frodez-virtual-machine",
                "io.kubernetes.pod.namespace": "kube-system",
                "io.kubernetes.pod.uid": "70469b823b8145dec932497bc2353a4d",
                "io.kubernetes.sandbox.id": "5afa6e9d43885ef0d2c920fb9d247aff12c0b4ab4bae4b6f8e825c2ec36e8c9e",
                "maintainers": "Kubernetes Authors"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }
    }
]

Log:

Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I1111 11:58:29.468396       1 server.go:625] external host was not specified, using 192.168.75.137
I1111 11:58:29.468987       1 server.go:163] Version: v1.19.3
I1111 11:58:29.813485       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1111 11:58:29.813514       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1111 11:58:29.814262       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1111 11:58:29.814284       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1111 11:58:29.816705       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.816758       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.824555       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.824593       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.830451       1 client.go:360] parsed scheme: "passthrough"
I1111 11:58:29.830542       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1111 11:58:29.830555       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1111 11:58:29.831208       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.831246       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.860900       1 master.go:271] Using reconciler: lease
I1111 11:58:29.861424       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.861459       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.868151       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.868241       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.875140       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.875179       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.881603       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.881693       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.890884       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.890918       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.898171       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.898334       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.906137       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.906227       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.911771       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.911791       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.917589       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.917622       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.922842       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.922904       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.929008       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.929108       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.935067       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.935102       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.941819       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.941854       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.947702       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.947742       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.954832       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.954897       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.962471       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.962534       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.968520       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.968568       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:29.974477       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:29.974525       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.052627       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.052674       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.060055       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.060106       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.066470       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.066516       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.072604       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.072641       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.081485       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.081535       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.088575       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.088649       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.094818       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.094866       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.101876       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.101914       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.109174       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.109232       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.114698       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.114745       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.121363       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.121408       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.127095       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.127137       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.133178       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.133211       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.138965       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.139003       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.144299       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.144344       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.149776       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.149925       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.155983       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.156021       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.161583       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.161619       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.168646       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.168678       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.174578       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.174613       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.179685       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.179721       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.186424       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.186460       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.192259       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.192339       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.198080       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.198114       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.203828       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.203891       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.209330       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.209349       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.215392       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.215427       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.221689       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.221728       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.227440       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.227479       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.232873       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.232936       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.239760       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.239800       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.245941       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.245980       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.251964       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.252004       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.259865       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.259902       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.265200       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.265265       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.285420       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.285465       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.292529       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.292567       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.299168       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.299188       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.304238       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.304271       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.310930       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.310971       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.317102       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.317510       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.323637       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.323673       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.329120       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.329225       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.336151       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.336212       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.342734       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.342797       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.348538       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.348589       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.354374       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.354414       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.360151       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.360219       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W1111 11:58:30.466502       1 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
W1111 11:58:30.476973       1 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W1111 11:58:30.489721       1 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W1111 11:58:30.503579       1 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1111 11:58:30.506562       1 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1111 11:58:30.519457       1 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1111 11:58:30.535266       1 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W1111 11:58:30.535367       1 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I1111 11:58:30.545416       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1111 11:58:30.545554       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1111 11:58:30.548293       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.548393       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.555298       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.555440       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:30.813520       1 client.go:360] parsed scheme: "endpoint"
I1111 11:58:30.813598       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1111 11:58:32.351433       1 secure_serving.go:197] Serving securely on [::]:6443
I1111 11:58:32.351486       1 controller.go:83] Starting OpenAPI AggregationController
I1111 11:58:32.351513       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1111 11:58:32.351531       1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key
I1111 11:58:32.351547       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1111 11:58:32.352184       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1111 11:58:32.352207       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1111 11:58:32.352508       1 autoregister_controller.go:141] Starting autoregister controller
I1111 11:58:32.352597       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1111 11:58:32.352749       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1111 11:58:32.352857       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1111 11:58:32.352874       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1111 11:58:32.352894       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key
I1111 11:58:32.352966       1 available_controller.go:404] Starting AvailableConditionController
I1111 11:58:32.353014       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1111 11:58:32.353025       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1111 11:58:32.353116       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1111 11:58:32.353143       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1111 11:58:32.354795       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1111 11:58:32.354879       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1111 11:58:32.354907       1 controller.go:86] Starting OpenAPI controller
I1111 11:58:32.354923       1 naming_controller.go:291] Starting NamingConditionController
I1111 11:58:32.354938       1 establishing_controller.go:76] Starting EstablishingController
I1111 11:58:32.354958       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1111 11:58:32.354973       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1111 11:58:32.354996       1 crd_finalizer.go:266] Starting CRDFinalizer
E1111 11:58:32.366847       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.75.137, ResourceVersion: 0, AdditionalErrorMsg: 
I1111 11:58:32.452607       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1111 11:58:32.452880       1 cache.go:39] Caches are synced for autoregister controller
I1111 11:58:32.452987       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1111 11:58:32.453072       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1111 11:58:32.455585       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1111 11:58:33.351374       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1111 11:58:33.351416       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1111 11:58:33.358983       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1111 11:58:36.050663       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1111 11:58:50.318300       1 controller.go:606] quota admission added evaluator for: endpoints
F1111 11:58:50.335141       1 sample_and_watermark.go:182] Time went backwards from 2020-11-11T11:58:50.335120951Z to 2020-11-11T11:58:50.33512004Z for labelValues=[]string{"executing", "readOnly"}
goroutine 4180 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000132001, 0xc0014603c0, 0xc6, 0x133)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x71fc8a0, 0xc000000003, 0x0, 0x0, 0xc00835b1f0, 0x70e1612, 0x17, 0xb6, 0x0)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printf(0x71fc8a0, 0x3, 0x0, 0x0, 0x491b4f8, 0x36, 0xc0025ab588, 0x3, 0x3)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x17a
k8s.io/kubernetes/vendor/k8s.io/klog/v2.Fatalf(...)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1456
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics.(*sampleAndWaterMarkHistograms).innerSet(0xc0004366e0, 0xc0025ab618)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/sample_and_watermark.go:182 +0x62b
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics.(*sampleAndWaterMarkHistograms).Set(0xc0004366e0, 0x4018000000000000)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/sample_and_watermark.go:147 +0x6d
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*requestWatermark).recordReadOnly(0x71d15a0, 0x6)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:74 +0x58
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func2(0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8600)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:165 +0x467
net/http.HandlerFunc.ServeHTTP(0xc008387950, 0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8600)
	/usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8600)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x2306
net/http.HandlerFunc.ServeHTTP(0xc006871600, 0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8600)
	/usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8500)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x672
net/http.HandlerFunc.ServeHTTP(0xc00838a5a0, 0x7effe0af8a20, 0xc00b593a68, 0xc00f0c8500)
	/usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e5c1e60, 0xc008373ac0, 0x50a5540, 0xc00b593a68, 0xc00f0c8500)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:113 +0xb8
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:99 +0x1cc

goroutine 1 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.preparedGenericAPIServer.Run(0xc0083370e0, 0xc0000c1140, 0x503fc20, 0xc0083370e0)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/genericapiserver.go:338 +0xc5
k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.preparedAPIAggregator.Run(...)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver/apiserver.go:302
k8s.io/kubernetes/cmd/kube-apiserver/app.Run(0xc000682580, 0xc0000c1140, 0x0, 0x0)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:175 +0x18a
k8s.io/kubernetes/cmd/kube-apiserver/app.NewAPIServerCommand.func2(0xc000682840, 0xc0001abba0, 0x0, 0x1a, 0x0, 0x0)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:124 +0x105
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000682840, 0xc000138010, 0x1a, 0x1b, 0xc000682840, 0xc000138010)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842 +0x47c
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000682840, 0x16467239ba3a2c7d, 0x71fc240, 0x406505)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/anago-v1.19.3-rc.0.69+37babbd0e76c11/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/apiserver.go:44 +0xe5

apiserver.log

In the log I see this fatal line:

F1111 11:58:50.335141       1 sample_and_watermark.go:182] Time went backwards from 2020-11-11T11:58:50.335120951Z to 2020-11-11T11:58:50.33512004Z for labelValues=[]string{"executing", "readOnly"}

The corresponding source code is here: https://github.com/kubernetes/kubernetes/blob/1e11e4a2108024935ecfcb2912226cedeafd99df/staging/src/k8s.io/apiserver/pkg/util/flowcontrol/metrics/sample_and_watermark.go#L182
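
To check whether the VM's realtime clock really steps backwards (the condition this Fatalf guards against), here is a minimal shell sketch (mine, not from the apiserver code; awk compares the timestamps as doubles, so it only resolves steps down to roughly a microsecond):

timedatectl status   # first, see whether NTP/systemd-timesyncd is active
for i in $(seq 1 2000); do date +%s.%N; sleep 0.05; done | awk '
  prev != "" && ($1 + 0) < (prev + 0) { printf "time went backwards: %s -> %s\n", prev, $1 }
  { prev = $1 }'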

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.19.3. I can’t run kubectl version against the cluster right now because of this problem, so I can’t paste the full output (the client-only version still works; see the note after this list).
  • Cloud provider or hardware configuration: Ryzen 5 4500U, 16 GB memory, 30 GB SSD, in VMware Workstation 15.5.6
  • OS (e.g: cat /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel (e.g. uname -a):
Linux frodez-virtual-machine 5.4.0-53-generic #59-Ubuntu SMP Wed Oct 21 09:38:44 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:
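
Note on the version line above: the client half of kubectl version does not need a reachable apiserver, so it should still print even while the cluster is down:

kubectl version --client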

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 24 (15 by maintainers)

Most upvoted comments

Hm … this must have something to do with the fact that we’re running in a VM. Funny enough … you are reporting the apiserver crashing; what led me to this issue is that my metrics-server crashes with this same log message instead 😃.

I originally had ntpd installed on the VM, but removing it did not help. So … if time goes backwards, it must be something that VirtualBox does. I’ll try to see if I can repro with Hyper-V.

EDIT: I was able to get rid of this error with VirtualBox by following https://tothecore.sk/2021/04/08/disable-time-sync-for-virtualbox-virtual-machines/ to disable the automated clock sync that VirtualBox does by default. Don’t forget to use NTPd, as you will still need some time synchronization 😃.

VBoxManage setextradata "VM name" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" 1
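
To confirm the key took effect (a verification sketch added here, not part of the original comment):

VBoxManage getextradata "VM name" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled"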

EDIT 2: Spoke too soon. That did not really help. It seems to have slightly delayed the issue, though, as it only presented itself after a couple of minutes…

EDIT 3: So, the reason it’s still syncing time is VBoxGuestAdditions. When they are running, the guest syncs time with the host, even with the above time-sync setting disabled. One either has to stop the guest additions or pass them --disable-timesync (but … it’s not really a configuration change: https://www.virtualbox.org/ticket/16585).

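# The sed below patches the Guest Additions init script so VBoxService is started with --disable-timesync: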
sudo sed -i 's/$1 $2 $3/$1 --disable-timesync $2 $3/; s/$1 -- $2 $3/$1 -- --disable-timesync $2 $3/' /opt/VBoxGuestAdditions-*/init/vboxadd-service
sudo systemctl daemon-reload
sudo systemctl restart vboxadd-service

EDIT 4: The above doesn’t seem to have an effect at all. Still, according to the metrics server, time goes backwards. Well … let’s try to completely disable the VirtualBox guest additions: sudo /opt/VBoxGuestAdditions-*/uninstall.sh (for my use case, this is totally fine, as I don’t need any of the guest additions features such as shared folders or time sync).

EDIT 5: Nothing helps. I reckon either the check for “time went backwards” is somewhat buggy, or some odd time sync still takes place. For now, because I don’t really need it, I am just disabling the metrics server.

EDIT 6: Huh. Got it, finally. Never surrender, I guess 😃. Turns out, you gotta attack it from all sides.

  • VBoxManage setextradata ... (see above)
  • Add the --disable-timesync argument to vboxadd-service (see above)
  • VBoxManage modifyvm "vm name" --paravirtprovider none (this is new; it turns out that even if one tells the guest additions not to sync time, and disables time syncing on the VM, something can still sync the time; this setting makes the guest OS completely blind to the fact that it is running in a VM). I suggest keeping ntp and the guest additions (with time sync disabled) on at all times (a quick verification sketch follows this list).
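
A quick way to confirm the paravirt setting applied (an added sketch; VBoxManage showvminfo should list the current provider):

VBoxManage showvminfo "vm name" | grep -i paravirt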

EDIT 7: Also … do not let your host sleep. When it sleeps and then wakes up, a lot of k8s components cannot cope with it and begin rolling. They should recover, but if you can avoid sleeping, do. For me, this actually meant installing and using “Don’t sleep” (an alternative to “Caffeine” or similar) to keep the host awake. For some reason, when you lock your computer, Windows will try to sleep. It should be possible to disable this behavior (https://superuser.com/questions/1153162/how-do-i-stop-windows-10-going-to-sleep-after-locking-with-winl-key#1186786), but even though I did, it would still sleep. That’s why I ended up with “Don’t sleep”.

With all of this in place, it finally works as expected. None of the k8s pods roll anymore.