k3s: Operation cannot be fulfilled on "svccontroller": delaying object set, requeuing

Environmental Info: K3s Version: v1.19.4+k3s1 (2532c10f)

Node(s) CPU architecture, OS, and Version:

Linux control01 5.8.0-1010-raspi #13-Ubuntu SMP PREEMPT Wed Dec 9 17:14:07 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

All nodes are identically configured Raspberry Pi 4s, booting the OS from USB 3 with one additional USB drive for storage. The LAN is 1 Gbit/s.

Cluster Configuration:

3 masters, 6 workers (Raspberry Pi 4 cluster)
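
For reference, the control-plane/worker split can be confirmed from any node with kubectl; this is only a sanity check, not part of the reproduction:

  # Show all nodes with their roles, versions, and addresses
  kubectl get nodes -o wide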

Describe the bug:

Repeating messages in the log; I can’t tell why this is happening.

Steps To Reproduce:

  • Installed K3s per the documentation for multiple control-plane nodes, with no special changes (a typical invocation is sketched below).
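
For context, an HA install with embedded etcd on this K3s version looks roughly like the sketch below. The hostnames match this cluster, but the shared token is a placeholder and the exact flags should be taken from the official docs rather than from this sketch:

  # First server initialises the embedded etcd cluster
  curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --cluster-init

  # Remaining servers (control02, control03) join the first one
  curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server --server https://control01:6443

  # Workers join as agents
  curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - agent --server https://control01:6443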

Expected behavior:

No such messages, I guess.

Actual behavior:

Repeating messages in the log on control node 2:

Jan  2 11:58:11 control02 k3s[3032622]: E0102 11:58:11.674914 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:11 control02 k3s[3032622]: E0102 11:58:11.948971 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:11 control02 k3s[3032622]: E0102 11:58:11.963071 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:11 control02 k3s[3032622]: E0102 11:58:11.972178 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.071810 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.322932 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.338703 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.370950 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.470797 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.724791 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.745745 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.771079 3032622 controller.go:135] error syncing 'kube-system/traefik': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
Jan  2 11:58:12 control02 k3s[3032622]: E0102 11:58:12.870858 3032622 controller.go:135] error syncing 'docker-registry/registry-service': handler svccontroller: Operation cannot be fulfilled on  "svccontroller": delaying object set, requeuing
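
Since these errors come from the ServiceLB (svccontroller) handler for LoadBalancer Services, a reasonable first check is whether more than one load-balancer controller is acting on the same Services (see the maintainer comment at the bottom). A hedged diagnostic sketch; the svclb-* DaemonSet names are what K3s' ServiceLB normally creates in kube-system:

  # LoadBalancer Services in the cluster
  kubectl get svc -A | grep LoadBalancer

  # ServiceLB (klipper-lb) DaemonSets created by K3s for those Services
  kubectl get ds -n kube-system | grep svclb

  # Is another LB controller (e.g. MetalLB) installed as well?
  kubectl get pods -A | grep -i metallb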

Control node 3 has the following strange logs:

Jan  2 11:58:23 control03 k3s[1656]: {"level":"warn","ts":"2021-01-02T11:58:23.089+0100","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"228.260226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1531635816537462104 username:\"etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/docker-registry/registry-service\" mod_revision:15950140 > success:<request_put:<key:\"/registry/services/specs/docker-registry/registry-service\" value_size:1137 >> failure:<>>","response":"size:20"}
Jan  2 11:58:23 control03 k3s[1656]: {"level":"warn","ts":"2021-01-02T11:58:23.512+0100","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"130.250409ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1531635816537462110 username:\"etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:15950123 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:496 >> failure:<>>","response":"size:20"}
Jan  2 11:58:24 control03 k3s[1656]: {"level":"warn","ts":"2021-01-02T11:58:24.019+0100","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"307.513167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:9"}
Jan  2 11:58:24 control03 k3s[1656]: {"level":"info","ts":"2021-01-02T11:58:24.019+0100","caller":"traceutil/trace.go:145","msg":"trace[70856247] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:15950163; }","duration":"307.726572ms","start":"2021-01-02T11:58:23.711+0100","end":"2021-01-02T11:58:24.019+0100","steps":["trace[70856247] 'agreement among raft nodes before linearized reading'  (duration: 307.303354ms)"]}
Jan  2 11:58:24 control03 k3s[1656]: {"level":"warn","ts":"2021-01-02T11:58:24.026+0100","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"102.066104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/longhorn.io/engineimages/\" range_end:\"/registry/longhorn.io/engineimages0\" count_only:true ","response":"range_response_count:0 size:9"}
Jan  2 11:58:24 control03 k3s[1656]: {"level":"info","ts":"2021-01-02T11:58:24.026+0100","caller":"traceutil/trace.go:145","msg":"trace[2037172839] range","detail":"{range_begin:/registry/longhorn.io/engineimages/; range_end:/registry/longhorn.io/engineimages0; response_count:0; response_revision:15950164; }","duration":"102.291249ms","start":"2021-01-02T11:58:23.924+0100","end":"2021-01-02T11:58:24.026+0100","steps":["trace[2037172839] 'agreement among raft nodes before linearized reading'  (duration: 101.885143ms)"]}
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.785488    1656 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.786763    1656 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.786878    1656 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.786963    1656 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.787044    1656 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.787526    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.787598    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.787867    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.787943    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: E0102 11:58:29.787967    1656 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.788264    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.788332    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.788388    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: W0102 11:58:29.788442    1656 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Jan  2 11:58:29 control03 k3s[1656]: E0102 11:58:29.788465    1656 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Jan  2 11:59:28 control03 k3s[1656]: {"level":"warn","ts":"2021-01-02T11:59:28.547+0100","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"119.160408ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/rancher.io-local-path\" ","response":"range_response_count:1 size:633"}
Jan  2 11:59:28 control03 k3s[1656]: {"level":"info","ts":"2021-01-02T11:59:28.547+0100","caller":"traceutil/trace.go:145","msg":"trace[1725241471] range","detail":"{range_begin:/registry/services/endpoints/kube-system/rancher.io-local-path; range_end:; response_count:1; response_revision:15950914; }","duration":"119.36422ms","start":"2021-01-02T11:59:28.427+0100","end":"2021-01-02T11:59:28.547+0100","steps":["trace[1725241471] 'agreement among raft nodes before linearized reading'  (duration: 118.965262ms)"]}

Control node 01 seems to work fine.

Additional context / logs:

All nodes are on a wired LAN, with nothing else on it.
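
With three servers running embedded etcd, inter-node network latency matters as much as disk latency. A quick check from control01 (hostnames as above; iperf3 is an optional extra tool, not something K3s ships):

  # Round-trip latency to the other control-plane nodes
  ping -c 20 control02
  ping -c 20 control03

  # Optional throughput check; run 'iperf3 -s' on control02 first
  iperf3 -c control02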

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 16 (5 by maintainers)

Most upvoted comments

Are you by any chance running both MetalLB and ServiceLB? If you’re going to use another LB controller, you should make sure you have started all your K3s server nodes with --disable-servicelb.
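
For anyone hitting the same symptom, one way to apply that advice via the install script is sketched below; the flag is taken verbatim from the comment above, and any other server flags already in use (e.g. --server or --cluster-init) need to be kept in INSTALL_K3S_EXEC as well:

  # Re-run on each K3s server node; this rewrites the systemd unit with the given flags
  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable-servicelb" sh -s -

  # Afterwards the svclb-* DaemonSets should disappear from kube-system
  kubectl get ds -n kube-system | grep svclb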