kubevirt: Virtio-FS failing on hosts with AppArmor

/kind bug

What happened: Trying out snapshot build 20200928 with #3493. virtio-fs is set up to map a CephFS-backed PVC.

Ubuntu 18.04. Host kernel: 5.4.0-47-generic. No SELinux; AppArmor enabled.

The docker-default AppArmor profile is too restrictive for Virtio-FS to work.

QEMU fails to start with:

{"component":"virt-launcher","level":"error","msg":"internal error: qemu unexpectedly closed the monitor: 2020-09-30T21:08:34.249505Z qemu-kvm: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=sharedfs,bus=pci.1,addr=0x0: Failed to read msg header. Read 0 instead of 12. Original request 1.","pos":"qemuProcessReportLogError:2064","subcomponent":"libvirt","thread":"56","timestamp":"2020-09-30T21:08:34.250000Z"}
qemu-kvm: -device vhost-user-fs-pci,chardev=chr-vu-fs0,queue-size=1024,tag=sharedfs,bus=pci.1,addr=0x0: vhost_dev_init failed: Operation not permitted

virtiofsd logs:

    [617100180193229] [ID: 00000001] mount(/, MS_REC|MS_SLAVE): Permission denied
    [617101334792332] [ID: 00000194] virtio_session_mount: Waiting for vhost-user socket connection...
    [617101363029043] [ID: 00000194] virtio_session_mount: Received vhost-user socket connection
    [617101384956914] [ID: 00000001] mount(/, MS_REC|MS_SLAVE): Permission denied
    [617102539847175] [ID: 00000205] virtio_session_mount: Waiting for vhost-user socket connection...
    [617102575292582] [ID: 00000205] virtio_session_mount: Received vhost-user socket connection
    [617102593016668] [ID: 00000001] mount(/, MS_REC|MS_SLAVE): Permission denied
    [617103827655704] [ID: 00000216] virtio_session_mount: Waiting for vhost-user socket connection...
    [617103856087607] [ID: 00000216] virtio_session_mount: Received vhost-user socket connection
    [617103872973299] [ID: 00000001] mount(/, MS_REC|MS_SLAVE): Permission denied
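
For reference, one way to confirm that it is the docker-default profile confining virtiofsd is to read /proc/<pid>/attr/current for the process on the host. A minimal, purely illustrative sketch (not part of the original report):

    // Illustrative only: print the AppArmor label confining a process by
    // reading /proc/<pid>/attr/current on the host. For the virtiofsd/qemu
    // PIDs this is expected to show something like "docker-default (enforce)".
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        pid := "self"
        if len(os.Args) > 1 {
            pid = os.Args[1] // PID of virtiofsd or qemu on the host
        }
        data, err := os.ReadFile("/proc/" + pid + "/attr/current")
        if err != nil {
            fmt.Fprintln(os.Stderr, "reading AppArmor label failed:", err)
            os.Exit(1)
        }
        fmt.Printf("AppArmor label for pid %s: %s", pid, data)
    }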

Workaround: annotate the VMI with:

      annotations:
        container.apparmor.security.beta.kubernetes.io/compute: unconfined

Since new AppArmor profiles can't easily be created, a reasonable fix for this bug is to have virt-controller add this annotation to the Pod. This might fail if AppArmor is enabled in Kubernetes but the kernel module is not loaded, so test on a non-AppArmor system before merging a change like this. At the very least, it should be covered in the (hopefully upcoming) documentation for the Virtio-FS implementation.
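
A rough sketch of what such a virt-controller change could look like follows. Everything here is hypothetical (the function names, the vmiUsesVirtiofs stand-in, and the node-side check), not KubeVirt's actual code; it only illustrates where the annotation would be injected and where the "kernel module not loaded" caveat would have to be handled.

    // Hypothetical sketch, not KubeVirt's actual code: inject the AppArmor
    // override into the virt-launcher pod when the VMI uses virtiofs.
    package example

    import (
        "os"
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    // Per-container AppArmor override from the workaround above; "compute" is
    // the virt-launcher container that runs qemu and virtiofsd.
    const appArmorComputeAnnotation = "container.apparmor.security.beta.kubernetes.io/compute"

    // relaxAppArmorForVirtiofs would run while virt-controller renders the
    // virt-launcher pod; vmiUsesVirtiofs stands in for however the controller
    // detects a virtiofs filesystem in the VMI spec.
    func relaxAppArmorForVirtiofs(pod *corev1.Pod, vmiUsesVirtiofs bool) {
        if !vmiUsesVirtiofs {
            return
        }
        if pod.Annotations == nil {
            pod.Annotations = map[string]string{}
        }
        // Don't override a profile the user has pinned explicitly.
        if _, exists := pod.Annotations[appArmorComputeAnnotation]; !exists {
            pod.Annotations[appArmorComputeAnnotation] = "unconfined"
        }
    }

    // hostAppArmorEnabled is the node-side half of the caveat above: the
    // annotation only matters if the AppArmor module is actually loaded.
    // virt-controller cannot read the node's sysfs, so a check like this
    // would have to live in a node-local component (e.g. virt-handler).
    func hostAppArmorEnabled() bool {
        data, err := os.ReadFile("/sys/module/apparmor/parameters/enabled")
        return err == nil && strings.TrimSpace(string(data)) == "Y"
    }

Since annotating the VMI already works as a manual workaround, the same annotation key presumably just needs to end up on the virt-launcher pod, which is what the sketch does directly.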

Most upvoted comments

Adding container.apparmor.security.beta.kubernetes.io/compute: unconfined to the pod spec should solve (or at least work around) this.