kubevirt: Multicast traffic cannot pass through the virt-launcher pod's bridge.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
Multicast traffic cannot pass from/to a VM via the virt-launcher pod network.

What you expected to happen:
All kinds of traffic pass transparently from/to the VM via the virt-launcher pod network.

How to reproduce it (as minimally and precisely as possible):
Create two VMs, configure a direct L2 link between them using the 'bridge' connection type, and run some multicast protocol over it (LLDP, for example). You will see that traffic leaves the VM but does not pass through the virt-launcher pod.

Anything else we need to know?:
A possible solution is to set the following flag on the virt-launcher pod's bridge:

echo 65528 > /sys/class/net/virbr0/bridge/group_fwd_mask
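
For context, 65528 is 0xFFF8: the bridge's group_fwd_mask is a bitmap over the link-local group addresses 01-80-C2-00-00-00 through 01-80-C2-00-00-0F, and the kernel keeps bits 0-2 (STP, pause and LACP frames) restricted, so 0xFFF8 is the widest mask it accepts; LLDP's nearest-bridge address 01-80-C2-00-00-0E corresponds to bit 14. A minimal Go sketch of the write involved (the helper is illustrative, not KubeVirt's code; k6t-eth0 is the in-pod bridge name that appears in the error output further down):

package main

import (
	"fmt"
	"os"
)

// setGroupFwdMask writes a bridge's group_fwd_mask sysfs attribute.
// 0xFFF8 (65528) unmasks forwarding of every 01-80-C2-00-00-0X group
// address the kernel allows; LLDP (01-80-C2-00-00-0E) is bit 14.
func setGroupFwdMask(bridge string, mask uint16) error {
	path := fmt.Sprintf("/sys/class/net/%s/bridge/group_fwd_mask", bridge)
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", mask)), 0644)
}

func main() {
	// k6t-eth0: bridge created inside the virt-launcher pod (assumption
	// based on the error output below).
	if err := setGroupFwdMask("k6t-eth0", 0xFFF8); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}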
Environment:
  • KubeVirt version (use virtctl version): Client Version: version.Info{GitVersion:"v0.16.0", GitCommit:"06d91198bededc7a9353ac774221c83580ee3373", GitTreeState:"clean", BuildDate:"2019-04-05T15:24:49Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{GitVersion:"v0.16.0-gb37c22bbf", GitCommit:"$Format:%H$", GitTreeState:"", BuildDate:"2019-04-18T08:47:30Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes version (use kubectl version): [root@zeus06 cnv-tests]# kubectl version Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-12-06T18:30:39Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.4+0ba401e", GitCommit:"0ba401e", GitTreeState:"clean", BuildDate:"2019-03-31T22:28:12Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
  • VM or VMI specifications: VM-A
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vma
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: br1
              bridge: {}
        resources:
          requests:
            memory: 1G
        cpu:
          cores: 2
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora30-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1

VM-B

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vmb
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: br1
              bridge: {}
        resources:
          requests:
            memory: 1G
        cpu:
          cores: 2
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora30-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1


Most upvoted comments

Update: I took a look at this, and the problem is that because the pod is not privileged we cannot change the bridge's group_fwd_mask file (see the sketch after the error output below).

error output:

{"component":"virt-launcher","level":"fatal","msg":"failed to prepared pod networking","pos":"podinterface.go:88","reason":"open /sys/class/net/k6t-eth0/bridge/group_fwd_mask: read-only file system","timestamp":"2019-06-10T12:18:43.006218Z"}
panic: open /sys/class/net/k6t-eth0/bridge/group_fwd_mask: read-only file system

goroutine 25 [running]:
kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/network.(*PodInterface).Plug(0x22576b0, 0xc000c48000, 0xc000b86240, 0xc00049bb60, 0xc000318000, 0x14ade7d, 0x4, 0x10, 0x12c8fe0)
	pkg/virt-launcher/virtwrap/network/podinterface.go:89 +0x536
kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/network.SetupNetworkInterfaces(0xc000c48000, 0xc000318000, 0x0, 0x7)
	pkg/virt-launcher/virtwrap/network/network.go:83 +0x520
kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap.(*LibvirtDomainManager).preStartHook(0xc0000806c0, 0xc000c48000, 0xc000318000, 0x16669c0, 0x0, 0x163a5e0)
	pkg/virt-launcher/virtwrap/manager.go:728 +0x3ad
kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap.(*LibvirtDomainManager).SyncVMI(0xc0000806c0, 0xc000c48000, 0xc0003c8500, 0x0, 0x0, 0x0)
	pkg/virt-launcher/virtwrap/manager.go:881 +0x5e2
kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/cmd-server.(*Launcher).SyncVirtualMachine(0xc000623fa0, 0x16541c0, 0xc00049a780, 0xc00049a7b0, 0xc000623fa0, 0xc00049a6f0, 0x132f6e0)
	pkg/virt-launcher/virtwrap/cmd-server/server.go:159 +0x81
kubevirt.io/kubevirt/pkg/handler-launcher-com/cmd/v1._Cmd_SyncVirtualMachine_Handler(0x13f6480, 0xc000623fa0, 0x16541c0, 0xc00049a780, 0xc000218cd0, 0x0, 0x0, 0x0, 0xc00034b800, 0x7e7)
	bazel-out/k8-fastbuild/bin/pkg/handler-launcher-com/cmd/v1/linux_amd64_stripped/kubevirt_cmd_go_proto%/kubevirt.io/kubevirt/pkg/handler-launcher-com/cmd/v1/cmd.pb.go:515 +0x23e
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00046ad80, 0x165fd60, 0xc000483b00, 0xc000b8a200, 0xc00002a9f0, 0x2193600, 0x0, 0x0, 0x0)
	external/org_golang_google_grpc/server.go:971 +0x4a2
google.golang.org/grpc.(*Server).handleStream(0xc00046ad80, 0x165fd60, 0xc000483b00, 0xc000b8a200, 0x0)
	external/org_golang_google_grpc/server.go:1250 +0xd61
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc0006461f0, 0xc00046ad80, 0x165fd60, 0xc000483b00, 0xc000b8a200)
	external/org_golang_google_grpc/server.go:690 +0x9f
created by google.golang.org/grpc.(*Server).serveStreams.func1
	external/org_golang_google_grpc/server.go:688 +0xa1
{"component":"virt-launcher","level":"error","msg":"dirty virt-launcher shutdown","pos":"virt-launcher.go:558","reason":"exit status 2","timestamp":"2019-06-10T12:18:43.028453Z"}

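The root cause of the panic above is that /sys is mounted read-only inside the unprivileged virt-launcher pod, so the write fails with EROFS no matter what the file permissions are. A small Go sketch (illustrative, not KubeVirt code) that probes for that condition, so the launcher could report a clear error instead of a dirty shutdown:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// sysfsReadOnly reports whether /sys is mounted read-only, which is the
// situation inside an unprivileged pod and the reason the group_fwd_mask
// write above fails.
func sysfsReadOnly() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys", &st); err != nil {
		return false, err
	}
	return st.Flags&unix.ST_RDONLY != 0, nil
}

func main() {
	ro, err := sysfsReadOnly()
	if err != nil {
		panic(err)
	}
	fmt.Println("sysfs read-only:", ro) // prints true in an unprivileged pod
}
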
I think we should consider going back to @rmohr's idea of moving the network preparation into virt-handler. PR: https://github.com/kubevirt/kubevirt/pull/1404
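
For reference, here is a rough sketch of what that could look like: virt-handler runs privileged on every node, so it can enter the virt-launcher pod's network namespace and apply the setting there. This is only a sketch under assumptions, not the implementation from that PR: the netns path and bridge name are illustrative, and it assumes vishvananda/netlink exposes the bridge's group_fwd_mask (IFLA_BR_GROUP_FWD_MASK) via a GroupFwdMask field and LinkModify.

package main

import (
	"fmt"

	"github.com/containernetworking/plugins/pkg/ns"
	"github.com/vishvananda/netlink"
)

// setMaskInPodNetns would run in a privileged process such as virt-handler.
// netnsPath is typically /proc/<virt-launcher pid>/ns/net.
func setMaskInPodNetns(netnsPath, bridgeName string, mask uint16) error {
	return ns.WithNetNSPath(netnsPath, func(_ ns.NetNS) error {
		// netlink acts on the caller's current network namespace, whereas
		// /sys/class/net keeps showing the namespace sysfs was mounted in,
		// so a netlink call is the reliable way to reach the pod's bridge.
		link, err := netlink.LinkByName(bridgeName)
		if err != nil {
			return err
		}
		br, ok := link.(*netlink.Bridge)
		if !ok {
			return fmt.Errorf("%s is not a bridge", bridgeName)
		}
		br.GroupFwdMask = &mask // assumed field, see note above
		return netlink.LinkModify(br)
	})
}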

@vladikr @booxter @phoracek @dankenigsberg, what do you think about that?