kubevirt: Known issue: `macvlan` CNI is not supported

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

I created a VMI with a secondary Multus macvlan network interface. The attachment seems to have been successful, as I can see the eth1 interface inside the guest with the IP address correctly assigned, but I cannot reach anything through eth1.

More specifically, I want to communicate with other pods that also have a secondary macvlan interface (e.g. pod interface net1: 172.16.0.5) connected to the same subnet as the VM's secondary interface (e.g. VMI interface eth1: 172.16.0.6). But when I ping from the VMI (ping -I eth1 172.16.0.5), I get "host unreachable". Pod-to-pod communication works, but pod-to-VMI and VMI-to-VMI communication over the Multus interface does not.

What you expected to happen:

Successful pod-to-VMI and VMI-to-VMI communication via the Multus interface.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • KubeVirt version (use virtctl version): v0.38.1

  • Kubernetes version (use kubectl version): v1.20.0

  • VM or VMI specifications:

  • OS (e.g. from /etc/os-release): NAME=Fedora VERSION="32 (Cloud Edition)"

The macvlan conf file:


apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: mec
spec:
  config: '{ "cniVersion": "0.3.0", "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "172.16.0.6/24",
    "gateway": "172.16.0.1" } ] } }'

The VMI conf file:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-multus-multiple-net
  name: vmi-multus-multiple-net
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: mec
      rng: {}
    machine:
      type: ""
    resources:
      requests:
        memory: 1024M
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: mec
    name: mec
  terminationGracePeriodSeconds: 0
  volumes:
  - containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo:devel
    name: containerdisk
  - cloudInitNoCloud:
      userData: |
        #!/bin/bash
        echo "fedora" |passwd fedora --stdin
        dhclient eth1
    name: cloudinitdisk

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 23 (12 by maintainers)

Most upvoted comments

I understand that I need to wire the mybr0 bridge on Node A to another bridge on Node B, right? Can a use case like this be done using NMState?

Correct. To do so, you just have to list eth1 (or eth0) as a port of mybr0.
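
For illustration, a minimal sketch of such a policy with kubernetes-nmstate could look roughly like the following; this is an assumption rather than a config from this thread, mybr0 and eth1 are just the names discussed above, and the apiVersion may differ depending on the kubernetes-nmstate release:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: mybr0-eth1              # hypothetical policy name
spec:
  desiredState:
    interfaces:
    - name: mybr0               # the bridge the VMIs attach to
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1            # the node NIC that uplinks the bridge to the physical network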

Can you provide an example or guide from NMState related to my use case? I'm quite new to this and I'm not sure whether I'm going in the right direction or not.

Something like https://nmstate.io/kubernetes-nmstate/#how-it-works should be enough.

By the way, what is the difference between the cluster-network-addons-operator (https://github.com/kubevirt/cluster-network-addons-operator) and NMState?

The cluster-network-addons-operator used to be the way to install NMState. However, we recently introduced a new operator dedicated to NMState, so you may need to install both.
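
For reference, once the dedicated kubernetes-nmstate operator is installed, deploying the NMState handlers is typically triggered by creating an NMState custom resource; this is a sketch and the exact apiVersion may vary by release:

apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate    # the operator watches for this CR and rolls out the node handlers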

Thank you so much for the advice on starting the project! I have now switched to the bridge CNI with Multus. With a fixed static IP it works perfectly: my VM, which was snapshotted from on-premise and had ports and IPs bound to its interface, works fine when it gets a secondary IP from Multus, and VMs on the same Kubernetes node can connect to each other. The last step is just using NMState as you mentioned. If I get the full scenario working I will report back, so that anyone looking for a similar use case can benefit from it 😃

nmstate is not required per se. What is required for the bridge CNI to be useful in a cluster with multiple nodes is to connect the bridge on each node to the underlying network. You can do that with any host network configuration tooling - for example kubernetes-nmstate, which allows you to configure host networking through the Kubernetes API. Note that it depends on NetworkManager being available on your nodes, so it may not work everywhere.
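
As a sketch (not a config from this thread), a bridge CNI NetworkAttachmentDefinition roughly equivalent to the macvlan one above could look like this, assuming a bridge named mybr0 already exists on the node; the name mec-bridge is hypothetical:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: mec-bridge                # hypothetical NAD name
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mybr0",
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "172.16.0.6/24", "gateway": "172.16.0.1" }
        ]
      }
    }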

Most of our focus was given to the more flexible bridge CNI. The only reason macvlan does not work with KubeVirt is the bug mentioned earlier in this thread.

@wdrdres3qew5ts21 you can set up communication between nodes even when using the bridge CNI. To do so, you have to create the bridge before scheduling any VMs and connect the bridge to a host’s NIC. There are many ways to configure such a bridge, I’d personally recommend using https://github.com/nmstate/kubernetes-nmstate/, if your OS allows that.

You may want to check this blog post that describes use of secondary networks, including connectivity between nodes: http://kubevirt.io/2020/Multiple-Network-Attachments-with-bridge-CNI.html

Hello, is this issue still relevant? I have the exact same behavior: ping works from pod to pod, but pod-to-VMI and VMI-to-VMI communication via the Multus interface is unsuccessful. I've also tried to work around it using:

  • ip link show net1
  • bridge fdb delete <mac address above> dev net1 master

But it seems the NET_ADMIN capability has been removed from the virt-launcher pod. Can you help me with this? Should I just abandon MACVLAN, or is there a way to make it work?