kubevirt: VMI created with the pod/masquerade network on Calico cannot be accessed
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: A VMI created with the default pod network in masquerade mode cannot be accessed.
What you expected to happen: The VMI can be reached at its allocated pod IP address.
How to reproduce it (as minimally and precisely as possible): In a Kubernetes cluster created by kubeadm v1.15.1, install KubeVirt v0.24.0. The pod network is provided by Calico v3.8.0.
Create a VMI with the following spec:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi
spec:
  terminationGracePeriodSeconds: 30
  domain:
    resources:
      requests:
        memory: 1024M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: emptydisk
        disk:
          bus: virtio
      - disk:
          bus: virtio
        name: cloudinitdisk
      interfaces:
      - name: default
        masquerade: {}
        ports:
        - port: 22 # allow incoming SSH traffic on port 22 into the virtual machine
  networks:
  - name: default
    pod: {}
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo:latest
  - name: emptydisk
    emptyDisk:
      capacity: "2Gi"
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |-
        #cloud-config
        password: fedora
        chpasswd: { expire: False }
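To reproduce, apply the manifest and try to reach the VMI at its pod IP — a minimal sketch, assuming the manifest is saved as testvmi.yaml:
# kubectl apply -f testvmi.yaml
# kubectl get pod -o wide | grep virt-launcher-testvmi    <- note the pod IP
# ssh fedora@<pod-ip>                                     <- hangs from any other pod or node, despite the port-22 forward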
# kubectl describe vmi testvmi
Name:         testvmi
Namespace:    default
Labels:       kubevirt.io/nodeName=host03
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"kubevirt.io/v1alpha3","kind":"VirtualMachineInstance","metadata":{"annotations":{},"name":"testvmi","namespace":"def...
              kubevirt.io/latest-observed-api-version: v1alpha3
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version:  kubevirt.io/v1alpha3
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2019-12-19T06:20:27Z
  Generation:          9
  Resource Version:    23769874
  Self Link:           /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/testvmi
  UID:                 649a898d-5219-4bf3-9100-d2d2fba566f8
Spec:
  Domain:
    Devices:
      Disks:
        Disk:
          Bus:  virtio
        Name:   containerdisk
        Disk:
          Bus:  virtio
        Name:   emptydisk
        Disk:
          Bus:  virtio
        Name:   cloudinitdisk
      Interfaces:
        Masquerade:
        Name:  default
        Ports:
          Port:  22
    Machine:
      Type:
    Resources:
      Requests:
        Memory:  1024M
  Networks:
    Name:  default
    Pod:
  Node Selector:
    kubernetes.io/hostname:  host03
  Termination Grace Period Seconds:  30
  Volumes:
    Container Disk:
      Image:  kubevirt/fedora-cloud-container-disk-demo:latest
    Name:     containerdisk
    Empty Disk:
      Capacity:  2Gi
    Name:        emptydisk
    Cloud Init No Cloud:
      User Data:  #cloud-config
                  password: fedora
                  chpasswd: { expire: False }
    Name:  cloudinitdisk
Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Status:                True
    Type:                  LiveMigratable
    Last Probe Time:       <nil>
    Last Transition Time:  2019-12-19T06:18:05Z
    Status:                True
    Type:                  Ready
  Guest OS Info:
  Interfaces:
    Ip Address:  192.168.25.222
    Mac:         02:00:00:8a:fd:b0
    Name:        default
  Migration Method:  BlockMigration
  Node Name:         host03
  Phase:             Running
  Qos Class:         Burstable
Events:  <none>
The VMI is created successfully with an allocated pod IP address of 192.168.25.221, but it cannot be accessed from outside its pod.
Anything else we need to know?:
From inside the virt-launcher container of the VMI, other containers and hosts can be reached. From inside the VMI, only the virt-launcher container of the VMI itself can be reached, not any other host or container.
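A sketch of the checks behind this observation (the virt-launcher pod name is an example and ping is assumed to be available in the compute container):
# kubectl exec -it virt-launcher-testvmi-xxxxx -c compute -- ping -c 3 <other-pod-ip>    <- replies
# virtctl console testvmi
(inside the VMI)
# ping -c 3 10.0.2.1            <- the in-pod gateway replies
# ping -c 3 <other-pod-ip>      <- no reply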
Environment:
- KubeVirt version (use virtctl version): v0.24.0
- Kubernetes version (use kubectl version): v1.15.1
- VM or VMI specifications: See above
- Cloud provider or hardware configuration: VM created from OpenStack
- OS (e.g. from /etc/os-release): CentOS Linux release 7.4.1708 (Core)
- Kernel (e.g. uname -a): 3.10.0-693.el7.x86_64
- Install tools: kubeadm for the k8s cluster, install doc for KubeVirt
- Others:
Output of ifconfig and ip route in the virt-launcher container of the VMI:
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet 192.168.25.221 netmask 255.255.255.255 broadcast 0.0.0.0
ether 12:29:b9:13:db:50 txqueuelen 0 (Ethernet)
RX packets 21112 bytes 102171371 (97.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18008 bytes 1263229 (1.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
k6t-eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet 10.0.2.1 netmask 255.255.255.0 broadcast 10.0.2.255
ether 82:b9:06:f7:9b:58 txqueuelen 0 (Ethernet)
RX packets 119 bytes 9003 (8.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 2294 (2.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
k6t-eth0-nic: flags=195<UP,BROADCAST,RUNNING,NOARP> mtu 1500
ether 82:b9:06:f7:9b:58 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21 bytes 2823 (2.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 5 bytes 364 (364.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5 bytes 364 (364.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
ether fe:00:00:e9:78:81 txqueuelen 1000 (Ethernet)
RX packets 119 bytes 10669 (10.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 2294 (2.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# ip route
default via 169.254.1.1 dev eth0
10.0.2.0/24 dev k6t-eth0 proto kernel scope link src 10.0.2.1
169.254.1.1 dev eth0 scope link
# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBEVIRT_PREINBOUND all -- anywhere anywhere
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere localhost tcp dpt:ssh to:10.0.2.2
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.0.2.2 anywhere
KUBEVIRT_POSTINBOUND all -- anywhere anywhere
Chain KUBEVIRT_POSTINBOUND (1 references)
target prot opt source destination
SNAT tcp -- anywhere anywhere tcp dpt:ssh to:10.0.2.1
Chain KUBEVIRT_PREINBOUND (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp dpt:ssh to:10.0.2.2
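The DNAT/SNAT rules above look correct, which points away from iptables and toward IP forwarding itself. A quick check in the virt-launcher network namespace (pod name again an example); on a cluster hit by this issue the value is expected to be 0, matching the fix referenced in the commits below:
# kubectl exec -it virt-launcher-testvmi-xxxxx -c compute -- cat /proc/sys/net/ipv4/ip_forward
0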
Output of ifconfig and ip route inside the VMI:
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet 10.0.2.2 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::ff:fee9:7881 prefixlen 64 scopeid 0x20<link>
ether 02:00:00:e9:78:81 txqueuelen 1000 (Ethernet)
RX packets 29 bytes 2574 (2.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 125 bytes 11173 (10.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 36 bytes 3120 (3.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 36 bytes 3120 (3.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# ip route
default via 10.0.2.1 dev eth0 proto dhcp metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.2 metric 100
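The guest routes everything via 10.0.2.1, so all guest traffic depends on the pod forwarding packets between k6t-eth0 and eth0. A hedged way to watch where the packets stop, assuming tcpdump is available in the compute container:
# kubectl exec -it virt-launcher-testvmi-xxxxx -c compute -- tcpdump -ni k6t-eth0    <- guest packets arrive on the bridge
# kubectl exec -it virt-launcher-testvmi-xxxxx -c compute -- tcpdump -ni eth0        <- but are never forwarded out while ip_forward=0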
The output of virsh dumpxml:
<domain type='qemu' id='1'>
<name>default_testvmi</name>
<uuid>a5593c0a-99b3-49a8-a64b-326a4303c933</uuid>
<metadata>
<kubevirt xmlns="http://kubevirt.io">
<uid>b8a76915-5a64-4cc1-8e30-dff53833825f</uid>
<graceperiod>
<deletionGracePeriodSeconds>30</deletionGracePeriodSeconds>
</graceperiod>
</kubevirt>
</metadata>
<memory unit='KiB'>1000448</memory>
<currentMemory unit='KiB'>1000448</currentMemory>
<vcpu placement='static'>1</vcpu>
<iothreads>1</iothreads>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>KubeVirt</entry>
<entry name='product'>None</entry>
<entry name='family'>KubeVirt</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>EPYC-IBPB</model>
<vendor>AMD</vendor>
<topology sockets='1' cores='1' threads='1'/>
<feature policy='require' name='acpi'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='erms'/>
<feature policy='require' name='mpx'/>
<feature policy='require' name='pcommit'/>
<feature policy='require' name='clwb'/>
<feature policy='require' name='pku'/>
<feature policy='require' name='la57'/>
<feature policy='require' name='3dnowext'/>
<feature policy='require' name='3dnow'/>
<feature policy='disable' name='vme'/>
<feature policy='disable' name='fma'/>
<feature policy='disable' name='avx'/>
<feature policy='disable' name='f16c'/>
<feature policy='disable' name='avx2'/>
<feature policy='disable' name='rdseed'/>
<feature policy='disable' name='sha-ni'/>
<feature policy='disable' name='xsavec'/>
<feature policy='disable' name='fxsr_opt'/>
<feature policy='disable' name='misalignsse'/>
<feature policy='disable' name='3dnowprefetch'/>
<feature policy='disable' name='osvw'/>
<feature policy='disable' name='topoext'/>
<feature policy='disable' name='ibpb'/>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/run/kubevirt-ephemeral-disks/disk-data/containerdisk/disk.qcow2'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/run/kubevirt/container-disks/disk_0.img'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='ua-containerdisk'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/run/libvirt/empty-disks/emptydisk.qcow2'/>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<alias name='ua-emptydisk'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/var/run/kubevirt-ephemeral-disks/cloud-init-data/default/testvmi/noCloud.iso'/>
<backingStore/>
<target dev='vdc' bus='virtio'/>
<alias name='ua-cloudinitdisk'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</disk>
<controller type='usb' index='0' model='none'>
<alias name='usb'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
</controller>
<interface type='bridge'>
<mac address='02:00:00:e9:78:81'/>
<source bridge='k6t-eth0'/>
<target dev='vnet0'/>
<model type='virtio'/>
<mtu size='1440'/>
<alias name='ua-default'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<serial type='unix'>
<source mode='bind' path='/var/run/kubevirt-private/b8a76915-5a64-4cc1-8e30-dff53833825f/virt-serial0'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='unix'>
<source mode='bind' path='/var/run/kubevirt-private/b8a76915-5a64-4cc1-8e30-dff53833825f/virt-serial0'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-default_testvmi/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' socket='/var/run/kubevirt-private/b8a76915-5a64-4cc1-8e30-dff53833825f/virt-vnc'>
<listen type='socket' socket='/var/run/kubevirt-private/b8a76915-5a64-4cc1-8e30-dff53833825f/virt-vnc'/>
</graphics>
<video>
<model type='vga' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<memballoon model='none'/>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
About this issue
- State: closed
- Created 5 years ago
- Comments: 32 (19 by maintainers)
Commits related to this issue
- Fix vmi created with pod masq on calico cannot be accessed: set up the necessary sysctl net.ipv4.ip_forward=1 to use masq in KubeVirt; drop the dependency on the upstream CNI. See https://github.com/kubevirt/kubevirt/i... — committed to halfcrazy/kubevirt by halfcrazy 3 years ago
Thank you very much for the helpful advice. I'll have a look on the Rancher side at how to enable it in the corresponding config file.
I don't have any experience with Rancher, so unfortunately I can't help with that. Just know that you have to enable allow_ip_forwarding on your Calico instance as described here: https://docs.projectcalico.org/reference/cni-plugin/configuration. Otherwise KubeVirt masquerade will not work.

For the record, we solved this issue on our CI by setting allow_ip_forwarding in Calico's CNI config: https://github.com/kubevirt/kubevirtci/blob/3ac3596b5c94e22cade3a92e60b869b5ca22e45f/cluster-provision/manifests/cni/calico/script.d/002-allow-ip-forwarding.sh

@azaiter thanks, we were facing the same issue.
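For anyone landing here: per the Calico docs linked above, the switch lives under container_settings in the "calico" entry of the CNI plugin configuration (typically the cni_network_config template in the calico-config ConfigMap). The snippet below is a minimal sketch of that fragment, not the exact contents of the linked script:
{
  "type": "calico",
  ...
  "container_settings": {
    "allow_ip_forwarding": true
  }
}
After changing it, the calico-node pods and the virt-launcher pod need to be recreated for the setting to take effect on newly created pod network namespaces.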