vpp: contiv-vpp is unable to steal the NIC

Hi All,

I’m trying to use contiv-vpp in my local setup. The installation itself went fine, but the problem occurs when I try to ping a pod IP from a different node; pinging from the same node works well.

Any help would be highly appreciated. Thanks!

al-server-a:~$ ping 10.1.1.2 -c 2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=63 time=0.405 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=63 time=0.341 ms

--- 10.1.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1028ms
rtt min/avg/max/mdev = 0.341/0.373/0.405/0.032 ms
al-server-a:~$ ping 10.1.2.6 -c 2
PING 10.1.2.6 (10.1.2.6) 56(84) bytes of data.

--- 10.1.2.6 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1031ms
$ kubectl get po --all-namespaces -o wide
NAMESPACE     NAME                                  READY     STATUS             RESTARTS   AGE       IP          NODE
default       alpine-q5vkp                          1/1       Running            0          3m        10.1.1.2    al-server-a
default       alpine-rhg9j                          1/1       Running            0          2m        10.1.2.6    al-blacknode-a
kube-system   contiv-etcd-0                         1/1       Running            0          7m        1.2.3.10    al-server-a
kube-system   contiv-ksr-7lk5p                      1/1       Running            0          7m        1.2.3.10    al-server-a
kube-system   contiv-vswitch-hmwt2                  1/1       Running            0          7m        1.2.3.100   al-blacknode-a
kube-system   contiv-vswitch-lvfwv                  1/1       Running            0          7m        1.2.3.10    al-server-a
kube-system   etcd-al-server-a                      1/1       Running            0          24m       1.2.3.10    al-server-a
kube-system   kube-apiserver-al-server-a            1/1       Running            0          24m       1.2.3.10    al-server-a
kube-system   kube-controller-manager-al-server-a   1/1       Running            0          24m       1.2.3.10    al-server-a
kube-system   kube-dns-86f4d74b45-kxt4z             3/3       Running            0          24m       1.2.3.10    al-server-a
kube-system   kube-proxy-f86hw                      1/1       Running            0          24m       1.2.3.100   al-blacknode-a
kube-system   kube-proxy-tl56k                      1/1       Running            0          25m       1.2.3.10    al-server-a
kube-system   kube-scheduler-al-server-a            1/1       Running            0          24m       1.2.3.10    al-server-a

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 26 (3 by maintainers)

Most upvoted comments

It looks like you have an Intel XXV710 adapter. I’ve never played with this one. Try modprobe igb_uio and bind the NIC to igb_uio.
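As a rough sketch of what that might look like (igb_uio is an out-of-tree DPDK module, so it usually has to be built and installed first; the exact steps depend on your DPDK version):

$ sudo modprobe uio          # generic userspace I/O framework that igb_uio depends on
$ sudo modprobe igb_uio      # fails if the module has not been built/installed for this kernel
$ lsmod | grep uio           # verify both modules are now loaded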

I have an XL710 and I am able to bind it to either vfio-pci or igb_uio. Maybe yours takes only igb_uio.
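If you do want to try vfio-pci, keep in mind that it normally needs the IOMMU enabled; a quick check could look like this (the kernel parameters shown are the usual Intel/AMD ones and may differ on your platform):

$ sudo modprobe vfio-pci
$ dmesg | grep -i -e DMAR -e IOMMU     # look for IOMMU initialisation messages
# if nothing shows up, add intel_iommu=on (or amd_iommu=on) to the kernel command line and reboot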

You can also use dpdk-devbind.py to check the status of the NICs: dpdk-devbind.py --status. The NIC in question should be displayed in the “Network devices using DPDK-compatible driver” section. If it isn’t, try to bind the NIC to a DPDK driver: dpdk-devbind.py --bind=XXX 01:00.1, replacing XXX with the correct DPDK driver name. A sketch of that sequence follows below.
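Putting it together, it might look roughly like this (01:00.1 is the PCI address mentioned above and igb_uio is just one possible driver; adjust both to your setup):

$ sudo dpdk-devbind.py --status                 # NIC should appear under "Network devices using DPDK-compatible driver"
$ sudo dpdk-devbind.py --bind=igb_uio 01:00.1   # rebind it if it is still attached to a kernel driver
$ sudo dpdk-devbind.py --status                 # confirm it moved to the DPDK-compatible section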

One more thing: before deploying contiv-vpp, please ensure that the interface is down: the ‘ip link show’ command should show it in DOWN state.
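For example (enp1s0f1 is only a placeholder; substitute the name of the NIC you want VPP to take over):

$ sudo ip link set enp1s0f1 down
$ ip link show enp1s0f1        # the output should contain "state DOWN" before contiv-vpp is deployed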