netmap: High Latency and Slot Data Inconsistency with virtio-net RX
I am experiencing high latency and various delays when bridging a virtio NIC with the Linux host rings. I have seen this both with the bridge application and with vale-ctl -h.
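For context, the bridging setup is roughly the following. This is only a sketch, and eth0 is a placeholder for the virtio-net interface on my system:

bridge -i netmap:eth0 -i netmap:eth0^   # netmap example app: forward between the NIC rings and the host rings
vale-ctl -a vale0:eth0                  # or: attach the NIC to a VALE switch ...
vale-ctl -h vale0:eth0                  # ... and attach its host stack to the same switch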
PING 10.10.5.84 (10.10.5.84) 56(84) bytes of data.
64 bytes from 10.10.5.84: icmp_seq=1 ttl=64 time=8103 ms
64 bytes from 10.10.5.84: icmp_seq=2 ttl=64 time=7102 ms
64 bytes from 10.10.5.84: icmp_seq=4 ttl=64 time=6608 ms
64 bytes from 10.10.5.84: icmp_seq=5 ttl=64 time=6807 ms
64 bytes from 10.10.5.84: icmp_seq=6 ttl=64 time=5965 ms
64 bytes from 10.10.5.84: icmp_seq=7 ttl=64 time=4971 ms
64 bytes from 10.10.5.84: icmp_seq=8 ttl=64 time=3972 ms
64 bytes from 10.10.5.84: icmp_seq=9 ttl=64 time=2974 ms
I have also noticed that slow-moving ARP replies play a role in the delays.
I have seen others solve similar issues by disabling various offloading features. I have tried this, but it does not seem to solve the issue; the resulting offload settings are listed below (see the command sketch after the listing).
rx-checksumming: off [fixed]
tx-checksumming: off
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: off
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off [fixed]
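For reference, a rough sketch of the ethtool commands used to get to this state, again with eth0 as a placeholder for the virtio-net interface. Note that rx-checksumming and large-receive-offload are reported as [fixed] on this device and cannot be changed:

ethtool -k eth0                                  # report current offload settings (listing above)
ethtool -K eth0 tx off tso off gso off gro off   # disable tx checksumming, TSO, GSO and GRO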
I was hoping to use this pathway as a control plane (specifically for ssh), but the latency and delays prevent a connection from being established. Please let me know if there is anything else we can try to improve the latency to the host stack.
I have done some more testing with other kernel versions, and I have not experienced any issues with host-ring latency or RX ring drops. Thank you for your work on this fix. I will close this issue and reopen it if I run into any more related problems.