runtime: Raw Socket on Linux - Outbound data missing

I am using a raw socket on Linux to capture all traffic on a specific interface and port. The issue I'm having is that on eth0 I see only the inbound traffic, but on the lo interface I can see both inbound and outbound. Is there a trick to getting the outbound data on the eth0 interface?

    _socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Tcp);
    _socket.Bind(new IPEndPoint(unicastAddress.Address, port));
    _socket.BeginReceive(_byteData, 0, _byteData.Length, SocketFlags.None, new AsyncCallback(OnReceive), null);

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 3
  • Comments: 19 (9 by maintainers)

Most upvoted comments

This should now work with a 3.0 Preview or daily builds.

Note that what we get is the basic ability to create a Socket with AddressFamily.Packet. If you want to do anything fancy like setting a BPF filter, you will need to get socket.Handle and p/invoke functions from libc or libpcap. By default you will get all packets from all interfaces, which may or may not be what you want. I added a simple example that takes an interface index (as an int for simplicity) and binds to it.

using System;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;

namespace Capture
{
    class Program
    {
        /*
          // from linux/if_packet.h (20 bytes total)
          struct sockaddr_ll {
            unsigned short sll_family;    // bytes 0-1
            __be16         sll_protocol;  // bytes 2-3, big-endian
            int            sll_ifindex;   // bytes 4-7
            unsigned short sll_hatype;    // bytes 8-9
            unsigned char  sll_pkttype;   // byte 10
            unsigned char  sll_halen;     // byte 11
            unsigned char  sll_addr[8];   // bytes 12-19
          };
        */
        class LLEndPoint : EndPoint
        {
            private Int32   _ifIndex;

            public LLEndPoint(int interfaceIndex)
            {
                _ifIndex = interfaceIndex;
            }

            public override SocketAddress Serialize()
            {
                var a = new SocketAddress(AddressFamily.Packet, 20);
                byte[] asBytes = BitConverter.GetBytes(_ifIndex);
                a[4] = asBytes[0];
                a[5] = asBytes[1];
                a[6] = asBytes[2];
                a[7] = asBytes[3];

                return a;
            }
        }
        static void doBind(Socket s, int ifIndex)
        {
            var address = new LLEndPoint(ifIndex);

            s.Bind(address);
        }

        static void Main(string[] args)
        {
            Int16 protocol = 0x800; // IP.
            var _socket = new Socket(AddressFamily.Packet, SocketType.Raw, (ProtocolType)System.Net.IPAddress.HostToNetworkOrder(protocol));

            if (args.Length > 0)
            {
                doBind(_socket, int.Parse(args[0]));
            }

            byte[] packet = new byte[1508];

            int count = 10;
            while (count > 0)
            {
                int packetLen = _socket.Receive(packet);
                Console.WriteLine("got packet {0} {1} - {2}", packetLen, new IPAddress(packet.AsSpan().Slice(26, 4)), new IPAddress(packet.AsSpan().Slice(30, 4)));
                count--;
            }
        }
    }
}

We (Networking team) are in general fine with the API addition, assuming it solves the original problem - @wfurt will investigate.

Any plan or process for this? I want to use raw sockets in .NET Core on Linux to build a server that processes packets with raw IP headers. 😂

In case anyone comes across this thread and is looking for examples, check my repo that was posted here; I have PACKET_FANOUT and BPF filters working now.

Once the socket is created, there should be no difference from libpcap/tcpdump or any other direct use of the socket.

Thanks for the repro, but as I run directly on Linux I did a quick check with my sample code. It seems the problem is in the bind example. If I add

   a[3] = 3;   // ETH_P_ALL
   a[10] = 4;  // PACKET_OUTGOING

I can see packets in both directions.

got packet 66 10.37.129.2 - 10.37.129.3
got packet 1458 10.37.129.2 - 10.37.129.3
got packet 66 10.37.129.3 - 10.37.129.2
got packet 114 10.37.129.2 - 10.37.129.3
got packet 66 10.37.129.3 - 10.37.129.2
got packet 430 10.37.129.3 - 10.37.129.2

It was mostly meant as a guide for how to use Bind() without introducing an AF-specific C# structure. You can always skip it and p/invoke bind from libc.

If that does not work, you can use 'strace -f -e trace=network xxx' to see what the difference is between your app and tcpdump. I may not be able to get back to this for a while, as I need to focus on the remaining 3.0 issues.

https://www.linuxquestions.org/questions/programming-9/problem-sniffing-with-raw-sockets-222971/

You would need to use AF_PACKET (https://www.binarytides.com/packet-sniffer-code-in-c-using-linux-sockets-bsd-part-2/).

However, that address family is currently not supported. Your best option may be invoking libpcap.

Sorry to continue on a closed thread, but the discussion here seems relevant. Looking at the source, it appears .NET Core uses the recvmsg system call under the hood when reading from the socket. For a raw socket it would probably be preferable to leverage PACKET_RX_RING to minimize the number of system calls and allocations. Is MemoryMappedFiles intended to be useful for these types of things, or am I off down the wrong path completely?