go: x/net/ipv4: add IPv4 header checksum computation for ipv4.Header type

I’m working on a project that uses packet sockets directly, and I have to calculate the IPv4 checksum on my own since I’m building from Ethernet frames up.

I notice that x/net/ipv4 doesn’t provide any way to easily calculate a checksum, but I think such a function/method could be useful in conjunction with the ipv4.Header type.

I wrote a basic implementation in ~30 lines with documentation comments, and would be happy to submit it upstream. At this point, my questions are:

  • Is this something that would be considered generally useful enough to go in x/net/ipv4?
  • If so, what should the API look like?

My current implementation accepts a byte slice from the output of ipv4.Header.Marshal, but I could see a method making sense as well.

// ipv4Checksum computes the IPv4 header checksum for input IPv4 header bytes.
func ipv4Checksum(b []byte) (uint16, error) {
    // ...
}

/cc @mikioh

About this issue

  • State: open
  • Created 6 years ago
  • Reactions: 4
  • Comments: 24 (23 by maintainers)

Most upvoted comments

I’m not @mdlayher or @mikioh, but as someone recently interested in this code, I can try to explain what I believe to be going on.

Let’s say you create a raw socket and print the first packet you receive.

package main

import (
        "encoding/hex"
        "fmt"
        "net"
        "os"

        "golang.org/x/net/ipv4"
)

func main() {
        conn, err := net.ListenIP("ip:40", &net.IPAddr{
                IP: net.ParseIP("127.0.0.1"),
        })

        if err != nil {
                fmt.Fprintf(os.Stderr, "error listening: %v\n", err)
                os.Exit(-1)
        }

        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
                fmt.Fprintf(os.Stderr, "error reading: %v\n", err)
                os.Exit(-1)
        }

        fmt.Printf("buf: (len=%d)\n%s\n\n", len(buf[:n]), hex.Dump(buf[:n]))

        hdr, err := ipv4.ParseHeader(buf[:n])
        if err != nil {
                fmt.Fprintf(os.Stderr, "error parsing header: %v\n", err)
                os.Exit(-1)
        }

        fmt.Printf("IP hdr: %+v\n", hdr)
}
buf: (len=43)
00000000  45 00 00 2b f7 77 40 00  40 28 45 31 7f 00 00 01  |E..+.w@.@(E1....|
00000010  7f 00 00 01 00 00 00 18  00 00 00 07 00 07 00 00  |................|
00000020  00 01 00 00 00 00 68 65  6c 6c 6f                 |......hello|

IP hdr: ver=4 hdrlen=20 tos=0x0 totallen=43 id=0xf777 flags=0x2 fragoff=0x0 ttl=64 proto=40 cksum=0x4531 src=127.0.0.1 dst=127.0.0.1

On Linux, the bytes you receive are the same bytes received over the wire. Computing the IPv4 checksum over buf[:n] using the CL function results in the same value as we received, 0x4531.

On at least some versions of Darwin, FreeBSD, NetBSD, and DragonFlyBSD, this assumption doesn’t hold: the values may differ significantly. FreeBSD’s ip(4), for example:

Before FreeBSD 10.0 packets received on raw IP sockets had the ip_hl
subtracted from the ip_len field.

Before FreeBSD 11.0 packets received on raw IP sockets had the ip_len and
ip_off fields converted to host byte order.  Packets written to raw IP
sockets were expected to have ip_len and ip_off in host byte order.

The checksum value received on the raw connection would be what was seen “on-the-wire” (that is, with these fields in network byte order). Computing a checksum separately by using the output of Marshal on those systems would result in differing values.

I’d suggest updating the documentation of ipv4.Header.Checksum to something along the lines of “wire header checksum”. As far as I know, when sending raw packets, if the field is set to 0 it will be filled in by all kernels, even when the IP_HDRINCL option is set.

I still want to see some better docs at least. @mikioh confused me and if I ever look at this field again, I’d like to know what it means.