caddy-l4: SNI matching after proxy_protocol handler does not work

Hi, thanks for this awesome project; it makes very cool configurations possible. But I think I found a bug in the way the proxy_protocol handler works: SNI matching does not seem to be possible after the proxy_protocol handler has run. I would expect the following config to work, but instead I see this in the logs:

debug layer4.handlers.proxy_protocol  received the PROXY header       {"remote": "127.0.0.1:37055", "local": "127.0.0.1:10443"}
debug layer4.matchers.tls     matched {"remote": "127.0.0.1:37055", "server_name": "localhost"}
error    layer4  handling connection     {"error": "tls: first record does not look like a TLS handshake"}
debug layer4  connection stats        {"remote": "127.0.0.1:37055", "read": 230, "written": 0, "duration": 0.0081304}
caddy.json
{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {"level":"DEBUG", "encoder": {"format":"console"}}
    }
  },
  "apps": {
    "tls": {
      "certificates": {
        "automate": ["localhost"]
      },
      "automation": {
        "policies": [{
          "subjects": ["localhost"],
          "issuers": [{
            "module": "internal"
          }]
        }]
      }
    },
    "layer4": {
      "servers": {
        "https": {
          "listen": ["0.0.0.0:10443"],
          "routes": [
            {
              "match": [
                {"proxy_protocol": {}}
              ],
              "handle": [
                {
                  "handler": "proxy_protocol",
                  "timeout": "2s",
                  "allow": ["127.0.0.1/32"]
                }
              ]
            },
            {
              "match": [
                {"tls": {"sni": ["localhost"]}}
              ],
              "handle": [
                {"handler": "tls"},
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10080"]}]
                }
              ]
            }
          ]
        }
      }
    },
    "http": {
      "servers": {
        "backend": {
          "allow_h2c": true,
          "listen": ["127.0.0.1:10080"],
          "routes": [
            {
              "handle": [{
                "handler": "static_response",
                "status_code": "200",
                "body": "Hello World\n",
                "headers": {
                  "Content-Type": ["text/plain"]
                }
              }]
            }
          ]
        }
      }
    }
  }
}

You should be able to run this config yourself since it only uses localhost.
For testing I used curl -v --insecure --haproxy-protocol https://localhost:10443

When the match key of the second route is removed entirely, so that it matches all requests, the above curl works and prints Hello World. But in my real config I cannot use a match-all route, since I have to distribute requests to different servers.

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 27 (8 by maintainers)

Most upvoted comments

In theory, if there is any case where a “non-terminal” handler consumes fewer bytes than its corresponding matcher does, simply resetting the buffer may be problematic.

Good point. After some sleep I also do not think that the cx.buf.Reset in Wrap is the correct fix.
Resetting should happen after we read more bytes than were present in the buffer while not recording them, because at that point the content of cx.buf is definitely invalid; otherwise there would be a gap in the data.
It could be fixed like so:

diff --git a/layer4/connection.go b/layer4/connection.go
index 69202d0..87b7b2e 100644
--- a/layer4/connection.go
+++ b/layer4/connection.go
@@ -92,6 +92,12 @@ func (cx *Connection) Read(p []byte) (n int, err error) {
 	cx.bytesRead += uint64(n)
 
 	if !cx.recording {
+		// We read past buf and are not recording,
+		// which means cx.buf is now invalid and should be emptied for a clean next recording.
+		// Otherwise, there will be a gap in the data. Also see issue 55.
+		if cx.buf.Len() > 0 {
+			cx.buf.Reset()
+		}
 		return
 	}
 

With this fix all my example configs (caddy.json, tls.json, two-matchers.json) work.

But I still find it strange that Wrap passes cx.buf, which may contain content from previous matchers, but drops cx.bufReader. This is effectively also a buffer reset, just without cleaning up cx.buf: the next Read will never read from cx.buf because cx.bufReader is nil. I think Wrap should also pass all the other fields, like so:

diff --git a/layer4/connection.go b/layer4/connection.go
index 69202d0..87b7b2e 100644
--- a/layer4/connection.go
+++ b/layer4/connection.go
@@ -118,10 +124,15 @@ func (cx *Connection) Write(p []byte) (n int, err error) {
 // a connection is wrapped by a package that does not support
 // our Connection type (for example, `tls.Server()`).
 func (cx *Connection) Wrap(conn net.Conn) *Connection {
 	return &Connection{
-		Conn:    conn,
-		Context: cx.Context,
-		buf:     cx.buf,
+		Conn:         conn,
+		Context:      cx.Context,
+		buf:          cx.buf,
+		bufReader:    cx.bufReader,
+		recording:    cx.recording,
+		bytesRead:    cx.bytesRead,
+		bytesWritten: cx.bytesWritten,
 	}
 }
 

This would catch the edge case where a handler did not read all content from cx.buf (meaning cx.bufReader is not nil), so the following matchers or handlers continue reading at the correct position. It also does not reset the byte statistics.
But at this point, why not just do cx.Conn = conn in Wrap, so that no future internal field is forgotten? Apparently this was the way it was done before 5f9948fe76ae19009258465fd66da1390aa5ecce, which caused other problems (see issue #18). I do not fully understand why it does not work, but I can verify that it does not. Copying everything like in the above diff, however, works fine.
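For reference, the "just reassign" variant would look roughly like this (a sketch of the idea only; I did not check that this matches the pre-5f9948f code):

func (cx *Connection) Wrap(conn net.Conn) *Connection {
	// Mutate the existing Connection instead of copying selected fields,
	// so a newly added field can never be forgotten.
	cx.Conn = conn
	return cx
}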

I will update my PR with the new proposed fix.

I have created a PR. I tried to add as much context as possible in case this comes up again 😄

Thanks guys I learned a lot today.

Maybe an even better fix would be to call cx.buf.Reset() in Connection.Wrap:

func (cx *Connection) Wrap(conn net.Conn) *Connection {
+	cx.buf.Reset()
	return &Connection{
		Conn:    conn,
		Context: cx.Context,
		buf:     cx.buf,
	}
}

The following config, which uses no proxy protocol and only multiple matchers after TLS termination, has the same unclean-buffer problem.

tls.json
{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {"level":"DEBUG", "encoder": {"format":"console"}}
    }
  },
  "apps": {
    "tls": {
      "certificates": {
        "automate": ["localhost"]
      },
      "automation": {
        "policies": [{
          "subjects": ["localhost"],
          "issuers": [{
            "module": "internal"
          }]
        }]
      }
    },
    "layer4": {
      "servers": {
        "https": {
          "listen": ["0.0.0.0:10443"],
          "routes": [
            {
              "match": [
                {"tls": {"sni": ["localhost", "example.com"]}}
              ],
              "handle": [
                {"handler": "tls"}
              ]
            },
            {
              "match": [
                {"http": [{"host": ["example.com"]}]}
              ],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:9999"]}]
                }
              ]
            },
            {
              "match": [
                {"http": [{"host": ["localhost"]}]}
              ],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10080"]}]
                }
              ]
            }
          ]
        }
      }
    },
    "http": {
      "servers": {
        "backend": {
          "allow_h2c": true,
          "listen": ["127.0.0.1:10080"],
          "routes": [
            {
              "handle": [{
                "handler": "static_response",
                "status_code": "200",
                "body": "Hello World\n",
                "headers": {
                  "Content-Type": ["text/plain"]
                }
              }]
            }
          ]
        }
      }
    }
  }
}

With the current master and tls.json, curl -v --insecure https://localhost:10443 produces:

*   Trying 127.0.0.1:10443...
* Connected to localhost (127.0.0.1) port 10443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: ALPN, offering http/1.1
* schannel: ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: localhost:10443
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=utf-8
< Connection: close
<
400 Bad Request^C

With cx.buf.Reset() both of my configs work as expected.

@mholt Is the tls.json config from above one that should work, or am I misusing something here? The example.com route obviously does not work and only exists to make the use case more clear.

Thanks @RussellLuo, your fix works 🥳
I agree my Seek attempt was not ideal because it would require more knowledge from handlers and matchers. With your fix my cx.buf.Reset() line is also rather useless, because we never reach it anymore: cx.buf.Len() will always be 0 at that point.

To verify, I used this reproducer script and a second Caddy config together with the caddy.json from above (with two Caddy instances it is even closer to my real setup):

reproducer.py
import ssl
from urllib.request import urlopen


def main():
    ctx = ssl.SSLContext()
    ctx.verify_mode = ssl.CERT_NONE

    for i in range(0, 100):
        try:
            with urlopen("https://localhost:443", context=ctx) as r:
                if r.status != 200:
                    raise ValueError(f"Unexpected status: {r.status}")

                body = r.read().decode("utf-8")
                if body != "Hello World\n":
                    raise ValueError(f"Unexpected body: {body}")
        except Exception as e:
            print(f"request {i} failed: {e}")
            exit(-1)


if __name__ == "__main__":
    main()

forwarder.json
{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {"level":"DEBUG", "encoder": {"format":"console"}}
    }
  },
  "apps": {
    "tls": {
      "certificates": {
        "automate": ["localhost"]
      },
      "automation": {
        "policies": [{
          "subjects": ["localhost"],
          "issuers": [{
            "module": "internal"
          }]
        }]
      }
    },
    "layer4": {
      "servers": {
        "https": {
          "listen": ["0.0.0.0:443"],
          "routes": [
            {
              "match": [
                {"tls": {"sni": ["localhost"]}}
              ],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10443"]}],
                  "proxy_protocol": "v2"
                }
              ]
            }
          ]
        }
      }
    }
  }
}

Running the script against 28dab695a01c44157fc3cb17b3dcc1835b7c2eca without your fix results in a line like this: request 13 failed: <urlopen error EOF occurred in violation of protocol (_ssl.c:1125)>, indicating that the TLS handshake failed. With your fix everything works fine (no output from the script).
I think you can open a PR with that 👍

@mholt I think my refactored proxy protocol matcher from #58 is nevertheless an improvement. It is easier to understand because the whole read-one-byte-at-a-time-to-skip-the-null-byte logic is gone; it is not needed when no bufio.Reader is used. I also moved the matcher into the l4proxyprotocol package.

I also have another question. While writing tests for a rewritten but now obsolete proxy protocol handler, I noticed that it seems impossible to create a Caddy context with a logger attached to it. For example, when doing it this way:

ctx, cancel := caddy.NewContext(caddy.Context{Context: context.Background()})
defer cancel()

handler := Handler{}
err := handler.Provision(ctx)

a call to Provision that tries to use the logger, like the existing proxy protocol handler does in https://github.com/mholt/caddy-l4/blob/28dab695a01c44157fc3cb17b3dcc1835b7c2eca/modules/l4proxyprotocol/handler.go#L65, will fail because ctx.cfg.Logging is nil. Do you know any way around this? It would be sad if using a logger prevented writing tests 😃

Thanks for your analysis @RussellLuo, it motivated me to also look into the code.

I think the wrapping of cx.buf cannot be removed; it is essential for rewind to work. See the comment here: https://github.com/mholt/caddy-l4/blob/c46c253c62e31bec451035cadd22a2349700e2cc/layer4/connection.go#L66 You can also test this by adding another route with a tls matcher, like so:

two-matchers.json
{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {"level":"DEBUG", "encoder": {"format":"console"}}
    }
  },
  "apps": {
    "tls": {
      "certificates": {
        "automate": ["localhost"]
      },
      "automation": {
        "policies": [{
          "subjects": ["localhost"],
          "issuers": [{
            "module": "internal"
          }]
        }]
      }
    },
    "layer4": {
      "servers": {
        "https": {
          "listen": ["0.0.0.0:10443"],
          "routes": [
            {
              "match": [
                {"proxy_protocol": {}}
              ],
              "handle": [
                {
                  "handler": "proxy_protocol",
                  "timeout": "2s",
                  "allow": ["127.0.0.1/32"]
                }
              ]
            },
            {
              "match": [
                {"tls": {"sni": ["example.com"]}}
              ],
              "handle": [
                {"handler": "tls"},
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10080"]}]
                }
              ]
            },
            {
              "match": [
                {"tls": {"sni": ["localhost"]}}
              ],
              "handle": [
                {"handler": "tls"},
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10080"]}]
                }
              ]
            }
          ]
        }
      }
    },
    "http": {
      "servers": {
        "backend": {
          "allow_h2c": true,
          "listen": ["127.0.0.1:10080"],
          "routes": [
            {
              "handle": [{
                "handler": "static_response",
                "status_code": "200",
                "body": "Hello World\n",
                "headers": {
                  "Content-Type": ["text/plain"]
                }
              }]
            }
          ]
        }
      }
    }
  }
}

This config in combination with your fix will cause Caddy to hang.

But you were right that the buffer is unclean. It looks like the real culprit is the cx.Wrap call in the proxy protocol handler https://github.com/mholt/caddy-l4/blob/c46c253c62e31bec451035cadd22a2349700e2cc/modules/l4proxyprotocol/handler.go#L164, which passes cx.buf directly. Thus, for the next matcher, PROX is still in the buffer, although the proxy protocol header was already consumed. This also explains why removing the matcher from my original config works: cx.buf is then ignored because no further record & rewind happens.

A possible fix for this could be a WrapDiscard function that clears cx.buf before wrapping, and using that in the proxy protocol handler.

func (cx *Connection) WrapDiscard(conn net.Conn) *Connection {
	cx.buf.Reset()
	return &Connection{
		Conn:    conn,
		Context: cx.Context,
		buf:     cx.buf,
	}
}

@ydylla Thanks for verifying the fix! I’ll try to open a PR.

While writing tests for a rewritten but now obsolete proxy protocol handler, I noticed that it seems impossible to create a Caddy context with a logger attached to it.

I encountered the same problem while writing tests for my custom module ratelimit; my workaround was to define a separate function provision():

// Provision implements caddy.Provisioner.
func (rl *RateLimit) Provision(ctx caddy.Context) (err error) {
	rl.logger = ctx.Logger(rl)
	return rl.provision()
}

func (rl *RateLimit) provision() (err error) {
	...
}

And then use provision() in the test code.
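A test can then call provision() directly and skip the logger setup entirely. A minimal sketch (test name and fields are made up):

func TestProvision(t *testing.T) {
	rl := &RateLimit{} // set whatever config fields the test needs
	// Bypass Provision(ctx) and therefore the ctx.Logger(...) call
	// that requires a fully initialized caddy.Context.
	if err := rl.provision(); err != nil {
		t.Fatalf("provision() failed: %v", err)
	}
}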

@ydylla @RussellLuo Ah, dang, of course it’d be a little trickier than previously thought. 😦 I don’t have much in the way of help or insights right now, but I want to say that I really appreciate the thoughtful consideration going into this while I’m busy with other things. Thank you for helping make the project better and for the great contributions that will help a lot of other people!

Basically, according to my understanding, I think there are two consuming modes: matcher and handler.

Kind of! I guess I haven’t named them. If you think of it that way, there might be 2 sets of 2 modes actually:

  • Recording and not recording
  • Reading from buffer and reading directly from stream

These two sets are orthogonal; i.e. you can have all the combinations: recording and reading from buffer, or recording and reading directly from stream, or not recording and reading from buffer, or not recording and reading directly from stream.

So yeah, basically, recording happens when we are running a matcher and need to rewind. We don’t know or care where the bytes come from (buffer or stream), but we just need to pretend as if the reading of those bytes “never happened”, i.e. keep the stream in a pristine state for future handlers.

Reading from the buffer happens when the buffer is non-empty. Once it is empty, we discard the buffer (I think I do that merely as a signal that it is empty or that we don’t need to read from it; I can’t recall whether there is any other significance to setting it to nil) and then read directly from the stream.
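To make these two orthogonal modes concrete, here is a simplified standalone model (illustrative only, not the actual layer4.Connection code; its rewind follows the consume-on-replay behavior discussed in this thread):

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// recordableReader is a simplified stand-in for the connection's read path.
// Recording (so a matcher's reads can be rewound) and reading from the buffer
// vs. directly from the stream are independent of each other.
type recordableReader struct {
	src       io.Reader     // underlying stream
	buf       *bytes.Buffer // recorded bytes
	bufReader io.Reader     // replay source after a rewind
	recording bool
}

func (r *recordableReader) Read(p []byte) (int, error) {
	// Reading from the buffer: replay previously recorded bytes first.
	if r.bufReader != nil {
		n, err := r.bufReader.Read(p)
		if err == io.EOF {
			r.bufReader = nil // replay exhausted, fall through to the stream
			err = nil
		}
		if n > 0 {
			return n, err
		}
	}
	// Reading directly from the stream.
	n, err := r.src.Read(p)
	// Recording is orthogonal: remember the bytes so they can be replayed later.
	if r.recording {
		r.buf.Write(p[:n])
	}
	return n, err
}

func (r *recordableReader) record() { r.recording = true }

func (r *recordableReader) rewind() {
	r.recording = false
	r.bufReader = r.buf // replaying actually consumes the recorded bytes
}

func main() {
	r := &recordableReader{src: strings.NewReader("HELLO world"), buf: new(bytes.Buffer)}

	// A "matcher" peeks at the first 5 bytes while recording...
	r.record()
	peek := make([]byte, 5)
	io.ReadFull(r, peek)
	r.rewind() // ...and rewinds, so the stream looks untouched.

	// A "handler" then sees the stream from the beginning.
	all, _ := io.ReadAll(r)
	fmt.Printf("matcher peeked %q, handler read %q\n", peek, all)
}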

Hope that helps. Also have a lot of hope for your fix, Luo!

Thanks for continuing to work on this!

Ideally, I think we’d better put as few constraints as possible on implementers. Basically, according to my understanding, I think there are two consuming modes: matcher and handler. In matcher mode, we must read from cx.bufReader first (without discarding any bytes), and then record anything read from the underlying connection into cx.buf. In handler mode, we must also read from cx.bufReader, but actually consume those bytes, and then do no recording.

Based on the above analysis, I think this might be a possible solution:

// record starts recording the stream into cx.buf.
func (cx *Connection) record() {
	cx.recording = true
+	cx.bufReader = bytes.NewReader(cx.buf.Bytes()) // Do not discard bytes
}

// rewind stops recording and creates a reader for the
// buffer so that the next reads from an associated
// recordableConn come from the buffer first, then
// continue with the underlying conn.
func (cx *Connection) rewind() {
	cx.recording = false
-	cx.bufReader = bytes.NewReader(cx.buf.Bytes())
+	cx.bufReader = cx.buf // actually consume bytes
}

I have tested all the example configs (caddy.json, tls.json, two-matchers.json) and I think it works.

@ydylla Could you please give this a try?

Hi, sadly this is still not really fixed. Locally all the example configs work fine for me (nearly always), but with my real config on an Ubuntu server there is a high rate of connections that still fail.

It’s the bufio.NewReader in the proxy protocol matcher, which sometimes reads the full proxy protocol header into its internal buffer (4096 bytes by default). If this happens, the reads from the proxy protocol handler do not trigger the over-read, and thus cx.buf is not reset, causing the following tls matcher to produce an invalid cx.buf again and the TLS handshake to fail.

Using buffered readers on top of cx.buf makes managing/remembering the true consumed byte offset difficult. I experimented a bit with implementing io.Seeker for layer4.Connection, but it’s not really usable yet. I did this because the http matcher, for example, also uses a buffered reader, and we probably cannot avoid using them. My hope was that matchers & handlers could then use Seek on the connection to place the offset at the correct position again after they are done. With a byte-counting reader and bufio.Reader.Buffered() it is possible to calculate the number of actually consumed bytes (https://go.dev/play/p/Rxwcwhai4Gl), as sketched below. With that known, all handlers or matchers that used a buffered reader could call Seek on the connection to go back to the correct position.
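Here is a small standalone example of that calculation (similar in spirit to the playground link above; not caddy-l4 code):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
)

// countingReader counts how many bytes were pulled from the underlying source.
type countingReader struct {
	r io.Reader
	n int
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	c.n += n
	return n, err
}

func main() {
	// Pretend this is the connection: a proxy protocol v1 header followed by more data.
	src := bytes.NewReader([]byte("PROXY TCP4 127.0.0.1 127.0.0.1 12345 443\r\nTLS handshake bytes..."))

	cr := &countingReader{r: src}
	br := bufio.NewReader(cr) // 4096-byte internal buffer by default

	// Consume only the header line, like a proxy protocol matcher would.
	line, _ := br.ReadString('\n')

	// bufio may have pulled far more from the source than was logically consumed.
	consumed := cr.n - br.Buffered()
	fmt.Printf("pulled from source: %d, still buffered: %d, actually consumed: %d (header is %d bytes)\n",
		cr.n, br.Buffered(), consumed, len(line))
}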

The quick fix I am currently using is a rewritten proxy proto matcher which does not use a buffered reader at all. Instead, it reads exactly 12 bytes. With that my fix from #56 is always triggered for the particular config I am using.

The rewritten proxy proto matcher could also help with other configs. I will open a PR with it, maybe you want to merge it.

Sorry, I think my explanation for the above screenshot was not good; maybe short text form is better 😄

  1. The proxy_protocol matcher writes PROX to cx.buf.

  2. The proxy_protocol handler reads the buffer and consumes the header; cx.buf still contains PROX.

  3. The tls matcher does not read from cx.buf, since cx.bufReader is nil. It records the TLS handshake bytes into cx.buf, which still contains PROX because Wrap passed the reference. Now cx.buf contains PROX + TLS handshake 💥

I don’t think there exists any scenario where this behavior would be correct.

Right - matchers should not consume the buffer, but the handlers should.

does putting cx.buf.Reset() in Wrap() fix the issue on all config samples so far?

Yes, all configs in this issue work with that. I also think this is the correct behavior for the tls and proxy_protocol handlers (the only ones that use Wrap right now). The reset happens after they have consumed their bytes of the connection, so they still saw the buffer content.
In other words, they unwrapped a layer, and after that the matcher buffer should be clean for the next matchers.

Here is a screenshot of the broken buffer that a tls matcher produces when it runs after a proxy_protocol matcher and handler were executed, because cx.buf was not empty. The PROX at the beginning is exactly what the proxy_protocol matcher matched, i.e. wrote into the buffer, followed by the start of the TLS handshake. Notice that the remainder of the proxy protocol header is missing; it was correctly consumed.

[screenshot: broken-buffer]

Matching after a consuming handler should start with an empty cx.buf, not with the remaining content of a previous matcher.

After some analysis, I’m afraid there’s something wrong with the current rewinding mechanism.

Analysis

With the original JSON config @ydylla provided, by adding a log in Connection.rewind as below:

func (cx *Connection) rewind() {
	cx.recording = false
	cx.bufReader = bytes.NewReader(cx.buf.Bytes())
	fmt.Printf("[rewind] remaining buf bytes (first 10): [% x]\n", cx.buf.Bytes()[:10]) // add a log here
}

I got the following logs after running the cURL command:

[rewind] remaining buf bytes (first 10): [50 52 4f 58 00 00 00 00 00 00]
2022/04/11 08:18:49.924	DEBUG	layer4.handlers.proxy_protocol	received the PROXY header	{"remote": "127.0.0.1:63751", "local": "127.0.0.1:10443"}
2022/04/11 08:18:49.925	DEBUG	layer4.matchers.tls	matched	{"remote": "127.0.0.1:63751", "server_name": "localhost"}
[rewind] remaining buf bytes (first 10): [50 52 4f 58 16 03 01 00 c1 01]
2022/04/11 08:18:49.925	ERROR	layer4	handling connection	{"error": "tls: first record does not look like a TLS handshake"}
2022/04/11 08:18:49.925	DEBUG	layer4	connection stats	{"remote": "127.0.0.1:63751", "read": 242, "written": 0, "duration": 0.001051119}

Note that in the first rewinding, 50 52 4f 58 is the string “PROX” in hexadecimal format, which was read (also recorded in cx.buf) and checked by the proxy_protocol matcher here:

https://github.com/mholt/caddy-l4/blob/c46c253c62e31bec451035cadd22a2349700e2cc/modules/l4proxy/matcher.go#L49-L56

As we can see, in the second rewinding — triggered by the tls matcher — the prior bytes 50 52 4f 58 (i.e. “PROX”) still remained in cx.buf. Obviously, 50 52 4f 58 16 03 01 00 c1 01 is exactly the byte sequence that caused the error "tls: first record does not look like a TLS handshake".

Possible solution

I think the root cause is that in the current implementation:

https://github.com/mholt/caddy-l4/blob/c46c253c62e31bec451035cadd22a2349700e2cc/layer4/connection.go#L139

reading from cx.bufReader does not advance the cx.buf.off.
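As a standalone illustration of that difference (not caddy-l4 code): a bytes.Reader created over buf.Bytes() replays a snapshot and leaves the bytes.Buffer untouched, while reading from the *bytes.Buffer itself consumes its contents.

package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	var buf bytes.Buffer
	buf.WriteString("PROXY...")

	// A bytes.Reader over a snapshot of the buffer: reading it does NOT
	// advance the buffer's own read offset.
	snapshot := bytes.NewReader(buf.Bytes())
	io.ReadAll(snapshot)
	fmt.Println("after reading the snapshot, buf.Len() =", buf.Len()) // still 8

	// Reading from the *bytes.Buffer itself actually consumes the bytes.
	io.ReadAll(&buf)
	fmt.Println("after reading the buffer, buf.Len() =", buf.Len()) // 0
}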

By changing rewind as follows (omitting the log):

func (cx *Connection) rewind() {
	cx.recording = false
- 	cx.bufReader = bytes.NewReader(cx.buf.Bytes())
+	cx.bufReader = cx.buf
}

everything works as expected:

[rewind] remaining buf bytes (first 10): [50 52 4f 58 00 00 00 00 00 00]
2022/04/11 08:21:32.130	DEBUG	layer4.handlers.proxy_protocol	received the PROXY header	{"remote": "127.0.0.1:63867", "local": "127.0.0.1:10443"}
2022/04/11 08:21:32.131	DEBUG	layer4.matchers.tls	matched	{"remote": "127.0.0.1:63867", "server_name": "localhost"}
[rewind] remaining buf bytes (first 10): [16 03 01 00 c1 01 00 00 bd 03]
2022/04/11 08:21:32.131	DEBUG	tls.handshake	choosing certificate	{"identifier": "localhost", "num_choices": 1}
2022/04/11 08:21:32.131	DEBUG	tls.handshake	default certificate selection results	{"identifier": "localhost", "subjects": ["localhost"], "managed": true, "issuer_key": "local", "hash": "e860f43cff99c265bd12c8d2b9247809c2f182c533e9c5cc64ea352ff98dc0c1"}
2022/04/11 08:21:32.131	DEBUG	tls.handshake	matched certificate in cache	{"subjects": ["localhost"], "managed": true, "expiration": "2022/04/11 19:16:48.000", "hash": "e860f43cff99c265bd12c8d2b9247809c2f182c533e9c5cc64ea352ff98dc0c1"}
2022/04/11 08:21:32.141	DEBUG	layer4.handlers.tls	terminated TLS	{"remote": "127.0.0.1:63867", "server_name": "localhost"}
2022/04/11 08:21:32.142	DEBUG	layer4.handlers.proxy	dial upstream	{"remote": "127.0.0.1:63867", "upstream": "127.0.0.1:10080"}
2022/04/11 08:21:32.145	DEBUG	layer4	connection stats	{"remote": "127.0.0.1:63867", "read": 507, "written": 1376, "duration": 0.015388055}

It’s weird that this problem goes away if either matcher (proxy_protocol or tls) is removed. For example, with the following config:

{
  "admin": {
    "disabled": true
  },
  "logging": {
    "logs": {
      "default": {"level":"DEBUG", "encoder": {"format":"console"}}
    }
  },
  "apps": {
    "tls": {
      "certificates": {
        "automate": ["localhost"]
      },
      "automation": {
        "policies": [{
          "subjects": ["localhost"],
          "issuers": [{
            "module": "internal"
          }]
        }]
      }
    },
    "layer4": {
      "servers": {
        "https": {
          "listen": ["0.0.0.0:10443"],
          "routes": [
            {
              "handle": [
                {
                  "handler": "proxy_protocol",
                  "timeout": "2s",
                  "allow": ["127.0.0.1/32"]
                }
              ]
            },
            {
              "match": [
                {"tls": {"sni": ["localhost"]}}
              ],
              "handle": [
                {"handler": "tls"},
                {
                  "handler": "proxy",
                  "upstreams": [{"dial": ["127.0.0.1:10080"]}]
                }
              ]
            }
          ]
        }
      }
    },
    "http": {
      "servers": {
        "backend": {
          "allow_h2c": true,
          "listen": ["127.0.0.1:10080"],
          "routes": [
            {
              "handle": [{
                "handler": "static_response",
                "status_code": "200",
                "body": "Hello World\n",
                "headers": {
                  "Content-Type": ["text/plain"]
                }
              }]
            }
          ]
        }
      }
    }
  }
}

then everything works properly:

$ curl -v --haproxy-protocol https://localhost:10443
*   Trying 127.0.0.1:10443...
* Connected to localhost (127.0.0.1) port 10443 (#0)
> PROXY TCP4 127.0.0.1 127.0.0.1 56845 10443
* ALPN, offering http/1.1
* TLS 1.2 connection using TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
* Server certificate: X509 Certificate
* Server certificate: Caddy Local Authority - ECC Intermediate
* Server certificate: Caddy Local Authority - 2020 ECC Root
> GET / HTTP/1.1
> Host: localhost:10443
> User-Agent: curl/7.82.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Server: Caddy
< Date: Sun, 10 Apr 2022 12:32:39 GMT
< Content-Length: 12
< 
Hello World
* Connection #0 to host localhost left intact

I still need time to figure out what happened… (maybe in the next few days)