conmon: segmentation fault with podman and ocrmypdf
I originally filed the issue in ocrmypdf: https://github.com/ocrmypdf/OCRmyPDF/issues/889
Essentially, when doing
podman run --network none --rm -i jbarlow83/ocrmypdf:v13.2.0 --jobs 1 -l deu - - <tmp.pdf >out.pdf
the output hangs, and I see a segfault in conmon in dmesg:
[24408.182063] conmon[100696]: segfault at a8a000 ip 00007f025e26d9b5 sp 00007ffcc9e773e8 error 4 in libc.so.6[7f025e11e000+176000]
[24408.182091] Code: fd 74 5f 41 c5 fd 74 67 61 c5 ed eb e9 c5 dd eb f3 c5 cd eb ed c5 fd d7 cd 85 c9 75 48 48 83 ef 80 48 81 ea 80 00 00 00 77 cb <c5> fd 74 4f 01 c5 fd d7 c1 66 90 85 c0 75 5c 83 c2 40 0f 8f c3 00
Address within libc: 0x00007f025e26d9b5 - 0x7f025e11e000 = 0x14F9B5
$ addr2line -e /usr/lib/libc.so.6 -fCi 0x14F9B5
__GI_netname2host
:?
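For context on the arithmetic (my addition, not part of the original report): in the dmesg line, the value after "ip" is the faulting instruction pointer and the bracketed value after "libc.so.6" is the base address of the libc mapping, so the offset fed to addr2line is just their difference. A throwaway check of that subtraction:

    #include <stdio.h>

    int main(void)
    {
        unsigned long ip   = 0x00007f025e26d9b5UL; /* faulting instruction pointer from dmesg */
        unsigned long base = 0x00007f025e11e000UL; /* libc.so.6 mapping base from dmesg */
        printf("offset into libc: %#lx\n", ip - base); /* prints 0x14f9b5 */
        return 0;
    }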
Running the same command with --log-level=debug, I see:
DEBU[0270] Sending signal 2 to container b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f
2022-01-10T16:46:08.000913691Z: open pidfd: No such process
ERRO[0275] No exit file for container b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f found: timed out waiting for file /run/user/1000/libpod/tmp/exits/b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f: internal libpod error
ERRO[0275] Error forwarding signal 2 to container b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f: error sending signal to container b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f: `/usr/bin/crun kill b2799f12bc64a5e78947be68ec1c9c37daaa0de03a47c4de4531e87f32d7551f 2` failed: exit status 1
Searching the web for this brought up a potential fix at https://issueexplorer.com/issue/containers/conmon/251; of that suggestion, only the first part made it into your code.
It is also worth noting that this does not happen every time, which is probably because the output of ocrmypdf is not deterministic. Additionally, when it does happen, podman logs shows some content that should have gone to stdout on stderr (and vice versa).
This is executed on:
- Fedora 35
- conmon version 2.0.30
- podman version 3.4.4
- jbarlow83/ocrmypdf:v13.2.0
- The input file contains copyrighted material and can be made available to anyone looking into this (but I do not want to simply upload it here)
About this issue
- State: closed
- Created 2 years ago
- Comments: 20 (4 by maintainers)
Commits related to this issue
- cluster-up: Avoid a known comon issue when using podman There is currently a known conmon issue [1] when podman based containers dump large amounts of data to stdout. gocli would previously do this w... — committed to lyarwood/kubevirtci by lyarwood 2 years ago
- cluster-up: Avoid a known conmon issue when using podman There is currently a known conmon issue [1] when podman based containers dump large amounts of data to stdout. gocli would previously do this ... — committed to lyarwood/kubevirtci by lyarwood 2 years ago
- cluster-up: Avoid a known conmon issue when using podman (#743) There is currently a known conmon issue [1] when podman based containers dump large amounts of data to stdout. gocli would previously d... — committed to kubevirt/kubevirtci by lyarwood 2 years ago
- logging: do not read more that the buf size msg_len doesn't take into account NUL bytes that could be present in the buffer, while g_strdup_printf("MESSAGE=%s%s", partial_buf, buf) does and would sto... — committed to giuseppe/conmon by giuseppe 2 years ago
- logging: do not read more that the buf size msg_len doesn't take into account NUL bytes that could be present in the buffer, while g_strdup_printf("MESSAGE=%s%s", partial_buf, buf) does and would sto... — committed to containers/conmon by giuseppe 2 years ago
I started using podman recently to run a bunch of fuzz targets and noticed that those fuzz targets somehow managed to crash conmon on the host from inside their containers (which I would have probably reported privately if this issue wasn't already public for more than half a year).

Anyway, as far as I can tell, conmon doesn't take null bytes into account when it calculates msg_len, while g_strdup_printf stops when it sees them, so message can be much shorter than msg_len (depending on where the null bytes are embedded): https://github.com/containers/conmon/blob/9e416a2cf4c37bcdb4ce5955ab1e2d7763ee0434/src/ctr_logging.c#L296-L300

It can be reproduced by building conmon with ASan and running the following command:
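The reproduction command is not quoted here, but the mismatch itself can be illustrated with an invented, stand-alone program (this is not conmon's code; the buffer contents and variable names are made up for the example): a byte count taken from the raw read includes everything after an embedded NUL byte, while g_strdup_printf stops at that NUL, so a writer that later trusts the raw count reads past the end of the formatted string.

    #include <glib.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Pretend this is what a read() from the container's stdout placed
         * into the log buffer: 12 bytes with a NUL byte embedded after "hello". */
        char buf[] = {'h', 'e', 'l', 'l', 'o', '\0', 'w', 'o', 'r', 'l', 'd', '\n'};
        size_t msg_len = sizeof(buf); /* raw byte count: 12 */

        /* g_strdup_printf treats buf as a C string and stops at the first NUL,
         * so the allocation is only strlen("MESSAGE=hello") + 1 = 14 bytes. */
        char *message = g_strdup_printf("MESSAGE=%s", buf);

        printf("msg_len (raw bytes) = %zu\n", msg_len);          /* 12 */
        printf("strlen(message)     = %zu\n", strlen(message));  /* 13 */

        /* A writer that sends strlen("MESSAGE=") + msg_len = 20 bytes starting
         * at `message` would read 6 bytes past the end of the allocation --
         * the kind of out-of-bounds read an ASan build flags. */

        g_free(message);
        return 0;
    }

Compiled with something like gcc demo.c $(pkg-config --cflags --libs glib-2.0), it shows the two lengths disagreeing whenever the buffer carries embedded NUL bytes, which lines up with the logging fix committed above.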