redis: Debian bookworm upgrade renders container unable to start

The changes introduced in 7e4a3dd9d2644458a40700a96620f6f028887a25 have broken our CI pipeline (Travis CI). We began seeing errors as soon as the recent push to the 7.0 tag landed on Docker Hub: https://hub.docker.com/layers/library/redis/7.0/images/sha256-178215249742b63308db1a5373a7c627714c582362f3dcd24b2eec849dc16e67?context=explore

It is worth mentioning that we resolved the issue temporarily by switching to the 7.0-bullseye image, but I still don’t see a reason why the bookworm-based images should be crashing.

Starting a container using the redis:7.0 image produces the following output:

redis_1  | 1:C 15 Jun 2023 09:43:37.054 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 15 Jun 2023 09:43:37.054 # Redis version=7.0.11, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 15 Jun 2023 09:43:37.054 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 15 Jun 2023 09:43:37.055 * monotonic clock: POSIX clock_gettime
redis_1  | 1:M 15 Jun 2023 09:43:37.055 * Running mode=standalone, port=6379.
redis_1  | 1:M 15 Jun 2023 09:43:37.055 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1  | 1:M 15 Jun 2023 09:43:37.055 # Server initialized
redis_1  | 1:M 15 Jun 2023 09:43:37.055 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1  | 1:M 15 Jun 2023 09:43:37.055 # Fatal: Can't initialize Background Jobs.
api_redis_1 exited with code 1

In docker-compose.yml the container is defined as follows:

version: '3.4'
services:
  redis:
    image: redis:7.0
    hostname: redis
    ports:
      - "6379:6379"

Environment:

  • Linux 4.15.0-1098-gcp 111~16.04.1-Ubuntu SMP Tue Apr 13 19:05:08 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Docker version 20.10.7, build f0df350
  • docker-compose version 1.23.2, build 1110ad01
  • MemTotal: 8164804 kB

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 33
  • Comments: 18 (6 by maintainers)

Most upvoted comments

Root cause: Docker's seccomp filtering (via libseccomp) is blocking a newer syscall used by the Debian Bookworm packages/libraries.

libseccomp lets you configure which syscalls a process may make. Docker applies a default seccomp profile to all containers that allows only a known list of syscalls and blocks everything else, so newer syscalls that are not yet known to libseccomp or Docker get blocked.

  • verify that it is libseccomp by running the Bookworm-based image with --security-opt seccomp=unconfined
  • one fix:
    • update libseccomp and docker on the host running the containers
  • one workaround:
    • switch to the *bullseye images (in the redis images, these are now unmaintained/unchanging)
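For the "update libseccomp" fix above, a minimal host-side sanity check can be sketched like this (a hypothetical helper, not from this thread; the 2.5.1 threshold is an assumption for illustration, and the syscall names commonly cited for this era, such as faccessat2 and clone3, are examples rather than a confirmed list):

```shell
#!/bin/sh
# Hypothetical check: is the installed libseccomp version likely new enough
# for bookworm-based images? (Threshold 2.5.1 is an illustrative assumption.)
MIN_VERSION="2.5.1"

libseccomp_ok() {
    installed="$1"
    # sort -V orders version strings; if MIN_VERSION sorts first (or ties),
    # the installed version is >= the minimum.
    lowest=$(printf '%s\n%s\n' "$MIN_VERSION" "$installed" | sort -V | head -n 1)
    [ "$lowest" = "$MIN_VERSION" ]
}

libseccomp_ok "2.5.4" && echo "2.5.4: ok"
libseccomp_ok "2.3.3" || echo "2.3.3: too old, upgrade libseccomp and Docker"
```

On a Debian/Ubuntu host you could feed it the output of `dpkg-query -W -f '${Version}' libseccomp2`; on other distributions, substitute the equivalent package query.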

Seen in our pipelines running 6.x as well

We had the same issue on our side with the redis:6.2 image. Strangely, it happened only on some of our test platforms, the ones with slightly older versions of Docker. In those cases updating Docker fixed the issue. I’m still puzzled as to the root cause.

It seems like there’s some call in the build that requires the latest version of Debian/Docker. That wouldn’t be a problem locally, but in reality the redis Docker image is heavily used in CI pipelines rather than production systems, where it makes sense to stay on latest instead of pinning versions. That also makes the failure more jarring, since all of your builds start failing at once. It would be great to know the root cause so people can go back to using latest.

We’re in the same boat; all our pipelines started to fail yesterday when the latest version was released. We pinned to the bullseye version:

image: redis:7.0.11-bullseye

Ref for anyone else coming to this issue fresh.

If this is something we need to change in our CI pipelines to support bookworm, can this be clarified?

The best short-term solution for you is to pin to the bullseye images explicitly, yes. However, they are no longer actively maintained/supported, meaning you will not receive future Debian or Redis updates, so you do want to figure out the issues with your environment and get yourself upgraded to the bookworm-based images.

My issue is that there is no clarity on the requirements for making the new image work. I’m happy to make updates as necessary, but nothing here tells me the root cause.

Hi, it would be helpful to know more details here: is the best short-term solution to pin to bullseye? Is bullseye going to break in the next release, so should we also pin a specific version?

Using recent AWS AMIs, docker-compose is no longer able to bring up redis:latest, which (per the docs) is the de facto image. I would argue this is a fairly jarring change.

version: "3.2"

services:
  redis:
    image: redis
    ports:
      - 6379

If you are having trouble, try using the redis:*-bullseye tags. If that fixes it, then you probably need to update Docker and libseccomp on the host (and possibly the kernel version, but try the others first). Newer base OSes use newer system calls, and an older libseccomp can block them because they are unknown to it.
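The diagnosis above can be condensed into a short command sketch (assumes a Debian/Ubuntu host with docker-ce installed; it is not runnable outside such a host, and the package names are assumptions for other setups):

```shell
# 1. Reproduce with seccomp filtering disabled. If the container now starts,
#    the default seccomp profile (via an old libseccomp) is blocking a syscall.
docker run --rm --security-opt seccomp=unconfined redis:7.0 redis-server --version

# 2. Inspect what the host currently has.
docker --version
dpkg -s libseccomp2 | grep '^Version:'

# 3. Upgrade both on the host, then retry with the default profile.
sudo apt-get update
sudo apt-get install --only-upgrade libseccomp2 docker-ce
docker run --rm redis:7.0 redis-server --version
```

Note that `seccomp=unconfined` is a diagnostic step only, not a fix to ship: it disables the syscall filter entirely.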

I’ve tried updating to the latest docker version to no avail. I think it must be related to a specific kernel version.

The bullseye images are working fine.

We also noticed this. The host is running Ubuntu 18.04 with a 4.15.0-212-generic kernel. I assume it has something to do with the kernel version or the base OS?