containerd: Inconsistent state on pod termination

Description

We just hit an issue with containerd: an application was killed several times by the OOM killer because it reached its cgroup memory limit. Containers on the host are now in a really inconsistent state:

  • reported as running by crictl ps
  • crictl exec fails with "cannot exec in a stopped state: unknown"
  • ctr -n k8s.io t ls hangs with no output
  • ps auxf shows many containerd-shim processes with no child process (or sometimes only the pause container)
  • runc --root /run/containerd/runc/k8s.io list shows some containers in the stopped state
  • the associated containerd-shim process is still running without any child

It seems that sometimes, when a container process is OOM-killed after reaching its cgroup memory limit, the containerd state becomes inconsistent. Once this has happened it's no longer possible to delete containers. When trying to delete a pod, the containerd logs show:

  • containerd tries to stop it (StopContainer)
  • stop container xx timed out
  • then error="an error occurs during waiting for container xxx to stop: wait container xxx is cancelled"
  • the container is stopped but not removed

Steps to reproduce the issue:

  1. Run kubernetes using containerd as CRI
  2. Create a pod with a memory limit
  3. Allocate more memory than the limit
  4. After several OOM kills, it should no longer be possible to interact with containerd
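The steps above can be sketched with a pod spec like the following. This is a hypothetical repro, not taken from the report: the image, pod name, and sizes are illustrative, and any image providing a memory-allocating tool such as stress would do. The container requests twice its cgroup limit, so the kernel OOM-kills it and the kubelet restarts it repeatedly.

```yaml
# Hypothetical repro pod: the container allocates more memory than its
# cgroup limit, so it is OOM-killed and restarted in a loop.
apiVersion: v1
kind: Pod
metadata:
  name: oom-repro
spec:
  restartPolicy: Always
  containers:
  - name: hog
    image: polinux/stress          # illustrative; any image with a memory hog works
    resources:
      limits:
        memory: "64Mi"             # becomes the cgroup memory limit
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "128M"]   # allocate 2x the limit
```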

Describe the results you received: containerd seems to be stuck in an inconsistent state and is no longer able to fulfill CRI requests

Describe the results you expected: containerd should clean up oom killed containers and remain consistent

Output of containerd --version:

containerd --version
containerd github.com/containerd/containerd v1.1.0 209a7fc3e4a32ef71a8c7b50c68fc8398415badf

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 25 (11 by maintainers)

Most upvoted comments

@crosbymichael @lbernail and I spent some time debugging this issue last week, and we found the suspicious stack dump.

containerd stack dump:

goroutine 13440572 [select, 4083 minutes]:
github.com/containerd/containerd/vendor/github.com/stevvooe/ttrpc.(*Client).dispatch(0xc420789ce0, 0x557f8a4d8300, 0xc4207d5ce0, 0xc420fd16c0, 0xc420635fe0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/stevvooe/ttrpc/client.go:102 +0x24c
github.com/containerd/containerd/vendor/github.com/stevvooe/ttrpc.(*Client).Call(0xc420789ce0, 0x557f8a4d8300, 0xc4207d5ce0, 0x557f89d82d53, 0x25, 0x557f89d60749, 0xd, 0x557f8a3f4b40, 0xc420849ca0, 0x557f8a3ece20, ...)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/stevvooe/ttrpc/client.go:73 +0x15d
github.com/containerd/containerd/linux/shim/v1.(*shimClient).DeleteProcess(0xc42000e138, 0x557f8a4d8300, 0xc4207d5ce0, 0xc420849ca0, 0x557f8a3768a0, 0x557f8a3b4880, 0x0)
        /home/travis/gopath/src/github.com/containerd/containerd/linux/shim/v1/shim.pb.go:1761 +0xbf
github.com/containerd/containerd/linux.(*Task).DeleteProcess(0xc4201a8540, 0x557f8a4d8300, 0xc4207d5ce0, 0xc420b25b40, 0x40, 0x557f8a4e5640, 0xc4201a8540, 0x0)
        /home/travis/gopath/src/github.com/containerd/containerd/linux/task.go:275 +0x8a
github.com/containerd/containerd/services/tasks.(*local).DeleteProcess(0xc420235290, 0x7f7d7ed9fe88, 0xc4207d5ce0, 0xc420635de0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/github.com/containerd/containerd/services/tasks/local.go:224 +0xf1
github.com/containerd/containerd.(*process).Delete(0xc420eb64e0, 0x557f8a4d8300, 0xc4207d5ce0, 0x0, 0x0, 0x0, 0xc42008a8b8, 0xc42008a8c0, 0x24)
        /home/travis/gopath/src/github.com/containerd/containerd/process.go:204 +0x37c
github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server.(*criService).execInContainer.func2(0x557f8a4e1a20, 0xc420eb64e0, 0xc420b25b40, 0x40, 0xc42069e100, 0x40)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server/container_execsync.go:134 +0xd4
github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server.(*criService).execInContainer(0xc420494d20, 0x7f7d7ed44a60, 0xc42104bd00, 0xc42069e100, 0x40, 0xc420791d20, 0x1, 0x1, 0x7f7d745c43c8, 0xc420314b20, ...)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server/container_execsync.go:193 +0xe94
github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server.(*streamRuntime).Exec(0xc4202420d8, 0xc4208a4cc0, 0x40, 0xc420791d20, 0x1, 0x1, 0x7f7d745c43c8, 0xc420314b20, 0x557f8a4c77c0, 0xc4205b3720, ...)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/containerd/cri/pkg/server/streaming.go:73 +0x16c
github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming.(*criAdapter).ExecInContainer(0xc420388530, 0x0, 0x0, 0x0, 0x0, 0xc4208a4cc0, 0x40, 0xc420791d20, 0x1, 0x1, ...)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming/server.go:365 +0xf2
github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/remotecommand.ServeExec(0x557f8a4d7040, 0xc420c38a80, 0xc420911000, 0x557f8a4bfc20, 0xc420388530, 0x0, 0x0, 0x0, 0x0, 0xc4208a4cc0, ...)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/remotecommand/exec.go:52 +0x220
github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming.(*server).serveExec(0xc4200d8000, 0xc420d7c180, 0xc421407080)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming/server.go:277 +0x1bf
github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming.(*server).(github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming.serveExec)-fm(0xc420d7c180, 0xc421407080)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/k8s.io/kubernetes/pkg/kubelet/server/streaming/server.go:127 +0x40
github.com/containerd/containerd/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc4200d8090, 0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/emicklei/go-restful/container.go:277 +0x9bb
github.com/containerd/containerd/vendor/github.com/emicklei/go-restful.(*Container).(github.com/containerd/containerd/vendor/github.com/emicklei/go-restful.dispatch)-fm(0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/emicklei/go-restful/container.go:120 +0x4a
net/http.HandlerFunc.ServeHTTP(0xc4203886c0, 0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/net/http/server.go:1947 +0x46
net/http.(*ServeMux).ServeHTTP(0xc420419f80, 0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/net/http/server.go:2337 +0x132
github.com/containerd/containerd/vendor/github.com/emicklei/go-restful.(*Container).ServeHTTP(0xc4200d8090, 0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/gopath/src/github.com/containerd/containerd/vendor/github.com/emicklei/go-restful/container.go:292 +0x4f
net/http.serverHandler.ServeHTTP(0xc4204105b0, 0x557f8a4d7040, 0xc420c38a80, 0xc420911000)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/net/http/server.go:2694 +0xbe
net/http.(*conn).serve(0xc4212da640, 0x557f8a4d8280, 0xc42104b880)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/net/http/server.go:1830 +0x653
created by net/http.(*Server).Serve
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/net/http/server.go:2795 +0x27d

containerd-shim stack dump:

goroutine 1052 [semacquire, 388 minutes]:
sync.runtime_Semacquire(0xc42020a3cc)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc42020a3c0)
        /home/travis/.gimme/versions/go1.10.3.linux.amd64/src/sync/waitgroup.go:129 +0x72
github.com/containerd/containerd/linux/proc.(*execProcess).delete(0xc42020a3c0, 0x6c3840, 0xc420054540, 0xc420045c01, 0x45cf2d)
        /home/travis/gopath/src/github.com/containerd/containerd/linux/proc/exec.go:96 +0x42
github.com/containerd/containerd/linux/proc.(*execStoppedState).Delete(0xc42000c100, 0x6c3840, 0xc420054540, 0x0, 0x0)
        /home/travis/gopath/src/github.com/containerd/containerd/linux/proc/exec_state.go:173 +0x98
...

Our current theory is that this can happen if an exec process forks another process, and the new process keeps the exec process's IO open after the exec process dies: the shim's delete path waits on a WaitGroup for the IO copiers to finish, which only happens once the pipes reach EOF. I haven't reproduced this yet, but based on the use case described by @lbernail, it is plausible.
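The theory can be demonstrated outside of containerd with a small sketch (an assumption about the mechanism, not the actual shim code): a pipe only reaches EOF when every holder of the write end closes it, so if the exec'd process forks a child that inherits stdout, the copier blocks past the parent's exit, just like the WaitGroup.Wait in the shim stack above.

```go
package main

// Sketch of the suspected failure mode: sh exits immediately, but the
// backgrounded sleep inherits the write end of the stdout pipe. The
// reader therefore sees no EOF until the grandchild also exits, and any
// goroutine copying the IO (plus a WaitGroup waiting on it) stays blocked.

import (
	"fmt"
	"io"
	"os"
	"os/exec"
	"time"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}

	// The backgrounded sleep holds a dup of the pipe's write end for 2s.
	cmd := exec.Command("sh", "-c", "sleep 2 & exit 0")
	cmd.Stdout = w
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	cmd.Wait() // returns as soon as sh exits
	w.Close()  // drop the parent's own copy of the write end

	done := make(chan struct{})
	go func() {
		io.Copy(io.Discard, r) // blocks until the orphaned sleep exits
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("EOF: no one else held the pipe open")
	case <-time.After(1 * time.Second):
		fmt.Println("copy still blocked: grandchild holds stdout open")
	}
}
```

After one second the copy goroutine is still blocked even though the exec'd sh is long gone, which matches the hung execProcess.delete in the shim stack dump.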

Ya, i’ll have to look into this

I’m using GKE, which is using 1.1.0.

On Fri, Jul 6, 2018 at 6:49 PM Mike Brown notifications@github.com wrote:

@Random-Liu https://github.com/Random-Liu you on 1.1.0? may have been a delta or two in that code path.
