containerd: 'failed to reserve container name'

Description

Hi!

We are running containerd on GKE with pretty much all defaults. A dozen nodes and a few hundred pods. Plenty of free memory and disk.

In the last week or so, many pods have started failing with the failed to reserve container name error. I do not recall any specific changes to the cluster or to the containers themselves.

Any help will be greatly appreciated!

Steps to reproduce the issue: I have no clue how to reproduce this issue reliably.

The cluster has nothing special and the deployment is straightforward. The only thing that might be relevant is that our images are quite large, around 3 GB.

There are a few more details here: https://serverfault.com/questions/1036683/gke-context-deadline-exceeded-createcontainererror-and-failed-to-reserve-contai

Describe the results you received:

2020-10-07T08:01:45Z Successfully assigned default/apps-abcd-6b6cb5876b-nn9md to gke-bap-mtl-1-preemptible-e2-s4-e6a8ddb4-ng3v I 
2020-10-07T08:01:50Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:16:45Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:18:45Z Error: context deadline exceeded W 
2020-10-07T08:18:45Z Container image "redis:4.0-alpine" already present on machine I 
2020-10-07T08:18:53Z Created container redis I 
2020-10-07T08:18:53Z Started container redis I 
2020-10-07T08:18:53Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:02Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:02Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:03Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:20Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:20Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:21Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:34Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:34Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:35Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:44Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:44Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:54Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:08Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:08Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:20:18Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:30Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:30Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:21:19Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:26:35Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:31:36Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:36:26Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:41:18Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:46:41Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 

Describe the results you expected: Live a happy life, error free 😃

Output of containerd --version:

containerd github.com/containerd/containerd 1.3.2 ff48f57fc83a8c44cf4ad5d672424a98ba37ded6

Any other relevant information:

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 28
  • Comments: 50 (14 by maintainers)


Most upvoted comments

How can we get someone from GKE on this thread?

Hi Matti, I am from GKE. We are fully aware of this issue and are prioritizing it.

Just had this happen to me on GKE.

Unfortunately, the only working solution is to move back to cos with docker. Amazing to see that this critical bug was opened more than a year ago, and still no fix.

Here is my current analysis. I will keep updating this comment.

Summary

The "failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer request reserving the same container name, like below:

T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side)
T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry)
T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error
T4: 1st CreateContainer request is still in-flight…

It simply indicates that the CreateContainer request is slower than the configurable --runtime-request-timeout (default 2min).
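To make the race concrete, here is a minimal sketch (not the actual containerd code) of how such a name-reservation index behaves: the first in-flight CreateContainer holds the name, so a kubelet retry with the same attempt-0 name fails immediately with a "reserved" error.

package main

import (
	"fmt"
	"sync"
)

// nameIndex is a toy reservation index: one owner per container name.
type nameIndex struct {
	mu    sync.Mutex
	names map[string]string // container name -> ID of the create holding it
}

func newNameIndex() *nameIndex {
	return &nameIndex{names: map[string]string{}}
}

// Reserve fails if another (possibly still in-flight) create owns the name.
func (idx *nameIndex) Reserve(name, id string) error {
	idx.mu.Lock()
	defer idx.mu.Unlock()
	if owner, ok := idx.names[name]; ok {
		return fmt.Errorf("failed to reserve container name %q: name %q is reserved for %q", name, name, owner)
	}
	idx.names[name] = id
	return nil
}

// Release frees the name, e.g. when the owning create finally fails.
func (idx *nameIndex) Release(name string) {
	idx.mu.Lock()
	defer idx.mu.Unlock()
	delete(idx.names, name)
}

func main() {
	idx := newNameIndex()
	name := "web_apps-abcd_default_3dc00fd6_0" // attempt 0, so the suffix never changes

	// T1: the first CreateContainer reserves the name, then stalls on slow disk IO.
	_ = idx.Reserve(name, "8b21a987...")

	// T2/T3: kubelet times out and retries with the same attempt number;
	// the reservation fails immediately because the first create still owns it.
	if err := idx.Reserve(name, "retry"); err != nil {
		fmt.Println(err)
	}
}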

Based on my observation and investigation so far, I have found the following facts.

  1. All the issues originate from slow disk operations (e.g. disk throttling on GKE)
  2. The container and pod will be created successfully, given sufficient time. (Note, this assumes restartPolicy:Always or restartPolicy:OnFailure in PodSpec. Yes, restartPolicy affects the behavior of container creation.)

Mitigation

  1. If pods are failing, consider using restartPolicy:Always or restartPolicy:OnFailure in the PodSpec
  2. Increase the boot disk IOPS (e.g. upgrade disk type or increase disk size)
  3. Switch back to docker

Theory 1

Expected symptom: some pods are up and generate heavy IO, but others are not

Docker has a similar "reserve container name" mechanism to prevent conflicts. However, dockershim handles it differently from the containerd CRI implementation.

err = docker.CreateContainer
if (err == container name conflict) {
    removeErr = docker.RemoveContainer
    if (removeErr == nil) {
        return err
    } else {
        if (removeErr == "Container not found") {
            randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
            return docker.CreateContainer
        } else {
            return err
        }
    } 
}

https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284

In my experiment, it keeps hitting the "randomize the container name" case. This indicates that every Kubelet retry will try a new container name in dockershim. Containerd, however, sticks to a single container name, so all subsequent retries are doomed to fail while the initial request is still in flight.

In conclusion, dockershim retries more aggressively, so docker has a higher chance of creating the container successfully, and much faster, than containerd.

Theory 2

Expected symptom: none of the pods are up.

Containerd has worse image-pull control than docker. For example, it may pull too many images in parallel, which generates more disk IO.

(No code reference found yet)

Reproducing the Problem

Unfortunately, I haven’t found a way to reproduce a scenario in which docker is consistently superior to containerd.

Experiment for Theory 1

Setup:

  • On GKE, Cluster A with 1 docker node (UBUNTU); Cluster B with 1 containerd node (UBUNTU_CONTAINERD)
  • Stress the disk and make it throttle. This can be done with the stress-ng tool (a minimal Go stand-in is sketched after this list).
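A rough Go stand-in for the disk stressor (the experiment itself used stress-ng; the path and sizes below are arbitrary): it writes and fsyncs chunks in a loop to push the boot disk toward its IOPS/throughput limit. Run several copies to throttle the disk harder.

package main

import (
	"crypto/rand"
	"log"
	"os"
)

func main() {
	buf := make([]byte, 4<<20) // 4 MiB per write
	if _, err := rand.Read(buf); err != nil {
		log.Fatal(err)
	}

	f, err := os.Create("/var/tmp/disk-stress.dat")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	for i := 0; ; i++ {
		if _, err := f.Write(buf); err != nil {
			log.Fatal(err)
		}
		if err := f.Sync(); err != nil { // force real disk IO on every chunk
			log.Fatal(err)
		}
		if i%256 == 0 { // cap the file at ~1 GiB, then rewrite from the start
			if _, err := f.Seek(0, 0); err != nil {
				log.Fatal(err)
			}
		}
	}
}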

Execution:

  • Keep creating a new pod every 5 minutes
  • Stop the disk stress after a certain time (e.g. 1h)

Expected Result:

  • In both docker and containerd, all pods are successfully running eventually
  • Docker takes a shorter time

Actual Result:

  • Once the disk throttling is over, all pods run successfully with both docker and containerd
  • During extreme disk throttling, pods cannot be created with either docker or containerd
  • During slight disk throttling, sometimes containerd creates pods faster and sometimes docker does

Experiment for Theory 2

Setup:

  • On GKE, Cluster A with 1 containerd node (COS_CONTAINERD)
  • Use pd-standard and try different boot disk sizes: 10GB, 20GB, 30GB, 50GB, 100GB

Execution:

  • Create pods with multiple images
    • alpine ubuntu python busybox redis node mysql nginx httpd mongo memcached postgres mariadb wordpress influxdb consul rabbitmq debian amazonlinux cassandra

Expected Result

  • Pods take a long time to run successfully
  • "failed to reserve container name" is observed

Actual Result

  • With a 10GB disk, some pods were evicted
  • With a 20GB+ disk, pods ran successfully in a short time without errors

Need Help from Community

  1. Find a way to reproduce a case where docker is superior to containerd in the same environment
  2. Answer the following questionnaire.
  • What is the k8s workload type?
  • Did the pod successfully run eventually?
    • If the pod failed, did you use the restartPolicy:Never?
    • If you switched back to docker node, how long did you wait?
  • On a given node, were (a) all pods stuck on this error or (b) some pods could run while others were stuck?
  • How large were the container images?
  • Are you aware of any of your workloads having heavy disk IOPS?
  • If you switched back to docker, did the pods successfully run in a very short time?

Misc

jotting down some notes here, apologies if it’s lengthy:

Let me try to explain/figure out the reason you got "failed to reserve container name"…

Kubelet tried to create a container that it had already asked containerd to create at least once… when containerd tried the first time, it received a variable in the container create metadata named attempt, and that variable held the default value 0 … containerd then reserved the unique name for attempt 0 that you see in your log (see the _0 at the end of the name) "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0"… something happened causing a context timeout between kubelet and containerd … the kubelet context timeout value is configurable… "--runtime-request-timeout duration, Default: 2m0s". A 2min timeout could happen for any number of reasons… an unusually long garbage collection, a file system hiccup, locked files, deadlocks while waiting, some very expensive init operation occurring on the node for one of your other containers… who knows? That’s why we have/need recovery procedures.

What should have happened is that kubelet should’ve incremented the attempt number (or at least that’s how I see it from this side (the containerd side) of the CRI API), but kubelet did not increment the attempt number, and furthermore containerd was still trying to create the container from the first request… or the create on the containerd side may even be finished at this point; it is possible the timeout only happened on the kubelet side and containerd continued finishing the create, possibly even attempting to return the success result. If containerd had actually failed, it would have deleted the reservation for that container id, as the immediate thing after we reserve the id in containerd is to defer its removal on any error in the create… https://github.com/containerd/containerd/blob/master/pkg/cri/server/container_create.go#L65-L84
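A hedged sketch of that reserve-then-deferred-release flow (the linked container_create.go is the real code; the types and method names here are placeholders, reusing the toy nameIndex from the sketch earlier in this thread):

// criService is a placeholder, not containerd's actual type.
type criService struct {
	nameIndex *nameIndex // toy reservation index from the earlier sketch
}

func (c *criService) CreateContainer(name, id string) (retErr error) {
	if err := c.nameIndex.Reserve(name, id); err != nil {
		return err // surfaces to kubelet as "failed to reserve container name ..."
	}
	// The reservation is released only when the create *fails*. A create that
	// is merely slow (e.g. blocked on throttled disk IO) keeps the name
	// reserved, so kubelet retries with the same attempt-0 name fail fast.
	defer func() {
		if retErr != nil {
			c.nameIndex.Release(name)
		}
	}()

	// Stand-in for the real work: snapshot preparation, spec generation,
	// metadata writes… this is where a throttled disk can stall past
	// kubelet's 2m --runtime-request-timeout.
	return c.doCreate(id)
}

// doCreate is a hypothetical stand-in for the rest of container creation.
func (c *criService) doCreate(id string) error { return nil }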

So ok… skimming over the kubelet code… I believe this is the code that decides what attempt number we are on? https://github.com/kubernetes/kubernetes/blame/master/pkg/kubelet/kuberuntime/kuberuntime_container.go#L173-L292

In my skim… I think I see a window where kubelet will try attempt 0 a second time after the first create attempt fails with a context timeout. But I may be reading the code wrong? @dims @feiskyer @Random-Liu

Is going back to docker really the only option here?

Yes, right now it looks like it.

On GCP, only for a little while longer, though. Just got an email:

[Action Required] Migrate to Containerd node images before GKE v1.24

Support for Docker as a container runtime on Kubernetes nodes will be removed from OSS Kubernetes and GKE starting with v1.24. Please migrate your GKE workloads to Containerd as soon as possible.

I can reproduce this issue (every time) only when scaling up multiple heavy pods (heavy both in image size and in the processes they launch).

Happened to me too on GKE

Same problem here

containerd github.com/containerd/containerd 1.4.6 d71fcd7d8303cbf684402823e425e9dd2e99285d

Amazon EKS 1.21

Bumped into this issue as well. Switching back to cos with docker.

We are also seeing the same issue, GKE with containerd. It does seem to be correlated with starting many pods at once.

Switching from cos_containerd back to cos (docker based) seems to have resolved the situation, at least in the short term.

Same for us once we switched back to cos with docker everything worked

Summary (2022/02)

The "failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer request reserving the same container name, like below:

T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side)
T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry)
T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error
T4: 1st CreateContainer request is still in-flight…

Don’t panic. Given sufficient time, the container and pod will be created successfully, as long as you are using restartPolicy:Always or restartPolicy:OnFailure in PodSpec.

Root Cause and Fix

Slow disk operations (e.g. disk throttling on GKE) are the culprit. Heavy disk IO can come from a number of sources: the user’s disk-heavy workloads, large image pulls, and the containerd CRI implementation itself.

An unnecessary sync-fs operation was found in the CreateContainer stack; it is where CreateContainer gets stuck. The sync-fs is removed in https://github.com/containerd/containerd/pull/6478. Not only does this make CreateContainer return faster, it also reduces the disk IO generated by containerd.
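To see why a single sync on that path hurts so much, here is a small, hedged illustration (not the containerd fix itself; Linux-only, using golang.org/x/sys/unix, with an assumed path): syncfs(2) flushes the entire filesystem, so on a throttled boot disk one call can block for a very long time.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

func main() {
	// Open any path on the filesystem to flush; containerd's root is assumed here.
	f, err := os.Open("/var/lib/containerd")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	start := time.Now()
	if err := unix.Syncfs(int(f.Fd())); err != nil {
		log.Fatal(err)
	}
	// Under heavy IO throttling this can take seconds to minutes, which is why
	// dropping the unnecessary sync makes CreateContainer return much faster.
	fmt.Printf("syncfs took %s\n", time.Since(start))
}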

Please note that there may be other, undiscovered reasons contributing to this problem.

Mitigation

  1. If pods are failing, consider using restartPolicy:Always or restartPolicy:OnFailure in the PodSpec
  2. Increase the boot disk IOPS (e.g. upgrade disk type or increase disk size)
  3. Upgrade to a containerd with this patch: https://github.com/containerd/containerd/pull/6478, which will be available in 1.6+ and 1.5.x (backport in progress)

Amended Theory 1

(See the original theory 1 in https://github.com/containerd/containerd/issues/4604#issuecomment-1006013231)

Docker has a similar "reserve container name" mechanism to prevent conflicts. However, dockershim handles it differently from the containerd CRI implementation.

err = docker.CreateContainer
if (err == container name conflict) {
    removeErr = docker.RemoveContainer
    if (removeErr == nil) {
        return err
    } else {
        if (removeErr == "Container not found") {
            randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
            return docker.CreateContainer
        } else {
            return err
        }
    } 
}

https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284

In fact, this difference in retry behavior leads to significantly different CRI request rates between dockershim and containerd. With containerd, a CreateContainer request comes about every 10-20s (see the example in https://github.com/containerd/containerd/issues/4604#issue-716346199), but with dockershim, a CreateContainer request comes about every 2min. This is because requests that hit "failed to reserve name" fail fast in containerd, whereas a request with a new container name can take up to 2min in dockershim. The same applies to RunPodSandbox. So the load of CRI requests under containerd is roughly 10x the load under dockershim, and I infer this overloads the node even further.

This theory echoes a similar bug solved in CRI-O (https://bugzilla.redhat.com/show_bug.cgi?id=1785399), in which the solution says "Now, when systems are under load, CRI-O does everything it can to slow down the Kubelet and reduce load on the system."

I believe our direction should also be to slow down the Kubelet when it sends too many requests. This might be aligned with Mike’s comment: https://github.com/containerd/containerd/issues/4604#issuecomment-1013268187

Happened to me. It’s a really serious bug when you run your GitLab CI/CD runners on containerd-based k8s, because some pipelines are designed to run multiple containers in parallel, so this bug happens very often. Is going back to docker really the only option here?

@mikebrow I investigated a reported issue in k/k before https://github.com/kubernetes/kubernetes/issues/94085

My summary is that kubelet has correct logic for incrementing the restart number, which is set to "current_restart + 1". See this kubelet code.

  1. If the CreateContainer request eventually succeeds on the containerd side, kubelet will see it and increment the restart count on the next iteration of SyncPod. The pod will eventually be ready.
  2. If the CreateContainer request eventually fails on the containerd side, containerd should release the name. On the next iteration of Kubelet SyncPod, it shouldn’t see the "failed to reserve container name" error.
  3. If the CreateContainer request is stuck on the containerd side, the name is never released. Then kubelet will keep seeing the "failed to reserve container name" error.
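A hedged, simplified sketch of the attempt-number logic described above (these types and functions are stand-ins, not the real kuberuntime code): the attempt number used in the container name only advances once a previous attempt has surfaced as an observed container status, which is why retries in case 3 keep reusing attempt 0.

package main

import "fmt"

// containerStatus stands in for kubelet's observed status of a container.
type containerStatus struct {
	RestartCount int
}

// nextAttempt mirrors the "current_restart + 1" logic described above.
func nextAttempt(last *containerStatus) int {
	if last == nil {
		// No container has ever been observed for this name, so attempt 0 is
		// reused. While the original attempt-0 create is stuck in containerd,
		// every retry collides with the still-reserved "_0" name.
		return 0
	}
	return last.RestartCount + 1
}

func main() {
	fmt.Println(nextAttempt(nil))                               // 0: first attempt (and any retry before a status exists)
	fmt.Println(nextAttempt(&containerStatus{RestartCount: 0})) // 1: a previous attempt was actually observed
}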

@fuweid

Thanks for your time on this issue.

Unfortunately, I did stop using COS back in 2020 after we could not find a solution.

I’m 97% sure we were using overlayfs and as for the rest I have no way to find this historical data.

Sorry about that.

Hi @matti, @kubino148, @sadortun and all subscribers, would you mind providing the goroutine stack of containerd when you see the error? Thanks.

kill -USR1 $(pidof containerd) will trigger the dump; then check the containerd log to get the stack.

Ran into this on KinD (kindest/node:v1.21, single node) when disk IO was higher than expected during tests, which I suspect was caused/exacerbated by creating too many pods at once. Creating fewer pods at once still didn’t work at first, but restarting containerd and kubelet (in that order) caused those few pods to come up as expected. I was then able to slowly scale all of the test pods back up to their expected replica counts without a problem. My guess is that once this error occurs, kubelet and containerd are "stuck", but restarting them appears to "un-stuck" them. No idea whether this has any applicability to an actual production environment.

This also happens with UBUNTU_CONTAINERD, not just COS_CONTAINERD

I confirm that moving back to docker solves the problem:

gcloud container clusters upgrade mycluster --image-type cos --node-pool mynodepool --zone myzone