origin: Container creation fails because of "Failed create pod sandbox"
Pods are not getting created anymore
Version
Client: oc v3.6.173.0.7, kubernetes v1.6.1+5115d708d7 (features: Basic-Auth)
Server: https://api.starter-ca-central-1.openshift.com:443, openshift v3.7.0-0.143.7, kubernetes v1.7.0+80709908fd
Steps To Reproduce
- create an application (e.g. redis (persistent) from the catalog)
- check pod/container creation
- wait for timeouts
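The steps above can be sketched with the `oc` CLI. The template name `redis-persistent` is an assumption (the report only says "redis (persistent) from catalog"), as is a logged-in session; adjust to match your catalog.

```shell
# Hypothetical reproduction of the steps above, assuming a logged-in
# `oc` session and that the catalog item maps to the "redis-persistent"
# template. Guarded so it is a no-op on machines without the oc CLI.
if command -v oc >/dev/null 2>&1; then
    oc new-app redis-persistent               # create the application from the catalog
    oc get pods                               # check pod/container creation
    oc get events --sort-by=.lastTimestamp    # after the timeout, inspect the sandbox events
    status="ran against cluster"
else
    status="skipped: oc CLI not installed"
fi
echo "$status"
```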
Current Result
Warn messages on pod:
| Time | Type | Reason | Message |
| --- | --- | --- | --- |
| 1:33:46 PM | Normal | Sandbox changed | Pod sandbox changed, it will be killed and re-created. (2 times in the last 5 minutes) |
| 1:33:42 PM | Warning | Failed create pod sandbox | Failed create pod sandbox. (2 times in the last 5 minutes) |

→ pod is not created
The only real error I could grab was:
Failed kill pod | error killing pod: failed to "KillPodSandbox" for "c4c2ec61-ba29-11e7-8b2c-02d8407159d1" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"redis-1-deploy_instantsoundbot\" network: CNI request failed with status 400: 'Failed to execute iptables-restore: exit status 4 (Another app is currently holding the xtables lock. Perhaps you want to use the -w option?\n)\n'"
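The "xtables lock" in that error means two processes tried to modify iptables rules at the same time; `iptables-restore` without the `-w` option the message suggests gives up immediately (the `exit status 4` case) instead of waiting for the lock. A minimal simulation of that contention, using `flock(1)` on a temp file rather than the real `/run/xtables.lock` (which requires root); assumes a Linux host with util-linux:

```shell
# Simulate the xtables lock race from the CNI error above.
LOCK=$(mktemp)

flock "$LOCK" sleep 2 &   # "Another app is currently holding the xtables lock"
sleep 0.2                 # let the background job grab the lock first

# Non-blocking attempt, like iptables-restore without -w: fails at once.
flock -n "$LOCK" true || echo "lock busy: this is the exit-status-4 case"

# Blocking attempt, like iptables-restore -w: waits until the lock frees.
flock "$LOCK" echo "lock acquired after waiting"

wait
rm -f "$LOCK"
```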
Expected Result
Pods should start up as they used to.
Additional Information
I couldn’t get `oc adm diagnostics` working at the moment; I guess it could be related to the introduction of https://github.com/openshift/origin/pull/15880
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Comments: 35 (9 by maintainers)
Still having this problem on starter-us-west-2. I’ve got 7 failed deployments in a row with this error message.
Same problem on starter-ca-central-1.openshift.com
Facing the same issue; not possible to roll out anything on starter-ca-central-1.openshift.com. Hope it will be fixed soon.
Seeing this on pro-us-east-1
Seeing this (or something similar) currently on OpenShift Online starter-us-west-1. Unable to build or deploy because of it. No logs from pods that have this issue. Status page says all green.
Seeing this the last couple of days on pro-us-east-1 as well
This is a known issue that has a fix, which is being rolled out to the starter clusters presently.
Same issue deploying rhscl/mysql-57-rhel7 on starter-us-east-1.