kubernetes: Cannot delete jobs when their associated pods are gone

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

no


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-08T02:50:34Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Bare metal
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie) 8.7
  • Kernel (e.g. uname -a): Linux mt11 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux
  • Install tools:
  • Others:

What happened:

Pods created by jobs have been deleted. Attempting to delete the jobs afterwards succeeds according to the output of kubectl, but the jobs reappear immediately. The only fix I could find was to remove the keys directly in etcd.
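
For reference, a hedged sketch of that etcd-level cleanup (assuming the etcd2 API and the default /registry key prefix used by a 1.5 control plane; the namespace and job name below are placeholders):

# Assumption: etcd2 API and the default /registry prefix; "default"
# and "myjob" are placeholders for the affected namespace and job.
etcdctl rm /registry/jobs/default/myjob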

What you expected to happen:

kubectl should delete the jobs even if the associated pod is gone.

How to reproduce it (as minimally and precisely as possible):

Let a CronJob create jobs, then delete those jobs' pods. Afterwards, try to delete the jobs with kubectl delete job X.
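
A minimal reproduction sketch along those lines (all names are placeholders; on 1.5 the CronJob API lives in batch/v2alpha1 and must be enabled on the apiserver with --runtime-config=batch/v2alpha1=true):

# Create a CronJob that spawns a job every minute.
kubectl create -f - <<EOF
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: repro-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
          restartPolicy: Never
EOF

# Once a job has run, delete its pods directly (the job controller
# labels them with job-name=<generated-job-name>):
kubectl delete pod -l job-name=<generated-job-name>

# Now try to delete the job itself; the delete appears to succeed,
# but the job reappears immediately:
kubectl delete job <generated-job-name>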

Anything else we need to know:

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 42 (29 by maintainers)

Most upvoted comments

Ran into the same problem … kubectl delete job myjobname -n mynamespace --cascade=false deleted the job (which no longer had any pods)

@john-bakker I tried the command mentioned by @WolfgangMau above

kubectl delete job kafka-test-job-7gsd9 --cascade=false

and that deleted the job. It still seems like this should not be needed, but my immediate problem of having to special-case this job in my scripts is resolved.
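
For scripts, a hedged sketch of that special-casing (job name and namespace are placeholders; note that --cascade=false deletes only the Job object and orphans anything that still depends on it):

#!/bin/sh
# Hypothetical fallback: try a normal cascading delete first, and
# orphan-delete the job if the normal delete times out.
JOB=kafka-test-job-7gsd9   # placeholder
NS=kaste503                # placeholder

if ! kubectl delete job "$JOB" -n "$NS" --timeout=30s; then
  kubectl delete job "$JOB" -n "$NS" --cascade=false
fi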

Having the same problem here in 1.9.3; one of the developers deleted its pods before deleting the job.

NAME                      DESIRED   SUCCESSFUL   AGE
kafka-test-job-7gsd9      1         1            14d
john@DESKTOP-9IS4C8K:~/.ssh$ kubectl delete job kafka-test-job-7gsd9 -n kaste503 --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: timed out waiting for "kafka-test-job-7gsd9" to be synced

Can this item be reopened?

I think I am seeing the same issue. Minikube, Kubernetes client/server 1.10.2. I have a job:

k get job
NAME                         DESIRED   SUCCESSFUL   AGE
ctsstore-replication-setup   1         1            20h

That can’t be deleted, even with --force:

k delete job ctsstore-replication-setup --force
error: timed out waiting for "ctsstore-replication-setup" to be synced

The job was created by helm.
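
For context, --cascade=false corresponds to an orphaning delete (propagationPolicy: Orphan) at the API level, so a hedged raw-API equivalent via kubectl proxy (the namespace and job name are placeholders taken from the example above) looks like:

# Hypothetical sketch: orphaning delete through the API server.
# 8001 is kubectl proxy's default port.
kubectl proxy &
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  http://127.0.0.1:8001/apis/batch/v1/namespaces/default/jobs/ctsstore-replication-setup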