kubernetes: ScheduledJob of kubernetes never gets called

What happened: We created a scheduled job from the example in the documentation:

kubectl run hello --schedule="0/1 * * * ?" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"

kubectl get scheduledjob hello
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST-SCHEDULE
hello     0/1 * * * ?   False     0         <none>

It never creates any associated jobs.
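When a ScheduledJob exists but never fires, the events recorded against it are the first place to look. A sketch of the usual checks (requires a live cluster; `hello` is the job name from above):

```shell
# Show the ScheduledJob's spec, status, and any events the controller recorded
kubectl describe scheduledjob hello

# List recent cluster events in chronological order; failures to create
# jobs from the schedule usually surface here
kubectl get events --sort-by=.metadata.creationTimestamp
```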

What you expected to happen: It should create jobs per the scheduled job specification.
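For reference, the same object can be written as a manifest instead of using `kubectl run`. This is a sketch against the batch/v2alpha1 alpha API (field names per that version, where the kind was still ScheduledJob rather than CronJob):

```yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: "0/1 * * * ?"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure
```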

How to reproduce it (as minimally and precisely as possible): We noticed this problem on v1.4.3; however, our stage cluster runs v1.4.4, and the scheduled job does create associated jobs there.

Output from stage cluster:

kubectl get jobs
NAME                     DESIRED   SUCCESSFUL   AGE
hello-1859028810         1         1            11m
hello-1934853965         1         1            12m
hello-1934985037         1         1            9m
hello-1935116109         1         1            6m
hello-2010810192         1         1            10m
hello-2010941264         1         1            7m
hello-2011072336         1         1            4m
hello-2011203408         1         1            1m
hello-2086766419         1         1            8m
hello-2086897491         1         1            5m
hello-2087028563         1         1            2m
hello-2162853718         1         1            3m
hello-2162984790         1         1            33s

Kubernetes version (use kubectl version): 1.4.3

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CoreOS 1122.3.0
  • Kernel (e.g. uname -a): Linux -1 4.7.0-coreos-r1
  • Others: Enabled the batch/v2alpha1 API group via the apiserver flag --runtime-config=batch/v2alpha1.
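A quick way to confirm the alpha API group is actually being served after setting that flag (a sketch; requires a running cluster):

```shell
# batch/v2alpha1 must appear in the served API versions,
# otherwise the --runtime-config flag did not take effect
kubectl api-versions | grep batch/v2alpha1
```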

Is this issue tied to a specific Kubernetes version, such as v1.4.3, where jobs could not be created per the scheduled job specification?

Please let us know if you need any more information on this. Thanks!

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 18 (13 by maintainers)

Most upvoted comments

FWIW I had this same issue on my cluster (spoiler: restart your controller manager after enabling the API):

  • upgraded to 1.5.1
  • failed to schedule a CronJob due to api not enabled
  • enabled the API on the apiserver (flag --runtime-config=batch/v2alpha1)
  • then scheduled a CronJob. No executions took place
  • restarted kube-scheduler. No executions took place
  • restarted kube-controller-manager. Executions took place successfully
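The steps above can be sketched as the following recovery sequence on a systemd-managed node. The unit names are assumptions and vary by installation method (some setups run these components as static pods instead):

```shell
# After adding --runtime-config=batch/v2alpha1 to the apiserver flags,
# restart the apiserver so the flag takes effect...
sudo systemctl restart kube-apiserver

# ...then restart the controller manager, which caches API discovery
# information and will not pick up the newly enabled group otherwise
sudo systemctl restart kube-controller-manager
```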

Only the controller-manager restart should be required; the scheduler has nothing to do with this, IIRC. I've created https://github.com/kubernetes/kubernetes.github.io/pull/2215 to address the issue.