kubernetes: ScheduledJobs: jobs.batch already exists
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:10:32Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
- Kernel (e.g. uname -a): 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux
What happened:
Deployed a scheduled job from this manifest:
---
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: expire-documents
spec:
  schedule: 15,45 * * * *
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: expire-documents-job
            image: registry.example.com/base:alpine
            imagePullPolicy: Always
            args:
            - curl
            - -XPOST
            - -v
            - http://document.stage.svc.example.cluster:8080/admin/tasks/expire-documents
          restartPolicy: OnFailure
          imagePullSecrets:
          - name: my-registry
After some time the jobs are no longer executed:
kubectl --namespace stage describe scheduledjobs
Name: expire-documents
Namespace: stage
Schedule: 15,45 * * * *
Concurrency Policy: Allow
Suspend: False
Starting Deadline Seconds: <unset>
Image(s): registry.example.com/base:alpine
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
No volumes.
Labels: <none>
Last Schedule Time: Sat, 08 Oct 2016 18:15:00 +0200
Active Jobs: <none>
Events:
FirstSeen  LastSeen  Count  From                        SubobjectPath  Type     Reason        Message
---------  --------  -----  ----                        -------------  ----     ------        -------
1h         31m       178    {scheduled-job-controller }                Warning  FailedCreate  Error creating job: jobs.batch "expire-documents-1858111306" already exists
30m        1m        177    {scheduled-job-controller }                Warning  FailedCreate  Error creating job: jobs.batch "expire-documents-1858242378" already exists
8h         1m        14     {scheduled-job-controller }                Warning  FailedCreate  (events with common reason combined)
56s        6s        6      {scheduled-job-controller }                Warning  FailedCreate  Error creating job: jobs.batch "expire-documents-1400211256" already exists
What you expected to happen:
Jobs are executed as defined in the manifest.
How to reproduce it (as minimally and precisely as possible):
Deploy a scheduled job and wait for some time.
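For completeness, a minimal repro sketch (the file name expire-documents.yaml is an assumption; the manifest is the one above, and on 1.4 the batch/v2alpha1 group may need to be enabled on the apiserver via --runtime-config=batch/v2alpha1=true):

kubectl --namespace stage create -f expire-documents.yaml
# let a few schedule ticks (15/45 past the hour) pass, then check the controller events
kubectl --namespace stage describe scheduledjobs expire-documents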
About this issue
- State: closed
- Created 8 years ago
- Reactions: 4
- Comments: 23 (12 by maintainers)
Fix in flight: https://github.com/kubernetes/kubernetes/pull/36812. I’ll make sure to cherry-pick it back to 1.4.
Do the jobs start to reschedule if you delete the old one? Try running kubectl get jobs.
cc @soltysh
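A rough sketch of that check and cleanup, using the namespace and one of the stale job names from the events above (substitute whatever kubectl get jobs actually reports):

kubectl --namespace stage get jobs
kubectl --namespace stage delete job expire-documents-1858111306

If the name collision is the only problem, the controller should be able to create jobs again on the next scheduled run.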
Looks like it should be available later today: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-dev/00lJpT9pxDc/YZfDzCxMDQAJ
There’s a fix (https://github.com/kubernetes/kubernetes/pull/35420) that I’ve submitted against master for the Replace strategy; I’ll mark it for inclusion in 1.4.
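For reference, the Replace strategy that PR addresses is chosen via the concurrencyPolicy field of the ScheduledJob spec; a minimal sketch based on the manifest above (args and imagePullSecrets trimmed for brevity; the reporter's manifest uses the default, Allow):

apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: expire-documents
spec:
  schedule: 15,45 * * * *
  concurrencyPolicy: Replace   # Allow (default), Forbid, or Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: expire-documents-job
            image: registry.example.com/base:alpine
          restartPolicy: OnFailure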