kubernetes: kubectl apply fails with AlreadyExists on create even though the command was apply
Is this a request for help? No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl apply creating upsert
Is this a BUG REPORT or FEATURE REQUEST? Bug (I think)
Kubernetes version (use `kubectl version`): 1.6.1
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): CoreOS 1298.7.0
- Kernel (e.g. `uname -a`): 4.9.16-coreos-r1
- Install tools: downloaded and installed k8s directly
- Others:
What happened: When doing `kubectl apply -f <url>` on multiple master nodes, some invocations fail as if the command had been `kubectl create` instead of apply.
As I understand it, `kubectl apply` is an upsert: "create if it does not exist, else update". I have a cloud-init run on all master nodes to instantiate my cluster. It installs networking via `kubectl apply -f https://git.io/weave-kube-1.6`. Of course, this runs on all master nodes. In order to avoid an error (and thus failing out the cloud-init script), it specifically does `kubectl apply` instead of `kubectl create`. Yet I get errors:
Apr 06 13:42:16 ip-10-50-22-238.ec2.internal bash[1112]: clusterrole "weave-net" configured
Apr 06 13:42:20 ip-10-50-22-238.ec2.internal bash[1112]: Error from server (AlreadyExists): error when creating "https://git.io/weave-kube-1.6": serviceaccounts "weave-net" already exists
Apr 06 13:42:20 ip-10-50-22-238.ec2.internal bash[1112]: Error from server (AlreadyExists): error when creating "https://git.io/weave-kube-1.6": clusterrolebindings.rbac.authorization.k8s.io "weave-net" already exists
Apr 06 13:42:20 ip-10-50-22-238.ec2.internal bash[1112]: Error from server (AlreadyExists): error when creating "https://git.io/weave-kube-1.6": daemonsets.extensions "weave-net" already exists
Notice that it errors out with AlreadyExists, but only for the `clusterrolebinding`, `daemonset`, and `serviceaccount`. The `clusterrole` responds with the correct `configured` for a resource that already exists, but the others fail on the create path.
I will wrap it in something that catches the failure, but it would be better if this just worked.
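For reference, a minimal sketch of the kind of wrapper I mean, assuming bash in the cloud-init unit; the retry count and sleep are arbitrary choices, not values from the actual script:

```bash
#!/usr/bin/env bash
# Retry kubectl apply a few times so a transient AlreadyExists from a
# concurrent master does not fail the cloud-init unit. Sketch only.
set -u
MANIFEST="https://git.io/weave-kube-1.6"

for attempt in 1 2 3; do
  if kubectl apply -f "$MANIFEST"; then
    exit 0                       # created or configured -- either is fine
  fi
  echo "kubectl apply failed (attempt ${attempt}), retrying..." >&2
  sleep 5                        # let the node that won the create finish
done
exit 1                           # still failing: surface the real error
```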
I suspect a race condition. Two or three of the nodes run at the same time. One of them creates the `clusterrole` first, so the others see it and apply cleanly; but for the other resources, two nodes check for existence at the same time, both find nothing, only one create succeeds, and the other fails with AlreadyExists.
I guess the real question is: is the apply logic of "if it exists, update; else create" applied at the server API level, or at the kubectl CLI level? If the latter, this explanation makes sense.
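To make the suspected race concrete, here is a rough shell-level approximation of a client-side "check then create/update" flow for a single object. This is an illustration of the race window, not the actual kubectl implementation, and `serviceaccount.yaml` is a hypothetical stand-in for the ServiceAccount from the Weave manifest:

```bash
# Approximation of a client-side upsert for one object.
if kubectl -n kube-system get serviceaccount weave-net >/dev/null 2>&1; then
  # Object exists: take the update path.
  kubectl -n kube-system apply -f serviceaccount.yaml
else
  # Object missing: take the create path. Another master can create it
  # between our get and our create, so this fails with AlreadyExists even
  # though we "checked first".
  kubectl -n kube-system create -f serviceaccount.yaml
fi
```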
What you expected to happen:
`kubectl apply` should upsert, essentially making no change and returning with an exit code of 0.
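In other words (a hedged illustration of the expected idempotent behaviour, not output captured from this cluster), applying the same manifest twice, or from two masters one after the other, should report configured/unchanged rather than error:

```bash
kubectl apply -f https://git.io/weave-kube-1.6   # first run: resources created
kubectl apply -f https://git.io/weave-kube-1.6   # second run: "configured", not AlreadyExists
echo $?                                          # expected: 0
```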
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 5
- Comments: 77 (45 by maintainers)
Commits related to this issue
- Fix race condition issue for k8s multi masters When creating a multi-master cluster, all master nodes will attempt to create kubernetes resources in the cluster at this same time, like coredns, the ... — committed to openstack/magnum by openstacker 6 years ago
- Fix race condition issue for k8s multi masters When creating a multi-master cluster, all master nodes will attempt to create kubernetes resources in the cluster at this same time, like coredns, the ... — committed to NeCTAR-RC/magnum by openstacker 6 years ago
- ocp4: use --server-side flag for `oc apply` in the e2e manual remediation `kubelet_enable_protect_kernel_defaults`, the script was failing since the object already existed. We want to make the script... — committed to mrogers950/content by JAORMX 4 years ago
will fix this before this weekend.
I am honoured @hjkatz 😄
/reopen
This is definitely still an issue, /reopen
This issue poses challenges to those who run `kubectl apply` in their deployment pipelines.
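For pipelines on clusters far newer than the 1.6.x reported here, the last commit linked above points at one mitigation: let the API server make the create-vs-update decision with server-side apply. A minimal sketch, assuming a cluster version that supports the flag:

```bash
# Server-side apply moves the upsert decision into the API server, which
# removes the client-side check-then-create window described in this issue.
kubectl apply --server-side -f https://git.io/weave-kube-1.6
```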