kustomize: Starting with 00f0fd71, performance degradation for strategic merge patches

The run time for strategic merge patches in 00f0fd71 is roughly five times the run time for 1c6481d0 (the commit immediately before 00f0fd71).

Apologies for the lack of a test case PR.

Test setup

kustomization.yaml

resources:
  - resources.yaml

patchesStrategicMerge:
  - merge.yaml

resources.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  template:
    spec:
      containers:
      - name: this-is-my-container
        image: this-is-my-image

merge.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  template:
    metadata:
      annotations:
        anno1: value1

Equivalent output

The kustomize build output is identical for the two commits:

$ diff <(./kustomize-1c6481d0 build ./input) <(./kustomize-00f0fd71 build ./input)
$
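Since the diff is empty, either binary's output can serve as the reference. Given the strategic merge patch above, the merged result should look roughly like the following (the exact field ordering in the real output may differ):

$ ./kustomize-1c6481d0 build ./input
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  template:
    metadata:
      annotations:
        anno1: value1
    spec:
      containers:
      - name: this-is-my-container
        image: this-is-my-image
$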

Expected run time (based on 1c6481d0)

Execution using 1c6481d0 takes about an eighth of a second:

$ time ./kustomize-1c6481d0 build ./input > /dev/null

real    0m0.124s
user    0m0.122s
sys     0m0.016s
$

Actual run time for 00f0fd71

Execution using 00f0fd71 takes about five eighths of a second:

$ time ./kustomize-00f0fd71 build ./input > /dev/null

real    0m0.636s
user    0m0.650s
sys     0m0.021s
$
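As a quick sanity check on the headline claim: 0.636 s / 0.124 s ≈ 5.1, so the wall-clock time for 00f0fd71 is indeed roughly five times that of 1c6481d0.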

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 3
  • Comments: 34 (14 by maintainers)

Most upvoted comments

@ephesused there is indeed a significant slowdown starting with 00f0fd71. However, it’s rather difficult to pinpoint the exact changeset that caused it, because there were both related and unrelated performance changes over time that affect the test results, even with a very minimal set of manifests.

But I agree, it would be better to open a new ticket; I will do that tomorrow. I have already updated my tests so that the versions (and git revisions, to be built locally) under test can be specified conveniently, which gives a set of reference points; I just need to finish a few minor things.
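A minimal sketch of that kind of harness, assuming locally built binaries named kustomize-<version-or-revision> and the ./input directory from the test setup above (both names are assumptions, not part of the original report):

#!/bin/bash
# Hypothetical harness: time each locally built kustomize binary against the
# same input directory. The binary names and ./input are assumed for illustration.
for bin in ./kustomize-1c6481d0 ./kustomize-00f0fd71 ./kustomize-v3.8.4 ./kustomize-v4.0.5; do
  echo "== $bin =="
  time "$bin" build ./input > /dev/null
done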

For what it’s worth, I’m not seeing any performance improvement between v4.3.0 and master (after the merge of #4152).

I was doing some tests with the latest version, v4.0.5, before upgrading our CI/CD. The performance drop on my machine is dramatic compared to v3.8.4 (which, as others have pointed out, was already much slower than v3.5.4):

v4.0.5

time ./kustomize build stage -o stage.yaml

real	1m15.867s
user	2m16.644s
sys	0m0.766s

v3.8.4

time kustomize build stage -o stage.yaml

real	0m27.704s
user	0m47.028s
sys	0m0.518s

The generated file is about 2.2MB, so pretty big.

We really can’t use that in our CI/CD. What has changed that could explain this poor performance?
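(For scale, 1m15.867s is about 75.9 s versus 27.704 s, i.e. roughly a 2.7x increase in wall-clock time between v3.8.4 and v4.0.5.)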

I also noticed this when installing Kubeflow via Kustomize. It takes over 20s to build the manifests; I am using v3.9.1.

Hmm, that’s interesting. I see no tangible improvement when running “kustomize build --enable_kyaml=false” on kustomize versions > v3.5.4. So maybe it isn’t kyaml that’s causing the degradation. Per my test results, I’m guessing it must be the introduction of one of these other components:

  • kustomize/api v0.3.3
  • sigs.k8s.io/yaml v1.2.0

Those changed between kustomize v3.5.4 and later versions.
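One way to confirm which versions of those modules a particular binary was built with is Go's embedded module information (a sketch; the binary name is hypothetical and the binary must have been built from Go modules):

$ go version -m ./kustomize | grep -E 'kustomize/api|sigs.k8s.io/yaml'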

For the test case in this issue, I don’t see meaningful performance differences with v3.9.1. For performance trouble arising in recent master builds and v3.9.1, https://github.com/kubernetes-sigs/kustomize/issues/2869#issuecomment-745363365 may be related.

(Edited to clarify that this comment was in response to @marcghorayeb’s comment.)