kustomize: Kustomize doesn't support metadata.generateName

I am trying to use kustomize with https://github.com/argoproj/argo.

Example spec:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
args: ["hello world"]

Argo Workflow resources don’t require metadata.name (they use metadata.generateName instead), but I am getting the following error when I try to run kustomize build on an Argo Workflow resource:

Error: loadResMapFromBasesAndResources: rawResources failed to read Resources: Missing metadata.name in object {map[args:[hello world] kind:Workflow metadata:map[generateName:hello-world-] spec:map[entrypoint:whalesay templates:[map[container:map[command:[cowsay] image:docker/whalesay:latest] name:whalesay]]] apiVersion:argoproj.io/v1alpha1]}

Is there a way for me to override where kustomize looks for a name to metadata.generateName?

About this issue

  • State: open
  • Created 6 years ago
  • Reactions: 64
  • Comments: 75 (12 by maintainers)

Most upvoted comments

The workaround I’ve used for Argo specifically is to define my workflow with a name:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-world-
spec:
  ...

And then tell Kustomize to move that to generateName as the last patch:

resources:
  - hello-world.yaml
patches:
  - patch: |-
      - op: move
        from: /metadata/name
        path: /metadata/generateName
    target:
      kind: Workflow

This is obviously not very good, but it does let us use Kustomize with Argo (and without writing a Kustomize plugin).
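
For reference, here is a sketch of what kustomize build should emit for that layout, with the name moved back into generateName:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  ...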

I wanted to use generateName with kustomize too.

It appears that the vast majority of us are just trying to generate random names for our Jobs. If kustomize simply generated the name using generateName as the prefix, most of us would be perfectly happy with it.

The workaround above does not work for me on Kustomize v4.2.0; it throws the following error. My files are shown below.

panic: number of previous names, number of previous namespaces, number of previous kinds not equal

goroutine 1 [running]:
sigs.k8s.io/kustomize/api/resource.(*Resource).PrevIds(...)
	sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:345
sigs.k8s.io/kustomize/api/resource.(*Resource).OrgId(0xc0003513b0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	sigs.k8s.io/kustomize/api@v0.8.11/resource/resource.go:328 +0x17a
sigs.k8s.io/kustomize/api/builtins.(*PrefixSuffixTransformerPlugin).Transform(0xc002a11d40, 0x4851050, 0xc00000f5f0, 0x0, 0x0)
	sigs.k8s.io/kustomize/api@v0.8.11/builtins/PrefixSuffixTransformer.go:52 +0xa9
sigs.k8s.io/kustomize/api/internal/target.(*multiTransformer).Transform(0xc0000d34e8, 0x4851050, 0xc00000f5f0, 0xc002dabaa0, 0x6)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/multitransformer.go:30 +0x79
sigs.k8s.io/kustomize/api/internal/accumulator.(*ResAccumulator).Transform(...)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/accumulator/resaccumulator.go:142
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).runTransformers(0xc0002f9040, 0xc000366240, 0x0, 0x0)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:270 +0x225
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).accumulateTarget(0xc0002f9040, 0xc000366240, 0xc00022f610, 0x4449453, 0x0)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:195 +0x2b0
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).AccumulateTarget(0xc0002f9040, 0x0, 0xffffffffffffffff, 0x0)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:156 +0xce
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).makeCustomizedResMap(0xc0002f9040, 0x0, 0x0, 0x0, 0x1)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:111 +0x2f
sigs.k8s.io/kustomize/api/internal/target.(*KustTarget).MakeCustomizedResMap(...)
	sigs.k8s.io/kustomize/api@v0.8.11/internal/target/kusttarget.go:107
sigs.k8s.io/kustomize/api/krusty.(*Kustomizer).Run(0xc0000d3d60, 0x484ea60, 0x4c2c0d8, 0x7ffeefbff96f, 0x1f, 0x0, 0x0, 0x0, 0x0)
	sigs.k8s.io/kustomize/api@v0.8.11/krusty/kustomizer.go:88 +0x3dd
sigs.k8s.io/kustomize/kustomize/v4/commands/build.NewCmdBuild.func1(0xc000320580, 0xc00031b500, 0x1, 0x3, 0x0, 0x0)
	sigs.k8s.io/kustomize/kustomize/v4/commands/build/build.go:80 +0x1c9
github.com/spf13/cobra.(*Command).execute(0xc000320580, 0xc00031b4d0, 0x3, 0x3, 0xc000320580, 0xc00031b4d0)
	github.com/spf13/cobra@v1.0.0/command.go:842 +0x472
github.com/spf13/cobra.(*Command).ExecuteC(0xc000320000, 0x0, 0xffffffff, 0xc00002e0b8)
	github.com/spf13/cobra@v1.0.0/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.0.0/command.go:887
main.main()
	sigs.k8s.io/kustomize/kustomize/v4/main.go:14 +0x2a

cron-workflow.yaml:

apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: foo
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: main
            templateRef:
              name: main
              template: main

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
  - cron-workflow.yaml
patches:
  - target:
      kind: CronWorkflow
    path: patch.yaml

patch.yaml:

- op: move
  from: /metadata/name
  path: /metadata/generateName
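
One possible mitigation, untested against this exact panic, is to split the build into two layers so the namespace transformer and the move patch never run in the same kustomization: the base applies the namespace, and an overlay whose only job is the move patch consumes the base as a resource (the directory names below are illustrative).

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
  - cron-workflow.yaml

overlay/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - target:
      kind: CronWorkflow
    path: patch.yaml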

/remove-lifecycle rotten

I wanted to use generateName with kustomize but I can’t 😦

Also ran into this issue.

And 5 years later we have no solution.

/triage accepted
/kind bug

We will take a closer look to see if we can fix this.

/remove-lifecycle stale

#627 is about names, but currently I see it as a feature request.

This bug and #586 are noting that kustomize doesn’t recognize the kubernetes API directive generateName, which is indeed a bug.

This directive is a kustomize-like feature introduced before kustomize… (complicating our lives).

We might try to allow it and work with it - or disallow it and provide an alternative mechanism.

/remove-lifecycle stale

Agreed, let’s re-open and solve the issue.

Just ran into this as well when working with Jobs.

That’s definitely an interesting idea. A few complications come to mind:

  1. Resources can be added from several different sources, and each entrypoint would need to handle the conversion (probably doable)
  2. Name references can’t work with generateName (we don’t have the real name, and we can’t possibly get it), but with this solution, Kustomize’s transformers would do incorrect and potentially confusing things with it. For example, the temporary internal name could show up in object references, or be targeted by replacements (see the sketch after this list).
  3. Plugin transformers would be exposed to the workaround, i.e. they would receive objects with both fields populated and need to know what’s up with that. In other words, the annotation would need to become part of the KRM Functions standard. I’m very reluctant to do that, since the reason for it is Kustomize internals.
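
To make the second point concrete (the names and values below are hypothetical): suppose Kustomize internally assigned a placeholder name such as hello-world-x7k2p. A replacement that reads metadata.name would then copy that placeholder into another object, even though the API server will generate a completely different name at creation time:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  # internal placeholder during the build: name: hello-world-x7k2p
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-info
data:
  # stale value baked in by the replacement; the real name is unknowable at build time
  workflow: hello-world-x7k2p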

Please provide a solution, or tell us how we can support the kustomize team in closing this issue - it is blocking our pipeline development in Argo Workflows at the moment, as we want to use kustomize for our pipelines.

I’m working on a project where Kustomize was selected for ArgoCD. We were hoping to extend its use to our Argo Workflows, but this is a significant impediment. Deleting previous workflow instances, as was suggested in one comment, does not align with our operational requirements. I’m sure there are plenty of use cases outside of Argo Workflows that are also affected by this issue.

Is there a progress update on this? It seems like a pretty big limitation. Not being able to use Kustomize with CRDs such as those in Argo Workflows is a huge drawback.

Is there any progress?

/reopen

I’ve just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole above).

I ran into this too. I went looking for a solution and, after some failed attempts, the patch below seemed to work:

patches: 
- patch: | 
    - op: replace
      path: /metadata
      value: 
        generateName: data-migration-
  target: 
    kind: Job

Oh no, it is only OK for kubectl kustomize; it fails with Argo CD, and running kustomize build fails too. Kustomize version: v4.4.0.

We also just ran into this. All related PRs got closed, so what’s the plan now? What exactly can we do to move this forward?

I also just ran into this issue while attempting to use generateName on Jobs. It would be nice to see a fix or an official workaround communicated.

I don’t want to get anyone’s hopes up, but here’s a PR that works in local testing: https://github.com/kubernetes-sigs/kustomize/pull/4981.

I don’t understand kustomize enough to know if this is really sufficient, but I look forward to comments pointing me in the right direction.

Still no workaround for this yet. I find myself moving from kustomize to Helm charts just because of this 🤦🏿

We worked around the Job name problem by setting ttlSecondsAfterFinished: 0 [1] so the Job is removed automatically after completion. The next time the Job is created, the previous one is already deleted.

[1] https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically
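
A minimal sketch of that setting (the Job body itself is illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration   # fixed name; safe to reuse because the Job deletes itself
spec:
  ttlSecondsAfterFinished: 0   # remove the Job as soon as it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: alpine:3.19
          command: ["sh", "-c", "echo migrating"]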

Perhaps a good way for kustomize to deal with generateName seamlessly would be to:

  1. Detect generateName and assign the object a random name with the given generateName prefix, but note that it did (for example, it could save some special annotation on the object).
  2. Deal with the object as with all others.
  3. Before returning the output, do a final pass and remove the name from all resources that have the special annotation set.

This could probably be implemented fairly quickly in code too, since it doesn’t break kustomize’s assumption of identifying objects by apiVersion/kind/name/namespace. A sketch of what step 1 could look like is below.
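
To illustrate (the annotation key and the random suffix are made up for this sketch), the object would pass through the build looking like this, and the final pass would strip name and the marker annotation again, leaving only generateName in the output:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: data-migration-
  name: data-migration-x7k2p   # temporary, so transformers can identify the object
  annotations:
    kustomize.internal/generated-name: "true"   # hypothetical marker for the final strip pass
spec:
  ...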

I’m also fighting this issue. It’s useful to run a Job every time we deploy a new version, and for that we need to generate Job names randomly.

This was opened a long time ago… Any timeline?

I’d like to see this too; the implications of not having it are pretty big.

For example, Argo CD will delete the old Job and create a new one, but without a unique name, tools like Datadog overwrite the old Job with the new one, so you lose the ability to see the history of the Job’s runs and how long each one took. With the same Job name, the best you can do is get container-level stats like logs, which don’t contain details about the Job itself.

Having generateName supported would allow each job to be individually tracked and persisted since they would each have their own name.

This issue should be reopened unless it has been solved and the docs don’t show it.

Similar issues: #627, #586