pipeline: taskrun pod resource.request handling is not flexible enough for customization

Expected Behavior

If I set resources.requests on each Tekton task step, I expect the generated pod to keep my original resources.requests values.

Actual Behavior

The Tekton pkg/pod logic overrides the container resources.requests values, as described in this doc: https://github.com/tektoncd/pipeline/blob/master/docs/taskruns.md#specifying-limitrange-values

This is a problem, because I would like to control resources.requests the same way I control resources.limits for my taskrun steps/containers.

Steps to Reproduce the Problem

  1. From the Task API definition, you can see that a task step embeds corev1.Container: https://github.com/tektoncd/pipeline/blob/master/pkg/apis/pipeline/v1beta1/task_types.go#L119
type Step struct {
	corev1.Container `json:",inline"`

	// If Script is not empty, the Step cannot have a Command or Args.
	Script string `json:"script,omitempty"`
}

This means I can define resources.limits and resources.requests for each step myself: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L2260

type ResourceRequirements struct {
	Limits ResourceList `json:"limits,omitempty" protobuf:"bytes,1,rep,name=limits,casttype=ResourceList,castkey=ResourceName"`
	Requests ResourceList `json:"requests,omitempty" protobuf:"bytes,2,rep,name=requests,casttype=ResourceList,castkey=ResourceName"`
}
  1. So I created a TaskRun where I want the first step/container to use 1 CPU/1Gi memory and the second to use 2 CPU/2Gi memory. I don’t want Tekton to do any intelligent adjustment for me; I just want my original resource settings kept:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: helloworld-
spec:
  taskSpec:
    steps:
    - name: helloworld1
      image: ubuntu
      resources:
        limits:
          cpu: "1"
          memory: "1Gi"
        requests:
          cpu: "1"
          memory: "1Gi"        
      script: echo helloworld
    - name: helloworld2
      image: ubuntu
      resources:
        limits:
          cpu: "2"
          memory: "2Gi"
        requests:
          cpu: "2"
          memory: "2Gi"        
      script: echo helloworld
  1. But the final pod is not what I expected: the first container’s resources.requests are changed to 10m/128Mi based on the LimitRange settings:
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:                10m
      ephemeral-storage:  0
      memory:             128Mi

The second container’s requests are kept as before:

    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:                2
      ephemeral-storage:  0
      memory:             2Gi
  1. I know this works as designed, per the doc and code: https://github.com/tektoncd/pipeline/blob/master/docs/taskruns.md#specifying-limitrange-values

  2. But I think this is not correct: if I set resources.requests myself, that means I want that setting, and Tekton doesn’t need to refine it for me. In my case I want resources.limits to equal resources.requests, and the current logic breaks that requirement.

Additional Info

The code logic is here: https://github.com/tektoncd/pipeline/blob/master/pkg/pod/resource_request.go#L67-L77

Current logic is:

	for i := range containers {
		if containers[i].Resources.Requests == nil {
			containers[i].Resources.Requests = limitRangeMin
			continue
		}
		for _, resourceName := range resourceNames {
			if maxIndicesByResource[resourceName] != i {
				containers[i].Resources.Requests[resourceName] = limitRangeMin[resourceName]
			}
		}
	}

As the line `containers[i].Resources.Requests[resourceName] = limitRangeMin[resourceName]` shows, Tekton currently keeps resources.limits at their original values but overwrites resources.requests as above.
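To make the effect concrete, here is a minimal runnable sketch (simplified types, not the real Tekton code) of the rewrite logic quoted above, applied to the two-step example: for each resource, only the container holding the largest request keeps it, and every other container is reset to an assumed LimitRange minimum of 10m CPU / 128Mi memory.

```go
package main

import "fmt"

// rewriteRequests mimics the quoted Tekton logic on simplified types:
// per resource, find the container with the largest request (the analogue
// of maxIndicesByResource) and reset every other container's request to
// the LimitRange minimum. CPU is in millicores, memory in Mi.
func rewriteRequests(requests []map[string]int64, limitRangeMin map[string]int64) {
	maxIndex := map[string]int{}
	for name := range limitRangeMin {
		max := int64(-1)
		for i, r := range requests {
			if r[name] > max {
				max = r[name]
				maxIndex[name] = i
			}
		}
	}
	for i := range requests {
		for name := range limitRangeMin {
			if maxIndex[name] != i {
				requests[i][name] = limitRangeMin[name]
			}
		}
	}
}

func main() {
	// Step requests from the TaskRun above: 1 CPU/1Gi and 2 CPU/2Gi.
	requests := []map[string]int64{
		{"cpu": 1000, "memory": 1024},
		{"cpu": 2000, "memory": 2048},
	}
	limitRangeMin := map[string]int64{"cpu": 10, "memory": 128}
	rewriteRequests(requests, limitRangeMin)
	fmt.Println(requests[0]) // map[cpu:10 memory:128] — reset, as in the reported pod
	fmt.Println(requests[1]) // map[cpu:2000 memory:2048] — the max keeps its values
}
```

This reproduces the reported behavior: the first step is squashed to 10m/128Mi while the second keeps 2 CPU/2Gi.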

My point is:

  • If users want Tekton to control resources.requests for a container, they simply don’t set resources.requests in the step.
  • If end users want to control it themselves and containers[i].Resources.Requests != nil, Tekton should skip this logic and keep their ORIGINAL configuration.

And just keep this:

	for i := range containers {
		if containers[i].Resources.Requests == nil {
			containers[i].Resources.Requests = limitRangeMin
		}
	}
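A runnable sketch of what this proposal means, using the same simplified types as before (not the real Tekton code): defaults are applied only when the user left requests unset, and user-supplied values pass through untouched.

```go
package main

import "fmt"

// applyDefaults fills in requests only for containers whose requests are
// unset (nil), leaving any user-supplied values exactly as written.
// Simplified sketch of the proposed behavior, not the Tekton implementation.
func applyDefaults(requests []map[string]int64, limitRangeMin map[string]int64) {
	for i := range requests {
		if requests[i] == nil {
			requests[i] = limitRangeMin
		}
	}
}

func main() {
	requests := []map[string]int64{
		nil,                           // user left this step's requests unset
		{"cpu": 2000, "memory": 2048}, // user-supplied values
	}
	applyDefaults(requests, map[string]int64{"cpu": 10, "memory": 128})
	fmt.Println(requests[0]) // defaulted to the LimitRange minimum
	fmt.Println(requests[1]) // kept exactly as the user wrote it
}
```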

Please let me know if there is any other consideration.

Thanks!

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 4
  • Comments: 21 (15 by maintainers)

Most upvoted comments

It would potentially be helpful to convert the slides to a doc. I think the visuals and the brevity of it are really helpful.

Do you mean that memory, ephemeral-disk, and other resources behave the same as the CPU request example in your slides?

I mean that the semantics of how K8s translates the requests of all the containers in a Pod into the request for that Pod as a whole when scheduling that Pod is (as far as I know) the same whether you’re talking about CPUs, or memory, or ephemeral-disk, or GPUs.

In the example, those 9 CPUs could easily have been 9 GiB of memory, or 9 GiB of disk, or 9 GPUs.

So what I understood is that the Pod request is max(initContainers) OR sum(containers)?

Yes! You’re absolutely right.

The request is actually max(max(initContainers), sum(containers)), not the sum. I’ll update the slides accordingly. Thanks for catching that!
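The corrected rule can be sketched as a small function. This is an illustration of the Kubernetes scheduling semantics discussed above, for a single resource (values in millicores here, but the same holds for memory, disk, or GPUs): init containers run sequentially, so only the largest one matters, while regular containers run concurrently, so their requests add up.

```go
package main

import "fmt"

// effectiveRequest computes the pod-level scheduling request for one
// resource: max(max(initContainers), sum(containers)).
func effectiveRequest(initContainers, containers []int64) int64 {
	var maxInit, sum int64
	for _, r := range initContainers {
		if r > maxInit {
			maxInit = r // init containers run one at a time
		}
	}
	for _, r := range containers {
		sum += r // regular containers run concurrently
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}

func main() {
	// Steps requesting 1 and 2 CPUs, with a 4-CPU init container:
	// the init phase dominates, so the pod requests 4 CPUs.
	fmt.Println(effectiveRequest([]int64{4000}, []int64{1000, 2000})) // 4000
	// With only a 1-CPU init container, the running phase dominates.
	fmt.Println(effectiveRequest([]int64{1000}, []int64{1000, 2000})) // 3000
}
```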

I wrote a brief presentation to hopefully illustrate why Tekton modifies requests the way it does. I’ve shared it with tekton-dev@

https://docs.google.com/presentation/d/1-FNMMbRuxckAInO2aJPtzuINqV-9pr-M25OXQoGbV0s/edit