actions-runner-controller: Cannot attach an image to just the runner

Controller Version

0.6.1

Helm Chart Version

gha-runner-scale-set-controller-0.6.1

CertManager Version

v1.12.0

Deployment Method

Helm

cert-manager installation

Same installation method with Helm, just an older version

Checks

  • This isn’t a question or user support case (for Q&A and community support, go to Discussions; it might also be a good idea to contract with any of the contributors and maintainers if your business is critical and you need priority support)
  • I’ve read the release notes before submitting this issue and I’m sure it’s not due to any recently-introduced backward-incompatible changes
  • My actions-runner-controller version (v0.x.y) does support the feature
  • I’ve already upgraded ARC (including the CRDs, see charts/actions-runner-controller/docs/UPGRADING.md for details) to the latest and it didn’t fix the issue
  • I’ve migrated to the workflow job webhook event (if you are using webhook-driven scaling)

Resource Definitions

## githubConfigUrl is the GitHub url for where you want to configure runners
## ex: https://github.com/myorg/myrepo or https://github.com/myorg
githubConfigUrl: "https://github.com/Gallery-Media-Group"

## githubConfigSecret is the k8s secret to use when authenticating with the GitHub API.
## You can choose to use GitHub App or a PAT token
githubConfigSecret:
  ### GitHub Apps Configuration
  ## NOTE: IDs MUST be strings, use quotes
  #github_app_id: ""
  #github_app_installation_id: ""
  #github_app_private_key: |

  ### GitHub PAT Configuration
  github_token: ""
## If you have a pre-defined Kubernetes secret in the same namespace where the gha-runner-scale-set is going to be deployed,
## you can also reference it via `githubConfigSecret: pre-defined-secret`.
## You need to make sure your predefined secret has all the required secret data set properly.
##   For a pre-defined secret using GitHub PAT, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_token='ghp_your_pat'
##   For a pre-defined secret using GitHub App, the secret needs to be created like this:
##   > kubectl create secret generic pre-defined-secret --namespace=my_namespace --from-literal=github_app_id=123456 --from-literal=github_app_installation_id=654321 --from-literal=github_app_private_key='-----BEGIN CERTIFICATE-----*******'
# githubConfigSecret: pre-defined-secret

## proxy can be used to define proxy settings that will be used by the
## controller, the listener and the runner of this scale set.
#
# proxy:
#   http:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   https:
#     url: http://proxy.com:1234
#     credentialSecretRef: proxy-auth # a secret with `username` and `password` keys
#   noProxy:
#     - example.com
#     - example.org

# maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 10

# minRunners is the min number of runners the autoscaling runner set will scale down to.
minRunners: 1

# runnerGroup: "default"

## name of the runner scale set to create.  Defaults to the helm release name
# runnerScaleSetName: ""

## A self-signed CA certificate for communication with the GitHub server can be
## provided using a config map key selector. If `runnerMountPath` is set, for
## each runner pod ARC will:
## - create a `github-server-tls-cert` volume containing the certificate
##   specified in `certificateFrom`
## - mount that volume on path `runnerMountPath`/{certificate name}
## - set NODE_EXTRA_CA_CERTS environment variable to that same path
## - set RUNNER_UPDATE_CA_CERTS environment variable to "1" (as of version
##   2.303.0 this will instruct the runner to reload certificates on the host)
##
## If any of the above had already been set by the user in the runner pod
## template, ARC will observe those and not overwrite them.
## Example configuration:
#
# githubServerTLS:
#   certificateFrom:
#     configMapKeyRef:
#       name: config-map-name
#       key: ca.crt
#   runnerMountPath: /usr/local/share/ca-certificates/

## Container mode is an object that provides out-of-the-box configuration
## for dind and kubernetes mode. The template will be modified as documented under the
## template object.
##
## If any customization is required for dind or kubernetes mode, containerMode should remain
## empty, and configuration should be applied to the template.
containerMode:
  type: "dind"  ## type can be set to dind or kubernetes
#   ## the following is required when containerMode.type=kubernetes
#   kubernetesModeWorkVolumeClaim:
#     accessModes: ["ReadWriteOnce"]
#     # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
#     storageClassName: "dynamic-blob-storage"
#     resources:
#       requests:
#         storage: 1Gi

## template is the PodSpec for each listener Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
# listenerTemplate:
#   spec:
#     containers:
#     # Use this section to append additional configuration to the listener container.
#     # If you change the name of the container, the configuration will not be applied to the listener,
#     # and it will be treated as a side-car container.
#     - name: listener
#       securityContext:
#         runAsUser: 1000
#     # Use this section to add the configuration of a side-car container.
#     # Comment it out or remove it if you don't need it.
#     # Spec for this container will be applied as is without any modifications.
#     - name: side-car
#       image: example-sidecar

## template is the PodSpec for each runner Pod
## For reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec
template:
  # template.spec will be modified if you change the container mode
  # with containerMode.type=dind, we will populate the template.spec with the following pod spec
  template:
    spec:
      initContainers:
      - name: init-dind-externals
        image: ghcr.io/actions/actions-runner:latest
        command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
        volumeMounts:
          - name: dind-externals
            mountPath: /home/runner/tmpDir
      containers:
      - name: runner
        image: CUSTOM_IMAGE
        command: ["/home/runner/run.sh"]
        env:
          - name: DOCKER_HOST
            value: unix:///run/docker/docker.sock
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/docker
            readOnly: true
      - name: dind
        image: docker:dind
        args:
          - dockerd
          - --host=unix:///run/docker/docker.sock
          - --group=$(DOCKER_GROUP_GID)
        env:
          - name: DOCKER_GROUP_GID
            value: "123"
        securityContext:
          privileged: true
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
          - name: dind-sock
            mountPath: /run/docker
          - name: dind-externals
            mountPath: /home/runner/externals
      volumes:
      - name: work
        emptyDir: {}
      - name: dind-sock
        emptyDir: {}
      - name: dind-externals
        emptyDir: {}
  ######################################################################################################
  ## with containerMode.type=kubernetes, we will populate the template.spec with the following pod spec
  # template:
  #   spec:
  #     containers:
  #     - name: runner
  #       image: 303701770803.dkr.ecr.us-east-2.amazonaws.com/github:latest
  #       command: ["/home/runner/run.sh"]
  #       env:
  #         - name: ACTIONS_RUNNER_CONTAINER_HOOKS
  #           value: /home/runner/k8s/index.js
  #         - name: ACTIONS_RUNNER_POD_NAME
  #           valueFrom:
  #             fieldRef:
  #               fieldPath: metadata.name
  #         - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
  #           value: "true"
  #       volumeMounts:
  #         - name: work
  #           mountPath: /home/runner/_work
  #     volumes:
  #       - name: work
  #         ephemeral:
  #           volumeClaimTemplate:
  #             spec:
  #               accessModes: [ "ReadWriteOnce" ]
  #               storageClassName: "local-path"
  #               resources:
  #                 requests:
  #                   storage: 1Gi
  # spec:
  #   initContainers:
  #     - name: init-dind-externals
  #       image: ghcr.io/actions/actions-runner:latest
  #       command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
  #       volumeMounts:
  #         - name: dind-externals
  #           mountPath: /home/runner/tmpDir
  #   containers:
  #     - name: runner
  #       image: CUSTOM_IMAGE
  #       command: ["/home/runner/run.sh"]

## Optional controller service account that needs to have required Role and RoleBinding
## to operate this gha-runner-scale-set installation.
## The helm chart will try to find the controller deployment and its service account at installation time.
## In case the helm chart can't find the right service account, you can explicitly pass in the following value
## to help it finish RoleBinding with the right service account.
## Note: if your controller is installed to only watch a single namespace, you have to pass these values explicitly.
# controllerServiceAccount:
#   namespace: arc-system
#   name: test-arc-gha-runner-scale-set-controller

To Reproduce

I set the custom image on the runner container via the template, but it is ignored and the default image is used instead. I then set the image in the official pod spec example, which caused my image to also be applied to the init container, where it is not needed. I also attempted to copy the template down to the pod spec level, which caused errors with the controller.
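The chart's comments above note that the pod spec belongs directly under `template.spec`, and that `containerMode` should remain empty when the dind setup needs customizing; the values pasted above instead nest the pod spec under a second `template:` key, which the chart would not pick up. A minimal sketch of values that would typically pin the custom image to just the runner container, following the chart's own dind example (CUSTOM_IMAGE is a placeholder):

## Sketch only: containerMode left unset so the dind spec below is not overwritten.
template:
  spec:
    initContainers:
    - name: init-dind-externals
      image: ghcr.io/actions/actions-runner:latest   # init container keeps the stock runner image
      command: ["cp", "-r", "-v", "/home/runner/externals/.", "/home/runner/tmpDir/"]
      volumeMounts:
        - name: dind-externals
          mountPath: /home/runner/tmpDir
    containers:
    - name: runner
      image: CUSTOM_IMAGE                             # only this container uses the custom image
      command: ["/home/runner/run.sh"]
      env:
        - name: DOCKER_HOST
          value: unix:///run/docker/docker.sock
      volumeMounts:
        - name: work
          mountPath: /home/runner/_work
        - name: dind-sock
          mountPath: /run/docker
          readOnly: true
    - name: dind
      image: docker:dind
      args: ["dockerd", "--host=unix:///run/docker/docker.sock"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: work
          mountPath: /home/runner/_work
        - name: dind-sock
          mountPath: /run/docker
        - name: dind-externals
          mountPath: /home/runner/externals
    volumes:
    - name: work
      emptyDir: {}
    - name: dind-sock
      emptyDir: {}
    - name: dind-externals
      emptyDir: {}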

Describe the bug

Please see above

Describe the expected behavior

The custom image should not be applied to the init container; it should be specific to the runner container.
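One way to confirm which image each container in the runner pod actually received (runner vs. init vs. dind) is to inspect the pod spec directly; the pod name and namespace below are placeholders:

kubectl get pod RUNNER_POD_NAME -n RUNNER_NAMESPACE \
  -o jsonpath='{range .spec.initContainers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
kubectl get pod RUNNER_POD_NAME -n RUNNER_NAMESPACE \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'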

Whole Controller Logs

Cannot post (work)

Whole Runner Pod Logs

Cannot post (work). Also, depending on the scenario, the pods are started and then deleted because I am attempting to specify paths that have already been specified elsewhere.

Additional Context

No response

About this issue

  • Original URL
  • State: closed
  • Created 9 months ago
  • Comments: 15 (5 by maintainers)

Most upvoted comments

Hey,

I agree with @wherka-ama: this is not really an issue, but it can be difficult to understand how to configure ARC properly. Thank you so much for answering this issue, @wherka-ama!

I will close this issue now but @k-walsh-gmg, feel free to comment on it ☺️