actions-runner-controller: webhook server: "cache contained , which is not an Object" when handling check_run event

Describe the bug When receiving a check_run event, the webhook server logs the following error:

ERROR controllers.Runner handling check_run event {"event": "check_run", "hookID": "xxxxxxxxx", "delivery": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "checkRun.status": "completed", "action": "completed", "error": "cache contained <nil>, which is not an Object"}

Checks

  • My actions-runner-controller version (v0.19.0) does support the feature
  • I’m using an unreleased version of the controller I built from HEAD of the default branch

To Reproduce Steps to reproduce the behavior:

  1. Follow the install guide
  2. Configure the Helm chart to start the Webhook server and expose it (see the values sketch after this list)
  3. Create a webhook in GitHub to listen for check_run events
  4. Run a workflow to trigger an event
  5. Observe the error in the Webhook server’s logs
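
For step 2, the relevant chart values look roughly like the sketch below — exact key names should be verified against the chart's values.yaml for 0.12.7, and the secret value is a placeholder:

githubWebhookServer:
  enabled: true                                     # start the webhook server alongside the controller
  secret:
    create: true
    github_webhook_secret_token: "<webhook-secret>" # same secret as configured on the GitHub webhook
# the server's Service is then exposed to GitHub via an Ingress / LoadBalancer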

Expected behavior I’ve configured a HorizontalRunnerAutoscaler to scale on check_run events, so a new runner should be spun up when one arrives.

Environment (please complete the following information):

  • Controller Version: 0.19.0
  • Deployment Method: Helm
  • Helm Chart Version: 0.12.7

Additional context I’ve authenticated via a GitHub App with the following permissions (see the authSecret sketch after this list for how the credentials are supplied to the chart):

Actions (read)
Checks (read)
Metadata (read)
Self-hosted runners (read / write)
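
In case it’s relevant, the App credentials are wired into the chart roughly like this — a sketch of the chart’s authSecret values; key names should be double-checked against the chart’s values.yaml, and the IDs/key below are placeholders:

authSecret:
  create: true
  github_app_id: "123456"                  # placeholder
  github_app_installation_id: "12345678"   # placeholder
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----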

This is the runner configuration I’m using:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: senteca-runner
spec:
  template:
    spec:
      organization: senteca
      dockerdWithinRunnerContainer: true   # run dockerd inside the runner container rather than as a sidecar
      env:
        - name: STARTUP_DELAY_IN_SECONDS   # delay runner startup by 10 seconds
          value: "10"
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: senteca-runner-autoscaler
spec:
  scaleTargetRef:
    name: senteca-runner                   # the RunnerDeployment above
  scaleDownDelaySecondsAfterScaleOut: 60   # hold off scale-down for 60s after a scale-out
  minReplicas: 0
  maxReplicas: 5
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]                 # only check_run events with action "created"...
        status: "queued"                   # ...and status "queued" trigger a scale-up
    amount: 1                              # add one runner per matching event
    duration: "5m"                         # the added capacity expires after 5 minutes

This is what my GitHub webhook configuration looks like: (screenshot omitted)
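
Since the screenshot doesn’t come through in text form, the settings amount to roughly the following payload to the GitHub REST API’s POST /orgs/{org}/hooks endpoint — the URL and secret are placeholders, and the content type is an assumption (the real webhook was created in the UI):

{
  "name": "web",
  "active": true,
  "events": ["check_run"],
  "config": {
    "url": "https://<webhook-server-host>/",
    "content_type": "json",
    "secret": "<webhook-secret>"
  }
}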

Any hints as to what may be wrong on my end are welcome. Thanks in advance :^)

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 5
  • Comments: 24 (2 by maintainers)

Most upvoted comments

In my case I had to delete the github-webhook-server pod. After a restart it started working.

Same for us.

cache contained <nil> occurred when deleting a RunnerDeployment + HRA and recreating them in a different namespace or under a different name. Chart: 0.14.0, app: 0.20.2.
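
Roughly the sequence that triggered it for us — resource names, namespaces, and manifest files below are placeholders:

# delete the existing RunnerDeployment + HRA pair...
kubectl -n runners-old delete horizontalrunnerautoscaler my-runner-autoscaler
kubectl -n runners-old delete runnerdeployment my-runner
# ...then recreate them in a different namespace (or under a different name)
kubectl -n runners-new apply -f runnerdeployment.yaml -f hra.yaml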

I’ve just seen it reappear, and it was straight after deleting a runnerdeployment and its matching HRA. Perhaps it’s to do with deleting an object and the webhook service not handling it properly?

also started seeing this with chart v0.13.0 and controller v0.20.0:

2021-09-22T21:49:08.221Z	INFO	controllers.Runner	finding organizational runner	{"event": "workflow_job", "hookID": "290639658", "delivery": "e98f45d0-1bee-11ec-8b9e-dbe85cf41894", "workflowJob.status": "queued", "workflowJob.labels": ["feature/generic", "size/nano", "env/infra-mgmt", "group/default"], "repository.name": "gitops-qa", "repository.owner.login": "<my-org>", "repository.owner.type": "Organization", "action": "queued", "organization": "<my-org>"}
2021-09-22T21:49:08.221Z	ERROR	controllers.Runner	handling check_run event	{"event": "workflow_job", "hookID": "290639658", "delivery": "e98f45d0-1bee-11ec-8b9e-dbe85cf41894", "workflowJob.status": "queued", "workflowJob.labels": ["feature/generic", "size/nano", "env/infra-mgmt", "group/default"], "repository.name": "gitops-qa", "repository.owner.login": "<my-org>", "repository.owner.type": "Organization", "action": "queued", "error": "cache contained <nil>, which is not an Object"}
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2046
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2424
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2878
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1929

Restarting the pods solved it for the time being.
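
For anyone else hitting this, the restart amounted to something like the command below — the namespace and deployment name depend on your Helm release, so treat them as placeholders:

kubectl -n actions-runner-system rollout restart \
  deployment/actions-runner-controller-github-webhook-server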