skaffold: GOOGLE_APPLICATION_CREDENTIALS in GKE Tekton pipeline: open /secret/kaniko-secret: no such file or directory

Expected behavior

Per the example, when running Skaffold in a Tekton pipeline on a GKE cluster, the GOOGLE_APPLICATION_CREDENTIALS environment variable needs to point at a mounted kaniko secret.

Either running “skaffold generate-pipeline” or using the config from the examples folder should result in the secret being mounted at the path referenced by GOOGLE_APPLICATION_CREDENTIALS.
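For context, a secret like this is usually created from a GCP service-account key. The key-file path below is a placeholder, and whatever key name is passed to --from-file becomes the file name that appears under the mount path:

kubectl create secret generic kaniko-secret \
  --namespace=<MY NAMESPACE> \
  --from-file=kaniko-secret.json=/path/to/service-account-key.json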

Actual behavior

The image gcr.io/k8s-skaffold/skaffold (tested v1.4.0 and v1.5.0) can’t find the secret file referenced by GOOGLE_APPLICATION_CREDENTIALS. Here’s the error message:

Creating kaniko secret [<MY NAMESPACE>/kaniko-secret]...
Building [gcr.io/XXX/redisslave]...
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "gcr.io/<MY PROJECT>/<MY IMG>:7ef7eb8": creating push check transport for gcr.io failed: Get https://gcr.io/v2/token?scope=repository%3A<MYPROJECT>%2F<MY IMG>%3Apush%2Cpull&service=gcr.io: invoking docker-credential-gcr: exit status 1; output: docker-credential-gcr/helper: could not retrieve GCR's access token: google: error getting credentials using GOOGLE_APPLICATION_CREDENTIALS environment variable: open /secret/kaniko-secret: no such file or directory
time="2020-03-13T21:27:24Z" level=fatal msg="build failed: building [gcr.io/yuwenma-gke-playground/redisslave]: waiting for pod to complete: condition error: pod already in terminal phase: Failed"

Information

  • Skaffold version: v1.4.0 (also reproduced with v1.5.0)
  • Operating system: OSX/Linux
  • Contents of skaffold.yaml:
apiVersion: skaffold/v2alpha4
kind: Config
metadata:
  name: ci-test-
build:
  artifacts:
  - image: gcr.io/k8s-skaffold/skaffold
profiles:
- name: oncluster
  build:
    artifacts:
    - image: gcr.io/<MY PROJECT>/<My IMG>
      context: .
      kaniko: {}
    tagPolicy:
      gitCommit: {}
    cluster:
      pullSecretName: kaniko-secret
      namespace: <MY NAMESPACE>
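As a sanity check on how this profile resolves, skaffold diagnose should print the effective configuration (assuming it accepts the same --filename/--profile flags the build step below uses):

skaffold diagnose --filename /workspace/workspace/skaffold.yaml --profile oncluster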

Here’s my Tekton Task config:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: skaffold-build-and-push
  namespace: <MY NAMESPACE>
spec:
  inputs:
    resources:
    - name: workspace
      type: git
    params:
    - name: pathToSkaffold
      default: /workspace/workspace/skaffold.yaml # my input resource is located in /workspace/workspace
  outputs:
    resources:
    - name: builtImage
      type: image
  steps:
  - name: run-skaffold-build
    image: gcr.io/k8s-skaffold/skaffold:v1.5.0 # also tested v1.4.0
    command:
    - skaffold
    - build
    args:
    - --filename
    - $(inputs.params.pathToSkaffold) # location of my skaffold.yaml 
    - --profile
    - oncluster 
    - --file-output
    - build.out
    workingDir: /workspace/workspace
    resources: {}
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secret/kaniko-secret  # tried both kaniko-secret and kaniko-secret.json (the file name contained in the k8s kaniko-secret). The skaffold/examples folder seems to have both formats.
    volumeMounts:
    - name: kaniko-secret
      mountPath: /secret
  volumes:
  - name: kaniko-secret
    secret:
      secretName: kaniko-secret
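To rule out the volume mount itself, a throwaway debug step (hypothetical, not part of my actual Task) could be added before run-skaffold-build to list what actually lands in /secret:

- name: debug-list-secret   # hypothetical debug step, remove once verified
  image: busybox
  command:
  - ls
  - -la
  - /secret
  volumeMounts:
  - name: kaniko-secret
    mountPath: /secret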

Additional Information

The kaniko-secret itself is correct: its service account has the expected permissions to push images to my GCR registry, and I’ve verified the secret by using kaniko directly in a Tekton Task config.


steps:
- name: build-and-push
  image: gcr.io/kaniko-project/executor:v0.15.0
  command:
  - /kaniko/executor
  args:
  - --dockerfile=$(inputs.params.pathToDockerFile)
  - --destination=$(outputs.resources.builtImage.url)
  - --context=$(inputs.params.pathToContext)
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /secret/kaniko-secret.json
  volumeMounts:
  - name: kaniko-secret
    mountPath: /secret
volumes:
- name: kaniko-secret
  secret:
    secretName: kaniko-secret 
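A quick way to confirm which key name the secret actually carries (and therefore which file name shows up under /secret, i.e. what GOOGLE_APPLICATION_CREDENTIALS should point to) is to describe it; the data keys are listed without exposing their contents:

kubectl describe secret kaniko-secret --namespace=<MY NAMESPACE>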

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 16 (9 by maintainers)

Most upvoted comments

Something is definitely broken here, but we haven’t yet had the time to reproduce the issue.

Lowering the priority due to current bandwidth.