cloud-sql-proxy: Error when attempting to deploy with Workload Identity

Bug Description

Following the latest instructions here, I am left with a pod that fails to start with the following log output:

2020/06/22 21:13:09 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2020/06/22 21:13:09 errors parsing config:
	googleapi: Error 401: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
More details:
Reason: authError, Message: Invalid Credentials

Example code (or command)

Deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: api
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_HOST
          valueFrom:
            configMapKeyRef:
              name: postgres-connection-info
              key: host
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-connection-info
              key: database
        - name: REDIS_URL
          valueFrom:
            configMapKeyRef:
              name: redis-connection-info
              key: url
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-connection-info
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-connection-info
              key: password
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
          - "/cloud_sql_proxy"

          # If connecting from a VPC-native GKE cluster, you can use the
          # following flag to have the proxy connect over private IP
          # - "-ip_address_types=PRIVATE"

          # Replace DB_PORT with the port the proxy should listen on
          # Defaults: MySQL: 3306, Postgres: 5432, SQLServer: 1433
          - "-instances=keybox-281016:us-central1:development=tcp:5432"
        # securityContext:
        #   # The default Cloud SQL proxy image runs as the
        #   # "nonroot" user and group (uid: 65532) by default.
        #   runAsNonRoot: true

How to reproduce

  1. Follow the instructions mentioned above

Environment

  1. OS type and version: Mac OS 10.15.4
  2. Cloud SQL Proxy version (./cloud_sql_proxy -version): 1.17

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 20 (7 by maintainers)

Most upvoted comments

Got the same issue here. Workload Identity works in my Kubernetes Config Connector setup, but I get the same 401 error for the Cloud SQL Proxy container.

2020/07/02 06:21:11 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2020/07/02 06:21:12 errors parsing config:
	googleapi: Error 401: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
More details:
Reason: authError, Message: Invalid Credentials

Thanks for posting this @thebigredgeek. I had a running cluster that I had set up maybe two weeks ago with Workload Identity for the Cloud SQL Proxy, but I tore it down yesterday to test some changes in our provisioning scripts and I've been experiencing the same issue since. I also had trouble initially, but it finally worked when I added this IAM policy binding. However, maybe that was a fluke. In any case, that "workaround" is no longer working:

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --role roles/cloudsql.client \
  --member "serviceAccount:${PROJECT_ID}@${PROJECT_ID}.iam.gserviceaccount.com"
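The roles/cloudsql.client binding above only grants the Google service account access to Cloud SQL; for Workload Identity, the Google service account must also carry a roles/iam.workloadIdentityUser binding whose member is the Kubernetes service account inside the cluster's identity pool. A sketch of that binding follows, with hypothetical project, namespace, and service account names (substitute your own; the gcloud call is commented out because it requires an authenticated session):

```shell
# Hypothetical names -- replace with your own project, namespace, KSA, and GSA.
PROJECT_ID="my-project"
NAMESPACE="default"
KSA_NAME="api-ksa"
GSA_EMAIL="proxy-sa@${PROJECT_ID}.iam.gserviceaccount.com"

# Workload Identity member format: the KSA within the cluster's identity pool.
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "${MEMBER}"

# Allow the KSA to impersonate the GSA (requires gcloud authentication):
# gcloud iam service-accounts add-iam-policy-binding "${GSA_EMAIL}" \
#   --role roles/iam.workloadIdentityUser \
#   --member "${MEMBER}"
```

Without this binding, token requests from the pod are rejected, which can surface as exactly the 401 "Invalid Credentials" error above.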

The Kubernetes Service Account that was bound to the Google Service Account isn't the default Kubernetes Service Account

@NicolasFruyAdeo Can you clarify what you mean by this? Did you simply miss adding the service account to your k8s object, like this:

    spec:
      serviceAccountName: <YOUR-KSA-NAME>

I noticed that while the example in this project does include a snippet of this under step 5, it's missing from the Cloud Docs version. Would this have helped you notice this step?
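For completeness, the Kubernetes side of the wiring is the KSA itself, which must be annotated with the Google service account it impersonates; the serviceAccountName in the pod spec then points at that KSA. A minimal sketch, assuming the same hypothetical names as above (api-ksa, proxy-sa, my-project):

```yaml
# Hypothetical names -- the iam.gke.io/gcp-service-account annotation links
# the KSA to the GSA; the Deployment's pod spec must set
# serviceAccountName: api-ksa for the proxy container to pick it up.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-ksa
  annotations:
    iam.gke.io/gcp-service-account: proxy-sa@my-project.iam.gserviceaccount.com
```

If either the annotation or the serviceAccountName is missing, the pod falls back to the default node identity and the proxy's token exchange fails.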