kubectl: Kubectl should not validate whether there's an active gcloud config

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): Google search for the exact error message. It turned up #23496 (already closed), but I think this is a slightly different use case. It also turned up some issues related to spurious errors.


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): Debian 9
  • Kernel (e.g. uname -a): Linux b4b75670c524 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux
  • Install tools:
  • Others:

What happened: (slightly simplified to avoid using variables in shell scripts)

gcloud -q --configuration "some-project" --project "some-project" container clusters get-credentials "somecluster" --zone "us-central1-a"
gcloud config -q configurations describe "some-project" > /dev/null 2>&1 || gcloud config -q configurations create "some-project" --no-activate
gcloud --configuration="some-project" auth activate-service-account --key-file "/some/file.json"
[this generates a valid Kube config in ~/.kube/config]
kubectl --context "gke_some-project_us-central1-a_somecluster" --cluster "gke_some-project_us-central1-a_somecluster" delete -f kubernetes.yml

What you expected to happen: kubectl would run the requested operation (in the example above, delete). IMHO, I should not need to have an active gcloud configuration to run kubectl. In the past, use of these two tools was more or less completely decoupled; the behavior in 1.13 seems a little too “magic” to me.

How to reproduce it (as minimally and precisely as possible): Run any kubectl command against a GKE cluster without an active gcloud config and without CLOUDSDK_CONFIG or CLOUDSDK_ACTIVE_CONFIG_NAME set.
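A hedged sketch of the reproduction (assuming credentials were fetched as in "What happened" above and that no gcloud configuration was ever activated; the context name and use of get pods are illustrative):

# no configuration selected via the environment, and none activated in gcloud itself
unset CLOUDSDK_CONFIG CLOUDSDK_ACTIVE_CONFIG_NAME
# any kubectl command triggers the gcp auth provider, which shells out to
# "gcloud config config-helper" and fails with the error shown in the comment below
kubectl --context "gke_some-project_us-central1-a_somecluster" get pods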

Anything else we need to know: See also https://stackoverflow.com/questions/52704015/unable-to-access-kubernetes-cluster-using-the-go-client-when-cloudsdk-config-is

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 21 (10 by maintainers)

Most upvoted comments

I guess this is not a kubectl bug.

gcloud -q --configuration "some-project" --project "some-project" container clusters get-credentials "somecluster" --zone "us-central1-a"

The command above generates a kubeconfig that uses the gcloud command for auth by default. The generated kubeconfig contains a users entry like this:

users:
- name: some-user
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Because your gcloud config is not active, gcloud cannot authenticate with that command:

gcloud config config-helper --format=json
ERROR: (gcloud.config.config-helper) You do not currently have an active account selected.
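
For comparison, passing the configuration explicitly should let the same helper succeed, since --configuration is a global gcloud flag (a sketch using the configuration created in the report):

gcloud --configuration="some-project" config config-helper --format=json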

Just specifying the configuration by adding --configuration "some-project" to the cmd-args will solve your issue.
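
A possible way to apply that change (a sketch, assuming the user name from the generated kubeconfig above; you could also edit cmd-args directly in ~/.kube/config):

# replace cmd-args so the config helper runs against the named configuration
kubectl config set-credentials "some-user" \
  --auth-provider=gcp \
  --auth-provider-arg='cmd-args=config config-helper --format=json --configuration=some-project'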

Or, if you don't need any gcloud configuration, just set the GOOGLE_APPLICATION_CREDENTIALS environment variable to your service-account.json file and remove the user.auth-provider.config:

kubectl config set-credentials "some-user" \
  --auth-provider=gcp \
  --auth-provider-arg=cmd-path- \
  --auth-provider-arg=cmd-args- \
  --auth-provider-arg=expiry-key- \
  --auth-provider-arg=token-key-
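
A hedged usage sketch of that alternative (the key file path and context name are taken from the original report):

# with auth-provider.config cleared, the gcp auth provider falls back to
# Application Default Credentials, which honor this variable
export GOOGLE_APPLICATION_CREDENTIALS="/some/file.json"
kubectl --context "gke_some-project_us-central1-a_somecluster" get pods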