kubectl: Kubectl should not validate whether there's an active gcloud config
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): Google search for the exact error message. It turned up #23496 (already closed), but I think this is a slightly different use case. It also turned up a few issues related to spurious errors.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: GKE
- OS (e.g. from /etc/os-release): Debian 9
- Kernel (e.g. `uname -a`): Linux b4b75670c524 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux
- Install tools:
- Others:
What happened (slightly simplified to avoid using variables in shell scripts):
gcloud -q --configuration "some-project" --project "some-project" container clusters get-credentials "somecluster" --zone "us-central1-a"
gcloud config -q configurations describe "some-project" > /dev/null 2>&1 || gcloud config -q configurations create "some-project" --no-activate
gcloud --configuration="some-project" auth activate-service-account --key-file "/some/file.json"
[this generates a valid Kube config in `~/.kube/config`]
kubectl --context "gke_some-project_us-central1-a_somecluster" --cluster "gke_some-project_us-central1-a_somecluster" delete -f kubernetes.yml
What you expected to happen:
`kubectl` would run the requested operation (in the example above, `delete`). IMHO, I should not need to have an active `gcloud` configuration to run `kubectl`. In the past, use of these two tools was more or less completely decoupled; the behavior in 1.13 seems a little too “magic” to me.
How to reproduce it (as minimally and precisely as possible):
Run any `kubectl` command against a GKE cluster without an active `gcloud` config and without `CLOUDSDK_CONFIG` or `CLOUDSDK_ACTIVE_CONFIG_NAME` set; a minimal sketch follows.
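A minimal sketch of the repro, assuming no gcloud configuration is currently active and reusing the context name from the example above:

```sh
# Make sure neither override variable is set, so gcloud falls back to
# looking up the (inactive) default configuration.
unset CLOUDSDK_CONFIG CLOUDSDK_ACTIVE_CONFIG_NAME

# Any kubectl command against the GKE context now fails, because the gcp
# auth-provider shells out to gcloud, which has no active configuration.
kubectl --context "gke_some-project_us-central1-a_somecluster" get pods
```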
Anything else we need to know: See also https://stackoverflow.com/questions/52704015/unable-to-access-kubernetes-cluster-using-the-go-client-when-cloudsdk-config-is
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 21 (10 by maintainers)
I guess this is not a kubectl bug. The command above generated a kubeconfig that uses the `gcloud` command for auth by default, i.e. it generated a kubeconfig with `users` like this (sketched below).
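The original comment showed the generated `users` stanza here; a representative sketch, where the `cmd-path` is machine-specific and the exact fields may vary by gcloud version:

```yaml
users:
- name: gke_some-project_us-central1-a_somecluster
  user:
    auth-provider:
      name: gcp
      config:
        # kubectl obtains a token by shelling out to gcloud:
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud   # machine-specific
        cmd-args: config config-helper --format=json
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
```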
Because your gcloud config is not active, `gcloud` cannot do auth with those commands. Just specifying the configuration, by adding `--configuration "some-project"` to the `cmd-args`, will solve your issue (see the sketch below).
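For example, assuming the stanza sketched above, the patched line might look like this (`--configuration` is a global gcloud flag, so its position among the args should not matter):

```yaml
        # Pin the gcloud invocation to an explicit, not-necessarily-active configuration:
        cmd-args: config config-helper --format=json --configuration=some-project
```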
Or, if you don't need any gcloud configuration at all, just set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to your `service-account.json` file and remove the `user.auth-provider.config` section (without a `cmd-path`, the gcp auth-provider falls back to Application Default Credentials).
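A sketch of that alternative, reusing the key-file path from the report; the kubeconfig edit is described in a comment rather than scripted:

```sh
# Point Application Default Credentials at the service-account key:
export GOOGLE_APPLICATION_CREDENTIALS="/some/file.json"

# Then edit ~/.kube/config and delete the user.auth-provider.config block
# for the GKE user entry, leaving only `name: gcp`. After that:
kubectl --context "gke_some-project_us-central1-a_somecluster" delete -f kubernetes.yml
```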