sealed-secrets: Getting public-cert is failing with an interactive prompt

I’m using a KUBECONFIG file pointing at DigitalOcean Kubernetes and I’m getting this odd behaviour.

# Wait for the controller to come up
kubectl rollout status deploy -n kube-system ofc-sealedsecrets-sealed-secrets

./kubeseal --fetch-cert --controller-name=ofc-sealedsecrets-sealed-secrets

It prints this text, then blocks indefinitely:

Please enter Username: 

I am not sure what’s going wrong. Has there been a recent change to the binary release, or could it be a problem with the latest k8s version?

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Thanks in advance for your assistance

Most upvoted comments

I also ran into this issue trying to get everything set up on DigitalOcean. After hammering at it for the evening last night, I may have figured out what’s going on. When creating a cluster via the command line, DigitalOcean saves the created cluster’s config automatically. This can also be done manually for existing clusters by running doctl kubernetes cluster kubeconfig save $CLUSTER_ID.

The problem seems to lie in the format in which DigitalOcean saves the config when it is written via the command line. Comparing the ~/.kube/config downloaded via the command line with the one downloaded via the DigitalOcean control panel shows a few differences.

Here’s an example of a config file retrieved via the command line:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REMOVED]
    server: [REMOVED]
  name: do-nyc1-lordran-dev-2
contexts:
- context:
    cluster: do-nyc1-lordran-dev-2
    user: do-nyc1-lordran-dev-2-admin
  name: do-nyc1-lordran-dev-2
current-context: do-nyc1-lordran-dev-2
kind: Config
preferences: {}
users:
- name: do-nyc1-lordran-dev-2-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - [REFERENCE_TO_SERVER_REMOVED]
      command: doctl
      env: null

Here’s a copy that was downloaded via the DigitalOcean control panel:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [REMOVED]
    server: [REMOVED]
  name: do-nyc1-lordran-dev-2
contexts:
- context:
    cluster: do-nyc1-lordran-dev-2
    user: do-nyc1-lordran-dev-2-admin
  name: do-nyc1-lordran-dev-2
current-context: do-nyc1-lordran-dev-2
kind: Config
preferences: {}
users:
- name: do-nyc1-lordran-dev-2-admin
  user:
    client-certificate-data: [REMOVED]
    client-key-data: [REMOVED]

The config file downloaded via the control panel has the client-certificate-data and client-key-data nested under the users key. It seems as though kubeseal is using these credentials to access the cluster. When I swapped my config file to use this copy, I was able to get through to the cluster without being prompted for a username or password.

Hope this helps!
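
In case it saves someone else an evening, here is a minimal sketch of that workaround using doctl instead of the control panel, assuming (per the next comment) that `kubeconfig show` emits the cert-based variant; the cluster name is just a placeholder taken from the example config:

# Dump a cert-based kubeconfig to its own file instead of letting
# `kubeconfig save` merge the exec-based one into ~/.kube/config.
doctl kubernetes cluster kubeconfig show lordran-dev-2 > ~/.kube/do-cert-config

# Point kubectl/kubeseal at that file and fetch the sealing certificate.
export KUBECONFIG=~/.kube/do-cert-config
./kubeseal --fetch-cert --controller-name=ofc-sealedsecrets-sealed-secrets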

Yeah, I didn’t use `doctl kubernetes cluster kubeconfig save` to save the config.

Turns out that `doctl kubernetes cluster kubeconfig save` and `doctl kubernetes cluster kubeconfig show` generate different configs, the former using the exec mechanism; this way it leverages the doctl access token (which I assume doesn’t expire the way the TLS client certificate does).

I assume DO opted for this (a bit surprising) behaviour on the grounds that `show` (or the downloaded YAML) might be manually merged in environments that lack the doctl tool.
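
If you are not sure which variant a given kubeconfig contains, one quick way to check (assuming a stock kubectl) is to print the user entry of the active context:

# An exec: block here means doctl token auth (the variant that trips up kubeseal v0.7.0);
# client-certificate-data / client-key-data means embedded TLS certs.
kubectl config view --minify -o jsonpath='{.users[0].user}'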


I can now reproduce the issue with v0.7.0. With HEAD, which uses a more recent k8s client library, the error message is a bit more informative:

exec plugin: invalid apiVersion "client.authentication.k8s.io/v1beta1"

I just deployed a DigitalOcean Kubernetes cluster (v1.14.3), installed sealed-secrets using the helm chart (helm install --namespace kube-system --name ofc-sealedsecrets stable/sealed-secrets)

and:

$ ./kubeseal --fetch-cert --controller-name=ofc-sealedsecrets-sealed-secrets
-----BEGIN CERTIFICATE-----
MIIErjCCApagAwIBAgIRAMhYPyO/MGH4+WSZWMDebcswDQYJKoZIhvcNAQELB...

I used this binary: https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.7.0/kubeseal-darwin-amd64
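
In case it helps others: once --fetch-cert succeeds you can keep the public certificate around and seal secrets offline against it. A small sketch, where mysecret.yaml is a placeholder Secret manifest:

# Save the controller's public cert once...
./kubeseal --fetch-cert --controller-name=ofc-sealedsecrets-sealed-secrets > pub-cert.pem

# ...then seal against it without talking to the cluster (output is JSON by default).
./kubeseal --cert pub-cert.pem < mysecret.yaml > mysealedsecret.json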