sealed-secrets: Getting a "cannot fetch certificate" when working with kubeseal client

When trying to seal a secret with the kubeseal client as follows, kubeseal hangs:

kubeseal < mysecret.yml -o yaml

When I set a timeout with the --request-timeout option, I get a more detailed message:

E1114 14:56:18.638781    8199 round_trippers.go:174] CancelRequest not implemented by *oidc.roundTripper
E1114 14:56:18.639062    8199 request.go:858] Unexpected error when reading response body: net/http: request canceled (Client.Timeout exceeded while reading body)
error: cannot fetch certificate: Unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout exceeded while reading body)

Using it with an explicitly provided certificate works: kubeseal < mysecret.yml -o yaml --cert certfile.cert

What am I doing wrong?

Some details about my setup:

  • using release version 0.9.5 (client & controller)
  • accessing cert.pem works: port-forwarding the sealed-secrets-controller service and downloading the certificate from /v1/cert.pem succeeds
  • RBAC is enabled (but the used user has full permissions on the resources in the namespace)

Thanks for your help

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 42

Most upvoted comments

I was facing the same issue while trying to fetch the certificate. Here are the steps that worked for me.

  1. First, expose the sealed-secrets controller service locally: kubectl port-forward service/sealed-secrets-controller -n kube-system 8081:8080

  2. Call the endpoint: curl localhost:8081/v1/cert.pem (a full offline sealing sketch follows below)
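Once you have the certificate, sealing can be done entirely offline without contacting the cluster. A minimal sketch building on the two steps above (the file names mysecret.yml and mysealedsecret.yml are placeholders):

curl localhost:8081/v1/cert.pem > cert.pem
kubeseal --cert cert.pem -o yaml < mysecret.yml > mysealedsecret.yml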

In GKE (with a private cluster), check the firewall rules for port 8080: the control plane must be allowed to reach the controller's port on the nodes, otherwise requests proxied through the API server time out.

Terraform example:

  resource "google_compute_firewall" "kubeseal-http" {
   name    = "kubeseal-http"
   network = "projects/${var.project}/global/networks/default"
   project = var.project

    allow {
     protocol = "tcp"
     ports    = ["8080"]
   }

    source_ranges = ["${google_container_cluster.primary.private_cluster_config.0.master_ipv4_cidr_block}"]
 }

There is an issue with the Helm chart: you must use sealed-secrets-controller as the release name and nothing else. Otherwise you can get this error, because the controller service is created under the release name while clients still try to connect to sealed-secrets-controller.
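If renaming the release is not an option, kubeseal can also be pointed at the actual service explicitly via its --controller-name and --controller-namespace flags. A sketch, assuming the chart was installed as my-sealed-secrets in kube-system (substitute your own release name and namespace):

kubeseal --controller-name my-sealed-secrets --controller-namespace kube-system -o yaml < mysecret.yml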

I am seeing this problem using kubeseal 0.17.1 and controller 0.17.1. This is an on-premises deployment, so no cloud components.

(⎈ |supervisor:sealed-secrets)supervisor [main●] % \cat ss.json | kubeseal --controller-namespace sealed-secrets --controller-name sealed-secrets
cat: ss.json: No such file or directory
error: cannot fetch certificate: no endpoints available for service "http:sealed-secrets:"

The release name is sealed-secrets, and this has worked in the past. When I issue raw curl commands as described in this issue, the proxy URL works with sealed-secrets:http but not the other way around. kubeseal appears to be building the request the wrong way. How do I resolve this?

please try:

shell 1:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

shell 2:

$ curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem
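Adapted to the setup described above (namespace sealed-secrets, service sealed-secrets, port named http; these names are taken from the earlier comment and may differ in your installation):

$ curl http://127.0.0.1:8001/api/v1/namespaces/sealed-secrets/services/sealed-secrets:http/proxy/v1/cert.pem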

This issue might also be related. ArgoCD replaces sealed-secrets-controller with an app name in a Helm chart:
https://github.com/argoproj/argo-cd/issues/1066
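If that's the cause, one possible workaround (assuming your chart version exposes the standard fullnameOverride value; the repository alias and namespace below are placeholders) is to force the expected resource names at install time. ArgoCD users can set the same value in their Application's Helm values:

helm upgrade --install sealed-secrets sealed-secrets/sealed-secrets \
  --namespace kube-system \
  --set fullnameOverride=sealed-secrets-controller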

@tereschenkov

You might have been affected by #397. There is an open PR against the Helm chart (which contains a bug), but the maintainers of the Helm chart are currently unresponsive.