sealed-secrets: Bug regarding "cannot fetch certificate: no endpoints available for service "http:sealed-secrets-controller:"

This is after reading #317, #397, and #368, which are all slightly related.

Description

Error message:

cannot fetch certificate: no endpoints available for service "http:sealed-secrets-controller:"

Repro Steps

If my sealed-secrets-controller service is configured with:

spec:
  clusterIP: 172.20.54.13
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080

it does NOT work

but if you remove the port name http, it works fine, i.e.

spec:
  clusterIP: 172.20.54.13
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
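The reason (hedged, but consistent with the behavior above): the apiserver's service-proxy path addresses a service as scheme:name:port-name, and kubeseal sends an empty port name, which only matches an unnamed service port. A minimal sketch of how that path is assembled, with the names from this issue:

```shell
# Build the apiserver service-proxy path kubeseal requests, from its
# scheme:service:port-name triple (names taken from this issue).
# kubeseal leaves the port name empty, which only matches an unnamed port.
build_proxy_path() {
  local scheme="$1" svc="$2" port_name="$3"
  printf '/api/v1/namespaces/kube-system/services/%s:%s:%s/proxy/v1/cert.pem\n' \
    "$scheme" "$svc" "$port_name"
}

build_proxy_path http sealed-secrets-controller ""    # what kubeseal asks for
build_proxy_path http sealed-secrets-controller http  # what a named port needs
```

So with the port named http, the path kubeseal requests (trailing colon, empty port name) no longer matches any endpoint.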

Testing

shell 1

kubectl proxy

shell 2

curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:/proxy/v1/cert.pem

Other Related Information

I use the bitnami kube.libsonnet libraries to manage my deployments & services, only using upstream yamls as reference (not truth), and its Service() function sets the port name automatically!

Service(name): $._Object("v1", "Service", name) {
    local service = self,

    target_pod:: error "service target_pod required",
    port:: self.target_pod.spec.containers[0].ports[0].containerPort,

    // Helpers that format host:port in various ways
    host:: "%s.%s.svc" % [self.metadata.name, self.metadata.namespace],
    host_colon_port:: "%s:%s" % [self.host, self.spec.ports[0].port],
    http_url:: "http://%s/" % self.host_colon_port,
    proxy_urlpath:: "/api/v1/proxy/namespaces/%s/services/%s/" % [
      self.metadata.namespace,
      self.metadata.name,
    ],
    // Useful in Ingress rules
    name_port:: {
      serviceName: service.metadata.name,
      servicePort: service.spec.ports[0].port,
    },

    spec: {
      selector: service.target_pod.metadata.labels,
      ports: [
        {
          port: service.port,
          name: service.target_pod.spec.containers[0].ports[0].name,
          targetPort: service.target_pod.spec.containers[0].ports[0].containerPort,
        },
      ],
      type: "ClusterIP",
    },
  },

https://github.com/bitnami-labs/kube-libsonnet/blob/master/kube.libsonnet#L181


The curl call can be resolved by appending the port name:

curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:sealed-secrets-controller:http/proxy/v1/cert.pem
- http:sealed-secrets-controller:
+ http:sealed-secrets-controller:http
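Alternatively, the port name can be dropped from the live Service with a JSON patch so that the default "http:sealed-secrets-controller:" path resolves again. This is a sketch only, not verified against a live cluster; the namespace and service name are taken from this issue, and the cluster command is shown as a comment:

```shell
# JSON patch that removes the name of the first service port, so the
# unnamed-port proxy path used by kubeseal matches again (sketch; the
# kubectl invocation needs cluster access and is shown as a comment).
patch='[{"op":"remove","path":"/spec/ports/0/name"}]'
#   kubectl -n kube-system patch svc sealed-secrets-controller \
#     --type=json -p "$patch"
echo "$patch"
```

Note that a subsequent Helm upgrade or jsonnet apply will reintroduce the name, so the manifest-level workaround below is more durable.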

kube.libsonnet workaround

local k = import 'kubernetes/kube.libsonnet';

{
  local addon = $.sealedSecrets,
  local config = $.config_.sealedSecrets,

  sealedSecrets+: {
    service+: k.Service(config.name, config.namespace) {
      local service = self,

      target_pod:: addon.deployment.spec.template,
      spec+: {
        ports: [
          // this port must be unnamed until this issue is resolved
          // https://github.com/bitnami-labs/sealed-secrets/issues/502
          {
            port: service.port,
            targetPort: service.target_pod.spec.containers[0].ports[0].containerPort,
          },
        ],
      },
    },
  },
}

Note: lib/ is in my jsonnet path (I’m using Tanka), so kubernetes/kube.libsonnet actually refers to a local file lib/kubernetes/kube.libsonnet which wraps the upstream bitnami-labs kube.libsonnet. Just adding this note in case anyone is confused why the function here takes 2 arguments (name & namespace) whereas the upstream takes only 1.

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 10
  • Comments: 24 (3 by maintainers)

Most upvoted comments

Hi everyone!

Thanks so much for reporting this! The changes we recently introduced in https://github.com/bitnami-labs/sealed-secrets/pull/690 (introducing a name for the http port exposed in the service) broke compatibility with kubeseal.

This should be fixed by this PR: https://github.com/bitnami-labs/sealed-secrets/pull/648

In the meantime, you can work around this by removing the name and setting targetPort: 8080 as @glitchcrab pointed out at https://github.com/bitnami-labs/sealed-secrets/issues/694#issuecomment-997370679.

@Mirdrack This works for me, with chart 2.1.0 and app v0.17.2:

kubeseal --controller-namespace=sealed-secrets --controller-name=sealed-secrets --fetch-cert

The spec of my service still has http as the target port

This is expected.

Have you updated kubeseal?

$ kubeseal --version
kubeseal version: 0.17.2

I just installed kubeseal following the fluxcd documentation.

flux create source helm sealed-secrets --interval=1h --url=https://bitnami-labs.github.io/sealed-secrets
flux create helmrelease sealed-secrets --interval=1h --release-name=sealed-secrets-controller --target-namespace=flux-system --source=HelmRepository/sealed-secrets --chart=sealed-secrets --chart-version=">=1.15.0-0" --crds=CreateReplace

This used the 2.6.0 version of the chart and 0.18.1 of the application.

I also just installed version 0.18.1 of the kubeseal CLI.

I’m still getting the timeout error.

$ kubeseal --fetch-cert --controller-name=sealed-secrets-controller --controller-namespace=flux-system
error: cannot fetch certificate: error trying to reach service: dial tcp 100.96.1.94:8080: i/o timeout

I can port forward the service and retrieve the certificate.
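For reference, a sketch of that port-forward route (namespace and service name assumed from this comment; the /v1/cert.pem path is the same one the proxy URL targets, and the cluster commands are shown as comments since they need cluster access):

```shell
# The controller serves its public certificate at /v1/cert.pem, reachable
# through a local port-forward instead of the apiserver service proxy.
cert_url="http://127.0.0.1:8080/v1/cert.pem"

# In one shell:
#   kubectl -n flux-system port-forward svc/sealed-secrets-controller 8080:8080
# In another, fetch the cert and seal offline without contacting the cluster:
#   curl -s "$cert_url" > cert.pem
#   kubeseal --cert cert.pem -o yaml < secret.yaml > sealed-secret.yaml
echo "$cert_url"
```

Sealing against a locally fetched cert also sidesteps the proxy path entirely, which is why it works even when --fetch-cert times out.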

That should be done automatically in a few hours, @davidkarlsen

There’s a bot (https://github.com/BrewTestBot) that usually does the magic for us updating this homebrew formula: https://github.com/Homebrew/homebrew-core/blob/master/Formula/kubeseal.rb

Are you on civo.com or with a different Kubernetes service?

Sorry, forgot to mention. AWS EKS 1.21.5.

In my case the same bitnami-labs Helm chart works fine on Rancher RKE on-prem clusters but not in the cloud.

Same Helm, with same values (deployed as HelmRelease via Flux)

In the sealed-secrets-controller svc there is no port name by default. I tried adding/removing and changing it manually by editing the svc, which didn’t help.

In the Helm values (of the working on-prem clusters) I set:

  values:
    fullnameOverride: sealed-secrets-controller

but with and without this value it fails in the cloud (Civo + DigitalOcean) with this error:

when running the kubeseal command:

kubeseal -o yaml < civo-secret.yaml >  civo-sealed-secret.yaml
error: cannot fetch certificate: error trying to reach service: dial tcp 10.42.0.7:8080: i/o timeout

Again, the exact same Helm release works on-prem.

Helm values:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sealed-secrets
  namespace: kube-system
spec:
  releaseName: sealed-secrets
  chart:
    spec:
      chart: sealed-secrets
      version: "1.16.1"
      sourceRef:
        kind: HelmRepository
        name: sealed-secrets
        namespace: flux-system
  interval: 10m
  values:
    fullnameOverride: sealed-secrets-controller
    commandArgs:
      - "--update-status"
    crd:
      # crd.create: `true` if the crd resources should be created
      create: true
      # crd.keep: `true` if the sealed secret CRD should be kept when the chart is deleted
      keep: true

Service

kubectl get svc/sealed-secrets-controller -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: sealed-secrets
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2021-11-01T13:48:16Z"
  labels:
    app.kubernetes.io/instance: sealed-secrets
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: sealed-secrets
    app.kubernetes.io/version: v0.16.0
    helm.sh/chart: sealed-secrets-1.16.1
    helm.toolkit.fluxcd.io/name: sealed-secrets
    helm.toolkit.fluxcd.io/namespace: kube-system
  name: sealed-secrets-controller
  namespace: kube-system
  resourceVersion: "22220"
  uid: c5158d18-16be-4394-9204-ff4e0cf9adaa
spec:
  clusterIP: 10.43.225.37
  clusterIPs:
  - 10.43.225.37
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: sealed-secrets
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}