terraform-provider-kubernetes: Kubernetes provider does not respect data when kubernetes_manifest is used

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.0.5
Kubernetes provider version: v2.4.1
Kubernetes version: 1.20.8-gke.900

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

data "google_client_config" "this" {}

data "google_container_cluster" "this" {
  name     = "my-cluster"
  location = "europe-west2"
  project  = "my-project"
}

provider "kubernetes" {
  token                  = data.google_client_config.this.access_token
  host                   = data.google_container_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.google_container_cluster.this.master_auth.0.cluster_ca_certificate)

  experiments {
    manifest_resource = true
  }
}

resource "kubernetes_manifest" "test-crd" {
  manifest = {
    apiVersion = "apiextensions.k8s.io/v1"
    kind       = "CustomResourceDefinition"

    metadata = {
      name = "testcrds.hashicorp.com"
    }

    spec = {
      group = "hashicorp.com"

      names = {
        kind   = "TestCrd"
        plural = "testcrds"
      }

      scope = "Namespaced"

      versions = [{
        name    = "v1"
        served  = true
        storage = true
        schema = {
          openAPIV3Schema = {
            type = "object"
            properties = {
              data = {
                type = "string"
              }
              refs = {
                type = "number"
              }
            }
          }
        }
      }]
    }
  }
}

Debug Output

The debug log contains a lot of private information, so I'd prefer not to post it.

Steps to Reproduce

  1. terraform apply

Expected Behavior

A plan is presented, and after apply the CRD is created successfully.

Actual Behavior

╷
│ Error: Invalid attribute in provider configuration
│
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on main.tf line 9, in provider "kubernetes":
│    9: provider "kubernetes" {
│
│ 'host' is not a valid URL
╵

╷
│ Error: Failed to construct REST client
│
│   with kubernetes_manifest.test-crd,
│   on main.tf line 19, in resource "kubernetes_manifest" "test-crd":
│   19: resource "kubernetes_manifest" "test-crd" {
│
│ cannot create REST client: no client config
╵

Important Factoids

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 104
  • Comments: 34

Most upvoted comments

Same issue here. Serious blocker for us. 😦

How is this still an issue? Still affected.

Same here, 1.5 years and counting.

Getting Failed to construct REST client when I try to deploy an Argo CD app on a non-existent EKS cluster, but it works fine on a running EKS cluster.

│ Error: Failed to construct REST client
│ 
│   with module.argocd_application_gitops.kubernetes_manifest.argo_application,
│   on .terraform/modules/argocd_application_gitops/main.tf line 1, in resource "kubernetes_manifest" "argo_application":
│    1: resource "kubernetes_manifest" "argo_application" {

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}


data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}


provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}


provider "helm" {

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}


module "eks" {

...
}

module "argocd_application_gitops" {

  depends_on = [module.vpc, module.eks, module.eks_services]
  source     = "project-octal/argocd-application/kubernetes"
  version    = "2.0.0"

  argocd_namespace    = var.argocd_k8s_namespace
  destination_server  = "https://kubernetes.default.svc"
  project             = var.argocd_project_name
  name                = "gitops"
  namespace           = "myns"
  repo_url            = var.argocd_root_gitops_url
  path                = "Chart"
  chart               = ""
  target_revision     = "master"
  automated_self_heal = true
  automated_prune     = true
}

It doesn’t work with depends_on either.

Apparently, the helm provider (when configured in the same way) does not have this issue, so I can keep the helm resources described in TF while the cluster does not yet exist. But I can't have the kubernetes_manifest TF code in the project until the cluster is created.
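For illustration (the release name, repository, and chart below are hypothetical and not from the original comment), a helm_release along these lines can be planned before the cluster exists, while an equivalent kubernetes_manifest fails with "Failed to construct REST client":

resource "helm_release" "cert_manager" {
  # Hypothetical example chart; the point is that the helm provider tolerates
  # a provider configuration built from not-yet-known data source values.
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true
}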

It would be great to see the issue with Failed to construct REST client for the Kubernetes provider solved soon! 🤞

I don't want to post another +1 here, but I do have the same issue when trying to deploy a cert-manager Issuer.

How can we get the attention of the maintainers here? This issue has been open for almost two years and affects many users…

Same problem with cert-manager:

╷
│ Error: Failed to construct REST client
│
│   with module.eks_cluster_first.module.cert_manager.kubernetes_manifest.cluster_issuer_selfsigned,
│   on modules\cert_manager\cert_manager.tf line 89, in resource "kubernetes_manifest" "cluster_issuer_selfsigned":
│   89: resource "kubernetes_manifest" "cluster_issuer_selfsigned" {
│
│ cannot create REST client: no client config
╵

Still an issue! I cannot create AWS infra and everything related to it in a new, empty account because the EKS cluster does not yet exist, even though I have dependencies. That's silly!

+1, this is a significant problem.

Still an issue, please fix this

The problem is still present; please fix it.

This may have been evident from the issue title, but those looking for a workaround can remove dynamic/data values from the provider configuration.

E.g., given a suitably configured kubectl environment, replacing:

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.default.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.default.master_auth.0.cluster_ca_certificate)
}

with:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "gke_my-project_my-region_my-cluster"
}