terraform-provider-kubernetes: Kubernetes Provider 1.11.3 "Unauthorized"

Terraform Version and Provider Version

Terraform v0.12.26

  • provider.archive v1.3.0
  • provider.aws v2.70.0
  • provider.helm v1.2.3
  • provider.kubernetes v1.11.3
  • provider.null v2.1.2

Affected Resource(s)

  • kubernetes_cluster_role

Terraform Configuration Files

provider "kubernetes" {
  version                = "1.11.3"
  host                   = aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  #config_context         = aws_eks_cluster.cluster.arn
  load_config_file       = false
}
resource "kubernetes_cluster_role" "admin_role" {
  metadata {
    name = "k8s-admin-role"
    labels = {
      name = "AdminRole"
    }
  }

  rule {
    api_groups = ["", "apps", "batch", "extensions"]
    resources = [
      "pods", "cronjobs", "deployments", "devents", "ingresses",
      "jobs", "pods/attach", "pods/exec", "pods/log", "pods/portfoward",
      "secrets", "services", "nodes"
    ]
    verbs = [
      "get", "list", "watch", "delete", "describe", "patch",
      "update", "create"
    ]
  }

  depends_on = [aws_eks_node_group.cluster_node_group]
}
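
The data source that supplies the token above is not shown in the report; a minimal sketch of what it would typically look like (the resource and data source names are assumed to match the config above):

data "aws_eks_cluster_auth" "cluster" {
  # Issues a short-lived (roughly 15 minute) IAM authenticator token for the cluster.
  name = aws_eks_cluster.cluster.name
}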

Expected Behavior

What should have happened? Apply complete!

Terraform should create my cluster, node group and cluster role successfully.

Actual Behavior

Error: Unauthorized

  on ../../modules/eks/rbac.tf line 35, in resource "kubernetes_cluster_role" "admin_role":
  35: resource "kubernetes_cluster_role" "admin_role" {

Works properly on the second terraform apply.

Steps to Reproduce

  1. terraform apply --> Fails
  2. terraform apply --> Works 2nd Time (no changes between commands)

Important Factoids

  • Running the k8s provider v1.9 works on the initial run, but when running terraform destroy I get an error against the kubernetes_cluster_role saying system:anonymous does not have access to destroy it.
    • If I change the provider to load_config_file = false and use config_context, then destroy works, but this method does not work on terraform apply. I have to manually switch each time I want to do a destroy.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 81
  • Comments: 23 (1 by maintainers)

Most upvoted comments

I get the same problem while trying to import a kubernetes_secret:

kubernetes_secret.newrelic-license-key: Importing from ID "newrelic/newrelic-license-key"...
kubernetes_secret.newrelic-license-key: Import prepared!
  Prepared kubernetes_secret for import
kubernetes_secret.newrelic-license-key: Refreshing state... [id=newrelic/newrelic-license-key]

Error: Unauthorized

Interesting: I’ve replaced:

token = data.aws_eks_cluster_auth.cluster.token

with

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
      command     = "aws"
    }

and now import works.
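
For context, a minimal sketch of the full provider block with exec-based authentication (cluster and data source names are assumed to match the original config):

    provider "kubernetes" {
      load_config_file       = false
      host                   = aws_eks_cluster.cluster.endpoint
      cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority.0.data)

      # The AWS CLI is invoked whenever the provider needs credentials,
      # so the token is not fixed at plan time.
      exec {
        api_version = "client.authentication.k8s.io/v1alpha1"
        command     = "aws"
        args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
      }
    }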

It would be nice if you could make the error message (Unauthorized) clearer. At the moment we can only guess that this is a Kubernetes problem. There is no additional context surrounding this error message: no stack trace, no "on resource …" hint.

Just: [screenshot of the bare "Error: Unauthorized" message]

My versions: Terraform 0.12.28, Kubernetes provider 1.12, Kubernetes API 1.17.

Proposal: Could not connect to kubernetes api (https://api.k8s.de), Authorization failed. Please check your credentials.

Still reproduced with EKS 1.18 and Terraform 1.14.

As a temporary fix, I was able to update the token by refreshing the state with the terraform refresh command, which avoided this problem.

I hope that you will implement a mechanism to automatically refresh the token when creating and building Kubernetes-related resources.

Still reproduced with 1.14

In my case it was actually unauthorized (against the k8s cluster). The problem is that I'm using creds from another state: I've split infra and post-infra projects and grab the k8s creds from an output, which is masked. It seems the token had expired; once I ran terraform apply in the first project, the second one started to work. Interestingly, I've never had this issue before. Maybe DigitalOcean decreased the token lifetime.

EKS tokens need to be refreshed every 15 minutes. If a Terraform apply runs and takes longer than 15 minutes, we get this error. A re-plan and re-apply works without issue.

What's weird is that we sometimes also get this error on a simple change that takes less than 15 minutes, like changing a value in a ConfigMap. Re-plan and re-apply also work here, so it's not a credential or AWS IAM issue.

We are going to close this issue due to this being related to an outdated version. This helps our maintainers find and focus on the active and important issues. If you have found a problem that seems related to this issue, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

FYI, I ran into the same issue when I tried to import a "kubernetes_secret" resource, and it got solved simply by running a "terraform refresh" first.

This worked for me (using the same provider config above):

I added the role that is being used by the pod to the aws-auth ConfigMap:

    - rolearn: arn:aws:iam::**account_id**:role/terraform-runner-role
      username: admin:{{SessionName}}
      groups:
        - system:masters
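
For anyone managing that mapping from Terraform itself, a rough sketch with the Kubernetes provider is shown below (the account ID and role name are placeholders; note that EKS creates aws-auth when managed node groups join, so the existing ConfigMap usually has to be imported first):

    resource "kubernetes_config_map" "aws_auth" {
      metadata {
        name      = "aws-auth"
        namespace = "kube-system"
      }

      data = {
        # mapRoles is stored as a YAML string inside the ConfigMap.
        mapRoles = yamlencode([
          {
            rolearn  = "arn:aws:iam::123456789012:role/terraform-runner-role"
            username = "admin:{{SessionName}}"
            groups   = ["system:masters"]
          }
        ])
      }
    }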

@ipleten that is perfect. Given that the token generated by aws-iam-authenticator has a short life, creating a new token on every run would be the better solution.

I am having the same issues on EKS with 1.13.3 (I did not have this issue with 1.10.x) when deleting ConfigMaps/Secrets as part of a big plan that includes a lot of non-EKS resources (I assume the token is requested at the start, but by the time it gets used it has expired; targeted deletes work fine). I can also confirm the issue is inconsistent: running the same plan a few times (create/delete), it appears about 50% of the time.

Would be great if a fix would be included in future versions, cheers.

Having this same problem when EKS upgrades take longer than 15 minutes. I think that using a kubeconfig file, instead of supplying the token to the provider via token = data.aws_eks_cluster_auth.cluster.token, fixes it, since a new token is requested for every kubectl call. However, I have not confirmed this 100%.
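
A rough sketch of that kubeconfig-based approach, assuming a kubeconfig at the default path whose context is named after the cluster ARN (as in the commented-out line in the original config):

    provider "kubernetes" {
      load_config_file = true
      config_path      = "~/.kube/config"
      config_context   = aws_eks_cluster.cluster.arn
    }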

One other option I thought of is creating a second provider for post-upgrade usage by aliasing, but I believe all providers are initialized at the same time, so the tokens would expire at the same time: https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-configurations

A solution that doesn't expect the kubeconfig to be set up before the provider runs and that auto-renews the token would be greatly appreciated.

We also encounter this "Error: Unauthorized" after updating our k8s cluster to a new Kubernetes version, because the upgrade takes more than 15 minutes. It would be great if the Kubernetes provider could automatically refresh an expired token and continue without errors.