terraform-provider-kubernetes: Kubernetes Provider 1.11.3 "Unauthorized"
Terraform Version and Provider Version
Terraform v0.12.26
- provider.archive v1.3.0
- provider.aws v2.70.0
- provider.helm v1.2.3
- provider.kubernetes v1.11.3
- provider.null v2.1.2
Affected Resource(s)
- kubernetes_cluster_role
Terraform Configuration Files
provider "kubernetes" {
version = "1.11.3"
host = aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
#config_context = aws_eks_cluster.cluster.arn
load_config_file = false
}
resource "kubernetes_cluster_role" "admin_role" {
metadata {
name = "k8s-admin-role"
labels = {
name = "AdminRole"
}
}
rule {
api_groups = ["", "apps", "batch", "extensions"]
resources = [
"pods", "cronjobs", "deployments", "devents", "ingresses",
"jobs", "pods/attach", "pods/exec", "pods/log", "pods/portfoward",
"secrets", "services", "nodes"
]
verbs = [
"get", "list", "watch", "delete", "describe", "patch",
"update", "create"]
}
depends_on = [aws_eks_node_group.cluster_node_group]
}
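For completeness, the data source the provider block reads its token from is not shown in the configuration above. A minimal sketch of how it is typically declared (assumed, not taken from the original issue); the token it returns is short-lived, roughly 15 minutes, which is relevant to the behavior reported below:

data "aws_eks_cluster_auth" "cluster" {
  # Assumed to point at the same EKS cluster resource referenced by the provider block.
  name = aws_eks_cluster.cluster.name
}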
Expected Behavior
What should have happened?
Apply complete!
Terraform should create my cluster, node group and cluster role successfully.
Actual Behavior
Error: Unauthorized
on ../../modules/eks/rbac.tf line 35, in resource "kubernetes_cluster_role" "admin_role":
35: resource "kubernetes_cluster_role" "admin_role" {
Works properly on the second run of terraform apply.
Steps to Reproduce
1. terraform apply --> fails
2. terraform apply --> works the second time (no changes between commands)
Important Factoids
- Running k8s provider v1.9 works on the initial run, but when running terraform destroy I get an error against the kubernetes_cluster_role saying system:anonymous does not have access to destroy.
- If I change the provider to load_config_file = false and use the config_context, then it works (see the sketch below). But this method does not work on terraform apply. I have to manually switch each time I want to do a destroy.
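For reference, a minimal sketch of the kubeconfig/config_context variant described in the second factoid (assumed, not from the original issue): in the 1.x provider, config_context is only read when load_config_file is true, so that is what is shown here, and the context name is a placeholder from a local kubeconfig.

provider "kubernetes" {
  version          = "1.11.3"
  load_config_file = true
  # Placeholder context name; replace with the context for this cluster in your kubeconfig.
  config_context   = "arn:aws:eks:us-east-1:111122223333:cluster/example"
}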
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 81
- Comments: 23 (1 by maintainers)
I get the same problem while trying to import a kubernetes_secret:
Interesting: I’ve replaced:
with
and now import works.
It would be nice if you could make the error message (Unauthorized) clearer. At the moment we can only guess that this is a Kubernetes problem. There is no further context surrounding this error message, no stacktrace or "on resource …" hint.
Just:
My versions: Terraform 0.12.28, Kubernetes provider 1.12, Kubernetes API 1.17
Proposal: Could not connect to kubernetes api (https://api.k8s.de), Authorization failed. Please check your credentials.
Still reproduced with EKS 1.18 and Terraform 1.14.
As a temporary fix, I was able to update the token by regenerating the state file using the terraform refresh command, which avoided this problem. I hope that you will implement a process to automatically update the token when creating and building Kubernetes-related resources.
Still reproduced with 1.14
In my case it was actually unauthorised (to the k8s cluster). The problem is that I'm using creds from another state: I've split infra and post-infra projects and am grabbing the k8s creds from an output, which is masked. It seems the token had expired; once I ran terraform apply in the first project, the second one started to work. Interestingly, I've never had this issue before. Maybe DigitalOcean decreased the token lifetime.
EKS tokens need to be refreshed every 15 minutes. If a Terraform apply runs and takes longer than 15 minutes, we get this error. A re-plan and re-apply works without issue.
What's weird is that we sometimes also get this error on a simple change that takes less than 15 minutes, like changing a value in a ConfigMap. Re-plan and re-apply also work here, so it's not a credential / AWS IAM issue.

We are going to close this issue due to it being related to an outdated version. This helps our maintainers find and focus on the active and important issues. If you have found a problem that seems related to this issue, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I ran into the same issue when I tried to import the "kubernetes_secret" resource; it was solved simply by running a "terraform refresh" first, FYI.
This worked for me (using the same provider config above): I added the role that is being used by the pod to the aws-auth ConfigMap (a sketch of that mapping is below).

@ipleten that is perfect. Given that the tokens generated by aws-iam-authenticator have a short life, it would make sense that creating a new token on every run would be the better solution.

I am having the same issues on EKS with 1.13.3 (I did not have this issue with 1.10.x) when deleting ConfigMaps/Secrets as part of a big plan which includes a lot of non-EKS stuff (I assume the token is requested at the start, but by the time it gets used it has expired; targeted deletes work fine). I can also confirm the issue is inconsistent: doing the same plan a few times (create/delete), it appears about 50% of the time. Would be great if a fix could be included in future versions, cheers.
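For reference, a minimal sketch of the aws-auth mapping mentioned a couple of comments above, assuming the mapping is managed from Terraform. The role ARN, username, and groups are placeholders, and on most EKS clusters the aws-auth ConfigMap already exists (created alongside the node group), so in practice it is usually imported or patched rather than created fresh:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Placeholder mapping: grants the IAM role used by the pod access to the cluster.
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::111122223333:role/example-pod-role
        username: example-pod-role
        groups:
          - system:masters
    YAML
  }
}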
Having this same problem when EKS upgrades take longer than 15 minutes. I think that using a kubeconfig file, instead of supplying the token to the provider via token = data.aws_eks_cluster_auth.cluster.token, fixes it, since a new token is requested for every kubectl call. However, I have not confirmed this 100%. One other option I thought of is creating a second provider for post-upgrade use by aliasing, but I believe all providers are initialized at the same time, so the tokens would expire at the same time: https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-configurations
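Along the same lines, a minimal sketch of a provider configuration that fetches a fresh token through an exec plugin instead of a static data-source token. This is an assumption, not a confirmed fix from this thread: it requires the aws CLI to be available on the machine running Terraform and a provider version that supports the exec block, and the cluster references mirror the config in the issue.

provider "kubernetes" {
  host                   = aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority.0.data)
  load_config_file       = false

  # Instead of a token fetched once at plan time (which can expire mid-apply),
  # run `aws eks get-token` whenever the provider needs credentials.
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.cluster.name]
  }
}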
A solution that doesn’t expect the kubeconfig to be set up before the provider runs and that auto-renews the token would be greatly appreciated
We also encounter this "Error: Unauthorized" after updating our k8s cluster to a new Kubernetes version, because the update takes more than 15 minutes. It would be great if the Kubernetes provider could automatically refresh an expired token and continue without errors.