terraform-provider-kubernetes: Cannot set internal annotations to specify load balancer SSL certs
With the recent change disallowing internal annotations, it is no longer possible to set them.
I want to use these internal annotations to provide SSL certificate/ARN configuration for the load balancers. Any idea for a workaround?
Affected Resource(s)
- service
Terraform Configuration Files
data "aws_elb_hosted_zone_id" "main" {}

data "aws_acm_certificate" "hello" {
  domain   = "*.hello.com"
  statuses = ["ISSUED"]
}

resource "kubernetes_service" "hello" {
  metadata {
    name = "terraform-hello-integrated-example"

    annotations {
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = "${data.aws_acm_certificate.hello.arn}"
    }
  }

  spec {
    selector {
      app = "hello"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

resource "aws_route53_record" "hello" {
  zone_id = "HELLO"
  name    = "hello"
  type    = "A"

  alias {
    name                   = "${kubernetes_service.hello.load_balancer_ingress.0.hostname}"
    zone_id                = "${data.aws_elb_hosted_zone_id.main.id}"
    evaluate_target_health = true
  }
}
About this issue
- State: closed
- Created 7 years ago
- Reactions: 20
- Comments: 60 (25 by maintainers)
Here’s another set of “internal” kubernetes.io annotations that are disallowed: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/ingress-resources.md#annotations. This ingress controller is absolutely unusable without whitelisting the large list.
C'mon guys, the cases where this blocks people from legitimate usage far outweigh and far outnumber the original reasons for blocking it, and the restriction severely degrades the usability of this provider. Please revert #50 or reduce its scope.
This has now been released in 1.7.0: https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/CHANGELOG.md#170-may-22-2019 🎉 🚀
I think that, especially because of the reasoning behind #50, this is a big issue for many of us.
Unfortunately, usage after this change has shown that some tasks that were possible before are no longer possible. For those of us who rely on these features (internal load balancers, AWS certificates, …), this makes those things really hard to achieve.
Therefore, I would be really interested in a feasible solution, such as whitelisting certain annotations or similar.
With terraform 0.12+, you need to specify annotations as a map and with an equal sign:
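For illustration, a minimal sketch of the 0.12-style syntax, reusing the service from the original example (only the annotations block changes):

resource "kubernetes_service" "hello" {
  metadata {
    name = "terraform-hello-integrated-example"

    annotations = {
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert" = data.aws_acm_certificate.hello.arn
    }
  }

  # spec is unchanged from the example at the top of the issue
}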
Haha, the lack of support for internal annotations just made this and a couple of other Kubernetes resources useless, especially in enterprise. Thanks for removing this. Now automating this just got much more complicated.
Hi, same issue for me, as I need to specify the well-known pv.beta.kubernetes.io/gid annotation on my persistent volumes. I've read https://github.com/terraform-providers/terraform-provider-kubernetes/pull/50#issue-251016641 and understand that the root issue from the Terraform point of view is more with dynamic Kubernetes annotations that potentially change all the time, like those listed at https://kubernetes.io/docs/reference/labels-annotations-taints/ :
- beta.kubernetes.io/arch
- beta.kubernetes.io/os
- kubernetes.io/hostname
- beta.kubernetes.io/instance-type
- failure-domain.beta.kubernetes.io/region
- failure-domain.beta.kubernetes.io/zone
As maintaining such a list of annotations exhaustively can seem overwhelming, if not impossible, as time goes on, what about allowing annotations to be whitelisted by configuration?
I guess doing so at the provider level would be practical:
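A hypothetical sketch of what such a provider-level whitelist could look like; the allowed_internal_annotations argument does not exist in the provider and is shown only to illustrate the proposal:

provider "kubernetes" {
  # Hypothetical argument, for illustration only: annotations listed here
  # would be exempt from the internal-annotation validation.
  allowed_internal_annotations = [
    "pv.beta.kubernetes.io/gid",
    "service.beta.kubernetes.io/aws-load-balancer-ssl-cert",
  ]
}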
What do you think?
PS: I'll use the pod security context's fs_group field to resolve my current FS GID issue, but I believe whitelisting will come in handy anyway. Edit: #325 is a better alternative as it does not require the user to actively maintain the whitelist.
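For reference, a minimal sketch of the fs_group approach mentioned above (pod name, image, and GID are illustrative):

resource "kubernetes_pod" "example" {
  metadata {
    name = "fs-group-example"
  }

  spec {
    security_context {
      # GID applied to volumes mounted by this pod; the value is illustrative.
      fs_group = 2000
    }

    container {
      name  = "app"
      image = "nginx"
    }
  }
}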
The same problem exists when your LB must not have an external IP, e.g. in development environments with only internal IPs. I'd be happy if this issue could be solved. Thank you.
And here's a dodgy workaround (partial implementation: it provisions, but does not handle updates or destroys). YMMV.
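The original snippet is not reproduced here; as an illustration only, a local-exec workaround of roughly this shape (it assumes kubectl is configured against the cluster, and only runs on create) would behave as described:

resource "null_resource" "ssl_cert_annotation" {
  # Runs only when the service is (re)created; updates and destroys are not handled.
  # The interpolations create an implicit dependency on the service and certificate.
  provisioner "local-exec" {
    command = "kubectl annotate service ${kubernetes_service.hello.metadata.0.name} service.beta.kubernetes.io/aws-load-balancer-ssl-cert=${data.aws_acm_certificate.hello.arn} --overwrite"
  }
}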
@dh-harald, the main issue I had with using https://github.com/sl1pm4t/terraform-provider-kubernetes was its lack of support for exec authentication (because it is using an older k8s client), which is needed to integrate with AWS (at least without hacks). I took your suggested changes and filed PR #244. I really don't understand why Hashicorp is allowing this provider to languish and become useless, both by making changes that prevent its use in real-world scenarios and by not adding support for current k8s objects like Ingress, ClusterRole, etc.
I’m not a go developer, so I won’t open a pull request, but here’s my solution for this problem: https://github.com/dh-harald/terraform-provider-kubernetes/commit/d0b48e8ec7e991f702aa6e0ea1dfdfef6380c509
Idea: @pdecat / Solution: @mb290
Here’s an example:
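The original example was not captured here; as an illustration only, a service with an internal load balancer annotation whose value is quoted as a string (the point of the next comment) might look like this:

resource "kubernetes_service" "internal" {
  metadata {
    name = "internal-example"

    annotations = {
      # Quote the value: a bare true is converted to 1 and the annotation won't work.
      "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
    }
  }

  spec {
    selector = {
      app = "internal-example"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}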
I don't know if it's known, but if you use true (unquoted) instead of "true", Terraform translates it into 1, and it won't work.

Now that development is starting back up on this, it would be great to have this revisited. I would love to use this in some different projects, but kube2iam used the internal annotations to allow pods to assume an IAM role.
I'm also now hit with the bug, and having a list of approved annotations would seriously help. I'm not a developer and had never even ventured into Go before, but I managed to piece together the following extension to the existing function validateAnnotations in https://github.com/terraform-providers/terraform-provider-kubernetes/blob/master/kubernetes/validators.go. I've tested the above and it works with my specific use case of needing an internal load balancer in Azure for provisioning a POC. Whilst I understand why the check for internal annotations was built in the first place, this is severely impacting the usefulness of the Kubernetes provider without the ability to make this exception.
If there is appetite for me to submit a pull request, I'm more than happy to, but I suspect there is a significantly more efficient way of doing the above check 😃. I'll also have to figure out the part that stops Terraform from attempting to re-enable the load balancer every time, as this is what happens when I re-run:
Same here, I am trying to make a storage class the default, and I need the storageclass.kubernetes.io/is-default-class annotation. Use of this annotation is not restricted by Kubernetes, and it is documented at https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
In my opinion this check is unneeded; it's up to the user not to use these annotations, or to use them when they know what they are doing.
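A sketch of what this would look like once the annotation is allowed (names and provisioner are illustrative, 0.12 syntax):

resource "kubernetes_storage_class" "default" {
  metadata {
    name = "standard"

    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }

  storage_provisioner = "kubernetes.io/aws-ebs"
  reclaim_policy      = "Delete"
}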
So, is there any progress with this? I mean, it's April 2019 already, hello? @pdecat @AlexBoulyga Anyone? Help? DevOps can't live without Terraform!
List of related issues/PRs:
That specific use case is missing from the docs. The most extensive source of examples is the acceptance test cases, e.g. https://github.com/terraform-providers/terraform-provider-kubernetes/blob/v1.7.0/kubernetes/resource_kubernetes_secret_test.go#L342. Maybe they could be extracted and put somewhere more easily discoverable.
Edit: I’ve noticed a few examples were outdated for terraform 0.12, working on a PR.
Hi. Yes, a release is coming in the next couple of days. I'm aiming for it to include PodAffinity, so I'm looking into getting that ready as well.
Can we have a release tagged with the fix included, please?
I took a similar approach and used "az aks get-credentials". It works well enough, but I find I have to be a lot more explicit about my dependencies.

Is this still stalled? I'm trying to use External DNS, and I've run into this issue. It's very annoying. I don't like this idea of Terraform making decisions about what I can and can't do in k8s. I've run into other issues like this (not being able to set automountServiceAccountToken on deployments).
Is there a workaround for this particular issue?
@Xyrodileas, most of us have gone off the reservation and built our own version of the provider to get around this problem. I created PR #244 for the main provider to address the problem in the way suggested by one of the contributors. Others, like sl1pm4t / terraform-provider-kubernetes, have forked the code since Hashicorp seems uninterested in making this provider capable of real-world deployments. Either way, the only real workaround at this point is to either build your own custom provider or use the local-exec hack to run kubectl.
Also just ran into this trying to set storage classes on persistent volume claims in OpenShift. Without the ability to set the annotations to specify a storage tier, the claims sit in Pending and Terraform times out with "Still creating…".
No, I don’t hack anything… I just configure the provider:
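The configuration itself is not shown in the thread; a sketch of a provider block using exec-based authentication with aws-iam-authenticator (cluster reference, cluster name, and role ARN are placeholders) could look roughly like this:

provider "kubernetes" {
  host                   = "${aws_eks_cluster.main.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.main.certificate_authority.0.data)}"
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", "my-cluster", "-r", "arn:aws:iam::111111111111:role/my-cluster-admin"]
  }
}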
When you create the cluster for the first time, you need to skip the -r (role) part until you have also added the ConfigMap for the role.

Please read https://github.com/terraform-providers/terraform-provider-kubernetes/pull/50#issue-251016641, which explains why this check exists.
This is pretty annoying; why enforce this at all? Doesn't the Kubernetes API prevent this?
Hi @udangel-r7 I’m sorry for any inconvenience caused by this.
Allowing users to specify internal (kubernetes.io) annotations is something I plan to look into eventually, but it's non-trivial for us to support. See the full explanation at https://github.com/terraform-providers/terraform-provider-kubernetes/pull/50#issue-251016641