azure-key-vault-to-kubernetes: [BUG]Environment injection does not work - UNAUTHORIZED: authentication required.
Note: Make sure to check out known issues (https://akv2k8s.io/troubleshooting/known-issues/) before submitting
Components and versions Select which component(s) the bug relates to with [X].
[ ] Controller, version: x.x.x (docker image tag)
[X] Env-Injector (webhook), version: 1.4.0 (docker image tag)
[ ] Other
Describe the bug I created a new AKS cluster and deployed a simple nginx pod. All works well. Then I added a secret injected through the environment, and the ReplicaSet fails to start with the following error:
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ k describe rs toolbox-78544646dd | tail -1
Warning FailedCreate 26s replicaset-controller Error creating: Internal error occurred: failed calling webhook "pods.env-injector.admission.spv.no": failed to call webhook: an error on the server ("{\"response\":{\"uid\":\"2e772ecb-e618-42f8-9273-a43a5b17ac52\",\"allowed\":false,\"status\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"failed to get auto cmd, error: GET https://app541deploycr.azurecr.io/oauth2/token?scope=repository%3Achip%2Ftoolbox%3Apull\\u0026service=app541deploycr.azurecr.io: UNAUTHORIZED: authentication required, visit https://aka.ms/acr/authorization for more information.\\ncannot fetch image descriptor\\ngithub.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry.getImageConfig\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry/registry.go:144\\ngithub.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry.(*Registry).GetImageConfig\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/docker/registry/registry.go:103\\nmain.getContainerCmd\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/registry.go:39\\nmain.podWebHook.mutateContainers\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/pod.go:143\\nmain.podWebHook.mutatePodSpec\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/pod.go:299\\nmain.vaultSecretsMutator\\n\\t/go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/main.go:163\\ngithub.com/slok/kubewebhook/pkg/webhook/mutating.MutatorFunc.Mutate\\n\\t/go/pkg/mod/github.com/slok/kubewebhook@v0.11.0/pkg/webhook/mutating/mutator.go:25\\ngithub.com/slok/kubewebhook/pkg/webhook/mutating.mutationWebhook.mutatingAdmissionReview\\n\\t/go/pkg/mod/github.com/slok/kubewebhook@v0.11.0/pkg/webhook/mutating/webhook.go:128\\ngithub.com/slok/kubew
ebhook/pkg/webhook/mutating.mutationWebhook.Review\\n\\t/go/pkg/mod/github.com/slok/kubewebhook@v0.11.0/pkg/webhook/mutating/webhook.go:120\\ngithub.com/slok/kubewebhook/pkg/webhook/internal/instrumenting.(*Webhook).Review\\n\\t/go/pkg/mod/github.com/slok/kubewebhook@v0.11.0/pkg/webhook/internal/") has prevented the request from succeeding
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
This has all the markers of the issue described here: https://akv2k8s.io/installation/with-aad-pod-identity. But trying to fix it as described there does not work:
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ helm -n akv2k8s upgrade akv2k8s akv2k8s/akv2k8s --set addAzurePodIdentityException=true
Error: UPGRADE FAILED: [resource mapping not found for name: "akv2k8s-controller-exception" namespace: "akv2k8s" from "": no matches for kind "AzurePodIdentityException" in version "aadpodidentity.k8s.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "akv2k8s-env-injector-exception" namespace: "" from "": no matches for kind "AzurePodIdentityException" in version "aadpodidentity.k8s.io/v1"
ensure CRDs are installed first]
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
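The `no matches for kind "AzurePodIdentityException"` error suggests the AAD Pod Identity CRDs are not present on the cluster, which the `addAzurePodIdentityException=true` flag requires. A quick way to confirm (sketch; requires access to the cluster, output shown is hypothetical):

```shell
# Check whether the AAD Pod Identity CRDs exist; the chart's
# addAzurePodIdentityException=true option creates AzurePodIdentityException
# resources in the aadpodidentity.k8s.io/v1 API group, so this CRD must
# be installed first.
kubectl get crd azurepodidentityexceptions.aadpodidentity.k8s.io

# If AAD Pod Identity is not deployed at all, this CRD will be absent and
# the helm upgrade above fails exactly as shown.
```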
So it does not work either way.
The AKS cluster is deployed using our Terraform code and runs Kubernetes version 1.25.4.
To Reproduce
- Deploy AKS cluster at version 1.25.4. I can provide any configuration options as needed.
- Deploy akv2k8s using the following terraform resource:
resource "helm_release" "akv2k8s" {
  name             = "akv2k8s"
  chart            = "akv2k8s"
  version          = "2.3.2"
  create_namespace = true
  namespace        = "akv2k8s"
  repository       = "http://charts.spvapi.no"
}
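For anyone reproducing without Terraform, the resource above corresponds to roughly this plain Helm invocation (sketch; the repo alias `spv-charts` is an arbitrary local name):

```shell
# Add the chart repository referenced by the Terraform resource and install
# the same chart version into the same namespace.
helm repo add spv-charts http://charts.spvapi.no
helm repo update

helm install akv2k8s spv-charts/akv2k8s \
  --version 2.3.2 \
  --namespace akv2k8s \
  --create-namespace
```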
- Deploy simple nginx app. In our case:
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$ helm get manifest toolbox
---
# Source: chip-toolbox/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: toolbox
  name: toolbox
  namespace: chip
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: toolbox
---
# Source: chip-toolbox/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: toolbox
  name: toolbox
  namespace: chip
spec:
  replicas: 1
  selector:
    matchLabels:
      app: toolbox
  template:
    metadata:
      labels:
        app: toolbox
    spec:
      containers:
      - name: toolbox
        image: app541deploycr.azurecr.io/chip/toolbox:1.0.23062.13
        env:
        - name: DUMMY_SECRET
          value: dummy@azurekeyvault
---
# Source: chip-toolbox/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app: toolbox
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: toolbox
  namespace: chip
spec:
  ingressClassName: nginx-internal
  rules:
  - host: chip-can.np.dayforcehcm.com
    http:
      paths:
      - path: /toolbox(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: toolbox
            port:
              number: 80
  tls:
  - hosts:
    - chip-can.np.dayforcehcm.com
---
# Source: chip-toolbox/templates/akv.yaml
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: secret
  namespace: chip
spec:
  vault:
    name: c541chip
    object:
      name: dummy
      type: secret
mark@L-R910LPKW:~/chip/toolbox/k8s [test ≡ +0 ~2 -0 !]$
The ReplicaSet fails to create pods, and applying the exception flag fails too.
Expected behavior The ReplicaSet is able to scale as requested and the secret is injected.
Logs I am not sure which logs would be useful here; I will provide any logs on request.
Additional context The Terraform code used to deploy the Helm charts did deploy AAD Pod Identity in the past, but that particular Helm release was deleted and was never applied to the new cluster. So it is a mystery to us why this happens in the first place.
About this issue
- Original URL
- State: open
- Created a year ago
- Reactions: 4
- Comments: 20 (5 by maintainers)
What is the sample injector app you're using? Is it image: spvest/akv2k8s-env-test:2.0.1?
UPDATE: it was the azure_policy add-on that was preventing us from creating the pods. The UNAUTHORIZED pull error from ACR was misleading.
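Since the Azure Policy add-on turned out to be the culprit, one way to verify and remove it is via the Azure CLI (sketch; the resource group and cluster names below are placeholders, not values from this issue):

```shell
# Inspect which add-ons are enabled on the cluster; look for an
# azurepolicy/azurePolicy entry in the addonProfiles map.
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query addonProfiles

# Disable the Azure Policy add-on, which was blocking pod creation here
# while surfacing a misleading UNAUTHORIZED pull error from ACR.
az aks disable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons azure-policy
```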