kubernetes: v1.10 AlwaysPullImages admission control order breaks MutatingAdmissionWebhook functionality like Istio

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

In v1.10, commit 7c5f9e0bbaff15570f1709e70b7fa6952395d7cd introduced the ability to not worry about admission controller order, because the ordering is now handled in code.

In that fixed order, AlwaysPullImages runs before MutatingAdmissionWebhook. When trying to use Istio sidecar injection, pod creation fails with:

Error creating: pods "sleep-86f6b99f94-qxvq6" is forbidden: spec.initContainers[0].imagePullPolicy: Unsupported value: "IfNotPresent": supported values: "Always"

In v1.9, everything works as expected when placing AlwaysPullImages after MutatingAdmissionWebhook. If you put AlwaysPullImages before MutatingAdmissionWebhook, the same error above occurs.
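For context, what seems to be happening (this is my reading of the error, not confirmed from the plugin source): AlwaysPullImages mutates the pod before MutatingAdmissionWebhook runs, so the sidecar the webhook injects still carries imagePullPolicy: IfNotPresent, and the request is then rejected when the imagePullPolicy values are validated. The Go sketch below only illustrates that kind of check; validateAlwaysPull is a hypothetical helper, not the real admission plugin code.

// Sketch only: a minimal illustration, inferred from the error message, of a
// validation pass that rejects any container whose imagePullPolicy is not Always.
package admissionsketch

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// validateAlwaysPull is a hypothetical helper showing the shape of the check;
// the real enforcement lives in the apiserver's admission chain.
func validateAlwaysPull(pod *corev1.Pod) error {
    check := func(path string, containers []corev1.Container) error {
        for i, c := range containers {
            if c.ImagePullPolicy != corev1.PullAlways {
                return fmt.Errorf("%s[%d].imagePullPolicy: Unsupported value: %q: supported values: \"Always\"",
                    path, i, c.ImagePullPolicy)
            }
        }
        return nil
    }
    // Both init containers and regular containers are checked, which matches
    // the spec.initContainers[0] path in the reported error.
    if err := check("spec.initContainers", pod.Spec.InitContainers); err != nil {
        return err
    }
    return check("spec.containers", pod.Spec.Containers)
}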

What you expected to happen:

In v1.10, with both AlwaysPullImages and MutatingAdmissionWebhook turned on, sidecar injection such as Istio's should work.

How to reproduce it (as minimally and precisely as possible):

  • In v1.10, enable the AlwaysPullImages and MutatingAdmissionWebhook admission controllers.
  • Install the latest Istio.
  • Enable sidecar injection.
  • Enable Istio injection on a namespace, e.g. kubectl label namespace default istio-injection=enabled
  • Deploy anything in that namespace.
  • Run kubectl describe rs [REPLICA_SET_NAME]. You should see error events similar to Error creating: pods "sleep-86f6b99f94-qxvq6" is forbidden: spec.initContainers[0].imagePullPolicy: Unsupported value: "IfNotPresent": supported values: "Always"
  • Turning off AlwaysPullImages seems to fix Istio.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.10.3
  • Cloud provider or hardware configuration: acs-engine
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Most upvoted comments

I have a doubt: why enable AlwaysPullImages while injecting an Istio sidecar with an 'IfNotPresent' image pull policy?

@mbohlool @liggitt @kubernetes/sig-api-machinery-misc I see this as an area that needs to be resolved before admission webhooks can move to GA.

Multiple mutating admission passes. In the case where a mutating admission plugin makes structural additions (e.g. adding a new container), we could rerun the mutating phase to give all plugins visibility to the new substruct, and the opportunity to enforce/default/reject fields on that substruct.
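To make that option concrete, here is a minimal sketch of how rerunning the mutating phase could work. It is an illustration under stated assumptions, not Kubernetes' actual admission interfaces: Mutator, RunMutatingPhase, and the stop-when-nothing-changed rule are all hypothetical.

// Hypothetical sketch of "multiple mutating admission passes": rerun the whole
// mutating chain until a full pass makes no further changes (or a pass cap is
// hit), so an early plugin such as AlwaysPullImages also sees containers added
// by a later sidecar-injecting webhook.
package admissionsketch

// Mutator stands in for a mutating admission plugin or webhook. Mutate returns
// the (possibly updated) object and whether it changed anything.
type Mutator interface {
    Mutate(obj map[string]interface{}) (map[string]interface{}, bool)
}

// RunMutatingPhase drives the multi-pass idea.
func RunMutatingPhase(obj map[string]interface{}, plugins []Mutator, maxPasses int) map[string]interface{} {
    for pass := 0; pass < maxPasses; pass++ {
        changed := false
        for _, p := range plugins {
            var c bool
            obj, c = p.Mutate(obj)
            changed = changed || c
        }
        if !changed {
            break // converged: a full pass made no changes
        }
    }
    return obj
}

Under this scheme the validating phase would still run once at the end, and the pass cap guards against two plugins that keep undoing each other's changes.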

To address #67050 I would support this. As a webhook dev, I want to do the minimum possible and inherit the controls the system puts in place. For my own requests, I don't want to have to re-implement the logic from previous admission plugins in my webhook. To this end, I want my mutating webhook to run first so as to be as close to the user's initial request as possible. This way I can inject my changes (update securityContext, add containers, volumes, etc.) without having to know what the name of a service account's token is. This strategy leads to:

  1. Much more readable code
  2. More secure webhooks
  3. Better reuse of webhook code

As an example, in order to inject the Kerberos sidecar in our webhook we had to generate the following patches in OPA:

krb_sidecar_patch = [
    {
        "op": "replace",
        "path": "/spec/serviceAccountName",
        "value": "run-as-user"
    },
    {
        "op": "replace",
        "path": "/spec/imagePullSecrets/0/name",
        "value": user_data["sa-dockercfg"]
    },
    {
        "op": "replace",
        "path": "/spec/volumes/0/name",
        "value": user_data["sa-token"]
    },
    {
        "op": "replace",
        "path": "/spec/volumes/0/secret/secretName",
        "value": user_data["sa-token"]
    },
    {
        "op": "replace",
        "path": "/metadata/annotations/openshift.io~1scc",
        "value": sprintf("run-as-%s",[kerb_sidecar_data.uid])
    },
    {
        "op": "add",
        "path": "/spec/securityContext/runAsUser",
        "value": kerb_sidecar_data.uidnumber
    },
    {
        "op": "replace",
        "path": "/spec/containers/0/securityContext/runAsUser",
        "value": kerb_sidecar_data.uidnumber
    },
    {
        "op": "replace",
        "path": "/spec/containers/0/volumeMounts/0/name",
        "value": user_data["sa-token"]
    },
    {
        "op": "add",
        "path": "/spec/containers/1",
        "value": {
            "resources": null,
            "image": "mlbiam/krbsidecar",
            "name": "sidecar",
            "env": [
                {
                    "name": "USER_PRINCIPAL",
                    "value": kerb_sidecar_data.userPrincipalName
                },
                {
                    "name": "KRB5_CONFIG",
                    "value": "/etc/krb-conf/krb5.conf"
                }
            ],
            "volumeMounts": [
                {
                    "name": "krb-kt",
                    "mountPath": "/krb5"
                },
                {
                    "name": "krb-conf",
                    "mountPath": "/etc/krb-conf"
                },
                {
                    "name": "dshm",
                    "mountPath": "/dev/shm"
                }
            ],
            "imagePullPolicy": "Always",
            "terminationMessagePolicy": "File",
            "terminationMessagePath": "/dev/termination-log"
        }
    },
    {
        "op": "add",
        "path": "/spec/containers/0/env",
        "value": [
            {
                "name": "KRB5_CONFIG",
                "value": "/etc/krb-conf/krb5.conf"
            }
        ]
    },
    {
        "op": "add",
        "path": "/spec/containers/0/volumeMounts/1",
        "value": {
            "name": "krb-conf",
            "mountPath": "/etc/krb-conf"
        }
    },
    {
        "op": "add",
        "path": "/spec/containers/0/volumeMounts/2",
        "value": {
            "name": "dshm",
            "mountPath": "/dev/shm"
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/1",
        "value": {
            "name": "krb-kt",
            "hostPath": {
                "path": sprintf("/home/%s/%s/keytab",[kerb_sidecar_data.domain,kerb_sidecar_data.uid]),
                "type": "Directory"
            }
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/2",
        "value": {
            "name": "krb-conf",
            "configMap": {
                "name": "krb5-config"
            }
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/3",
        "value": {
            "name": "dshm",
            "emptyDir": {
                "medium": "Memory"
            }
        }
    },
    {
        "op": "replace",
        "path": "/spec/serviceAccount",
        "value": "run-as-user"
    },
    {
        "op": "add",
        "path": "/metadata/annotations/com.tremolosecurity.openshift.krb5_sidecar.added",
        "value": "krb5_sidecar"
    }
]

Whereas, if the webhook could run at the beginning of the chain, my code would be:

krb_sidecar_patch = [
    {
        "op": "replace",
        "path": "/spec/serviceAccountName",
        "value": "run-as-user"
    },
    
    {
        "op": "add",
        "path": "/spec/securityContext/runAsUser",
        "value": kerb_sidecar_data.uidnumber
    },
    
    {
        "op": "add",
        "path": "/spec/containers/1",
        "value": {
            "resources": null,
            "image": "mlbiam/krbsidecar",
            "name": "sidecar",
            "env": [
                {
                    "name": "USER_PRINCIPAL",
                    "value": kerb_sidecar_data.userPrincipalName
                },
                {
                    "name": "KRB5_CONFIG",
                    "value": "/etc/krb-conf/krb5.conf"
                }
            ],
            "volumeMounts": [
                {
                    "name": "krb-kt",
                    "mountPath": "/krb5"
                },
                {
                    "name": "krb-conf",
                    "mountPath": "/etc/krb-conf"
                },
                {
                    "name": "dshm",
                    "mountPath": "/dev/shm"
                }
            ],
            "imagePullPolicy": "Always",
            "terminationMessagePolicy": "File",
            "terminationMessagePath": "/dev/termination-log"
        }
    },
    {
        "op": "add",
        "path": "/spec/containers/0/env",
        "value": [
            {
                "name": "KRB5_CONFIG",
                "value": "/etc/krb-conf/krb5.conf"
            }
        ]
    },
    {
        "op": "add",
        "path": "/spec/containers/0/volumeMounts/1",
        "value": {
            "name": "krb-conf",
            "mountPath": "/etc/krb-conf"
        }
    },
    {
        "op": "add",
        "path": "/spec/containers/0/volumeMounts/2",
        "value": {
            "name": "dshm",
            "mountPath": "/dev/shm"
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/1",
        "value": {
            "name": "krb-kt",
            "hostPath": {
                "path": sprintf("/home/%s/%s/keytab",[kerb_sidecar_data.domain,kerb_sidecar_data.uid]),
                "type": "Directory"
            }
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/2",
        "value": {
            "name": "krb-conf",
            "configMap": {
                "name": "krb5-config"
            }
        }
    },
    {
        "op": "add",
        "path": "/spec/volumes/3",
        "value": {
            "name": "dshm",
            "emptyDir": {
                "medium": "Memory"
            }
        }
    },
    {
        "op": "add",
        "path": "/metadata/annotations/com.tremolosecurity.openshift.krb5_sidecar.added",
        "value": "krb5_sidecar"
    }
]

The second block of code eliminates the updates to each container's service account token mounts and is portable between k8s distros (since it's no longer coded to OpenShift's annotations). It also eliminates the need for me to look up secret names for specific service accounts, leaving fewer possible chances to leak information.