serving: volume mount breaks Pod startup

In what area(s)?

/area API

What version of Knative?

HEAD

Expected Behavior

Run the nginx container as a Knative Service, specifying a non-default port (i.e. port 80). I expected the service to start successfully and the nginx welcome page to be reachable.

Actual Behavior

The Pod did not start due to a volume mount issue:

2019-04-18T16:41:52.643263735Z nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (2: No such file or directory)
2019-04-18T16:41:52.649571891Z 2019/04/18 16:41:52 [emerg] 1#1: open() "/var/log/nginx/error.log" failed (2: No such file or directory)          

Pod manifest:

metadata:
    name: ffff-hcxmd-deployment-544bc9d587-5qhvs
    generateName: ffff-hcxmd-deployment-544bc9d587-
    namespace: sebgoa
    selfLink: /api/v1/namespaces/sebgoa/pods/ffff-hcxmd-deployment-544bc9d587-5qhvs
    labels:
        app: ffff-hcxmd
        pod-template-hash: 544bc9d587
        serving.knative.dev/configuration: ffff
        serving.knative.dev/configurationGeneration: '1'
        serving.knative.dev/configurationMetadataGeneration: '1'
        serving.knative.dev/revision: ffff-hcxmd
        serving.knative.dev/revisionUID: 0ad1904a-61f8-11e9-ac42-42010a800170
        serving.knative.dev/service: ffff
    annotations:
        sidecar.istio.io/inject: 'true'
        sidecar.istio.io/status: '{"version":"c3e1cae4ba6edc90052026dd7913ae40955b8500a82aae7245ab0d1059f37e54","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
        traffic.sidecar.istio.io/includeOutboundIPRanges: '*'
    ownerReferences:
        -
            apiVersion: apps/v1
            kind: ReplicaSet
            name: ffff-hcxmd-deployment-544bc9d587
            uid: 0b2536b6-61f8-11e9-ac42-42010a800170
            controller: true
            blockOwnerDeletion: true
spec:
    volumes:
        -
            name: varlog
            emptyDir: {}
        -
            name: default-token-bbrzm
            secret: {secretName: default-token-bbrzm, defaultMode: 420}
        -
            name: istio-envoy
            emptyDir: {medium: Memory}
        -
            name: istio-certs
            secret: {secretName: istio.default, defaultMode: 420, optional: true}
    initContainers:
        -
            name: istio-init
            image: 'docker.io/istio/proxy_init:1.0.7'
            args: ['-p', '15001', '-u', '1337', '-m', REDIRECT, '-i', '*', '-x', "", '-b', '80,8012,8022,9090', '-d', ""]
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            imagePullPolicy: IfNotPresent
            securityContext: {capabilities: {add: [NET_ADMIN]}, privileged: true, procMount: Default}
    containers:
        -
            name: user-container
            image: 'index.docker.io/library/nginx@sha256:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c'
            ports: [{name: user-port, containerPort: 80, protocol: TCP}]
            env: [{name: PORT, value: '80'}, {name: K_REVISION, value: ffff-hcxmd}, {name: K_CONFIGURATION, value: ffff}, {name: K_SERVICE, value: ffff}]
            resources: {requests: {cpu: 400m}}
            volumeMounts: [{name: varlog, mountPath: /var/log}, {name: default-token-bbrzm, readOnly: true, mountPath: /var/run/secrets/kubernetes.io/serviceaccount}]
            lifecycle: {preStop: {httpGet: {path: /wait-for-drain, port: 8022, scheme: HTTP}}}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
            imagePullPolicy: IfNotPresent
        -
            name: queue-proxy
            image: 'gcr.io/triggermesh/queue-7204c16e44715cd30f78443fb99e0f58@sha256:96d64c803ce7b145b4250ec89aabe4acde384b1510c8e4e117b590bfd15eb7a3'
            ports: [{name: queue-port, containerPort: 8012, protocol: TCP}, {name: queueadm-port, containerPort: 8022, protocol: TCP}, {name: queue-metrics, containerPort: 9090, protocol: TCP}]
            env: [{name: SERVING_NAMESPACE, value: sebgoa}, {name: SERVING_SERVICE, value: ffff}, {name: SERVING_CONFIGURATION, value: ffff}, {name: SERVING_REVISION, value: ffff-hcxmd}, {name: SERVING_AUTOSCALER, value: autoscaler}, {name: SERVING_AUTOSCALER_PORT, value: '8080'}, {name: CONTAINER_CONCURRENCY, value: '0'}, {name: REVISION_TIMEOUT_SECONDS, value: '300'}, {name: SERVING_POD, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: metadata.name}}}, {name: SERVING_POD_IP, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: status.podIP}}}, {name: SERVING_LOGGING_CONFIG, value: "{\n  \"level\": \"info\",\n  \"development\": false,\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"ts\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"iso8601\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}"}, {name: SERVING_LOGGING_LEVEL, value: info}, {name: SERVING_REQUEST_LOG_TEMPLATE}, {name: SERVING_REQUEST_METRICS_BACKEND}, {name: USER_PORT, value: '80'}, {name: SYSTEM_NAMESPACE, value: knative-serving}]
            resources: {requests: {cpu: 25m}}
            volumeMounts: [{name: default-token-bbrzm, readOnly: true, mountPath: /var/run/secrets/kubernetes.io/serviceaccount}]
            readinessProbe: {httpGet: {path: /health, port: 8022, scheme: HTTP}, timeoutSeconds: 10, periodSeconds: 1, successThreshold: 1, failureThreshold: 3}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            imagePullPolicy: IfNotPresent
        -
            name: istio-proxy
            image: 'docker.io/istio/proxyv2:1.0.7'
            args: [proxy, sidecar, '--configPath', /etc/istio/proxy, '--binaryPath', /usr/local/bin/envoy, '--serviceCluster', ffff-hcxmd, '--drainDuration', 45s, '--parentShutdownDuration', 1m0s, '--discoveryAddress', 'istio-pilot.istio-system:15007', '--discoveryRefreshDelay', 1s, '--zipkinAddress', 'zipkin.istio-system:9411', '--connectTimeout', 10s, '--proxyAdminPort', '15000', '--controlPlaneAuthPolicy', NONE]
            ports: [{name: http-envoy-prom, containerPort: 15090, protocol: TCP}]
            env: [{name: POD_NAME, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: metadata.name}}}, {name: POD_NAMESPACE, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: metadata.namespace}}}, {name: INSTANCE_IP, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: status.podIP}}}, {name: ISTIO_META_POD_NAME, valueFrom: {fieldRef: {apiVersion: v1, fieldPath: metadata.name}}}, {name: ISTIO_META_INTERCEPTION_MODE, value: REDIRECT}, {name: ISTIO_METAJSON_ANNOTATIONS, value: "{\"sidecar.istio.io/inject\":\"true\",\"traffic.sidecar.istio.io/includeOutboundIPRanges\":\"*\"}\n"}, {name: ISTIO_METAJSON_LABELS, value: "{\"app\":\"ffff-hcxmd\",\"pod-template-hash\":\"544bc9d587\",\"serving.knative.dev/configuration\":\"ffff\",\"serving.knative.dev/configurationGeneration\":\"1\",\"serving.knative.dev/configurationMetadataGeneration\":\"1\",\"serving.knative.dev/revision\":\"ffff-hcxmd\",\"serving.knative.dev/revisionUID\":\"0ad1904a-61f8-11e9-ac42-42010a800170\",\"serving.knative.dev/service\":\"ffff\"}\n"}]
            resources: {requests: {cpu: 10m}}
            volumeMounts: [{name: istio-envoy, mountPath: /etc/istio/proxy}, {name: istio-certs, readOnly: true, mountPath: /etc/certs/}]
            lifecycle: {preStop: {exec: {command: [sh, '-c', 'sleep 20; while [ $(netstat -plunt | grep tcp | grep -v envoy | wc -l | xargs) -ne 0 ]; do sleep 1; done']}}}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            imagePullPolicy: IfNotPresent
            securityContext: {runAsUser: 1337, readOnlyRootFilesystem: true, procMount: Default}
    restartPolicy: Always
    terminationGracePeriodSeconds: 300
    dnsPolicy: ClusterFirst
    serviceAccountName: default
    serviceAccount: default
    securityContext: {}
    schedulerName: default-scheduler
    tolerations:
        -
            key: node.kubernetes.io/not-ready
            operator: Exists
            effect: NoExecute
            tolerationSeconds: 300
        -
            key: node.kubernetes.io/unreachable
            operator: Exists
            effect: NoExecute
            tolerationSeconds: 300
    priority: 0

Steps to Reproduce the Problem

Apply the following Service manifest and observe that the resulting Pod fails to start:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
    annotations:
        serving.knative.dev/creator: 'system:serviceaccount:sebgoa:default'
        serving.knative.dev/lastModifier: 'system:serviceaccount:sebgoa:default'
    name: fooport
spec:
    runLatest:
        configuration:
            revisionTemplate:
                metadata:
                    creationTimestamp: null
                spec:
                    container:
                        image: nginx
                        name: ""
                        ports:
                            -
                                containerPort: 80
                        resources:
                            requests:
                                cpu: 400m
                    timeoutSeconds: 300

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 5
  • Comments: 25 (14 by maintainers)

Most upvoted comments

Knative needs to create a volume mounted at /var/log so that a fluentd sidecar can collect logs. That mount wipes out anything the image pre-configured in that directory, as illustrated below.
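
For illustration, a minimal sketch of the same failure mode in plain Kubernetes, outside Knative (the Pod name here is hypothetical): an emptyDir mounted at /var/log shadows the /var/log/nginx directory that the image ships, so nginx cannot open its log files.

apiVersion: v1
kind: Pod
metadata:
    name: nginx-varlog-demo    # hypothetical name, for illustration only
spec:
    volumes:
        -
            name: varlog
            emptyDir: {}
    containers:
        -
            name: nginx
            image: nginx
            volumeMounts:
                -
                    name: varlog
                    mountPath: /var/log    # hides the image's /var/log/nginx directory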

It is not clear to me that this is really a Knative bug; it looks more like an incompatibility between the container as packaged and the platform. It is possible to get the nginx container to run without rebuilding the image by simply updating Cmd/Args to do something like mkdir -p /var/log/nginx && nginx -g "daemon off;" (see the sketch below).
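
A sketch of that workaround applied to the Service from the reproduction steps above (untested as written; the sh -c wrapper is one way to chain the two commands):

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
    name: fooport
spec:
    runLatest:
        configuration:
            revisionTemplate:
                spec:
                    container:
                        image: nginx
                        # create the log directory before starting nginx;
                        # exec keeps nginx as PID 1 so it receives signals directly
                        command: [sh, '-c']
                        args: ['mkdir -p /var/log/nginx && exec nginx -g "daemon off;"']
                        ports:
                            -
                                containerPort: 80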

If you are rebuilding the nginx image anyway to update other configuration, you can also change the logging paths so they do not rely on the /var/log/nginx sub-directory, or update the default command in the image.

I do see how it would be slick to run nginx on Knative with little to no arguments, but I am not sure that is a common enough use case outside of a demo to warrant adding additional configuration/flags around the fluentd mount behavior.