kubernetes: ENV invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty

When applying a Deployment with a number of env vars, Kubernetes accepts it, but a subsequent apply after changing a var fails with an "Invalid value" error for a var that is not actually empty.

Removing some seemingly unrelated vars sometimes makes it work, but deleting the Deployment and applying the manifest again always results in a successful deploy.

Similar issues: https://github.com/kubernetes/kubernetes/issues/35134 https://github.com/kubernetes/kubernetes/issues/46826

BUG REPORT:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    name: test
  name: test
  namespace: dc1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: test
    spec:
      imagePullSecrets:
        - name: docker-artifactory-credential-secret
      containers:
      - image: alpine:test
        name: test
        ports:
        - containerPort: 9292
          name: http
        livenessProbe:
          httpGet:
            path: /
            port: 9292
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 60
          failureThreshold: 6
        resources:
          requests:
            memory: "128Mi"
            cpu: "25m"
          limits:
            memory: "512Mi"
            cpu: "100m"
        env:
          - name: OKTA_ASSERTION_CONSUMER_URL
            value: 'http://test/auth/saml/callback'
          - name: OKTA_ISSUER
            value: test
          - name: OKTA_TARGET_URL
            value: 'value'
          - name: OKTA_FINGERPRINT
            value: 'value'
          - name: OKTA_NAME_IDENTIFIER_FORMAT
            value: 'value'
          - name: APP_CURRENT_PATH
            value: "/test"
          - name: APP_INTERCEPT_MAIL
            value: 'false'
          - name: DATABASE_ADAPTER
            value: mysql2
          - name: DATABASE_DATABASE
            value: reports
          - name: DATABASE_HOST
            value: mysql-test
          - name: DATABASE_PORT
            value: '3306'
          - name: DATABASE_RECONNECT
            value: 'true'
          - name: DATABASE_USERNAME
            value: test
          - name: DOMAIN_NAME
            value: localhost
          - name: KEEN_ENABLED
            value: 'false'
          - name: KEEN_READ_TIMEOUT
            value: '5'
          - name: ERRORS_HOST
            value: http://errors.peertransfer.in
          - name: KEEN_ENABLED
            value: 'false'
          - name: KEEN_PROJECT_ID
            value: 3232333433243432131
          - name: KEEN_READ_TIMEOUT
            value: '5'
          - name: MEMCACHE_SERVERS
            value: memcached-test:11211
          - name: MONGO_DATABASE
            value: test
          - name: MONGO_HOSTS
            value: mongodb-test
          - name: MONGO_LOGGER
            value: 'false'
          - name: RACK_ENV
            value: development
          - name: S3_BUCKET
            value: testing-transfer-dev-s3
          - name: RACK_ENV
            value: development
          - name: UNICORN_PORT
            value: '9292'
          - name: UNICORN_TIMEOUT
            value: '30'
          - name: UNICORN_WORKERS
            value: '6'
          - name: WIKI_GIT_REPO
            value: ''
          - name: WIKI_PATH
            value: tmp/test
          - name: WIKI_SYNC
            value: 'false'
          - name: S3_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: s3_secret_access_key
          - name: S3_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: s3_access_key_id
          - name: PUSHER_KEY
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: pusher_key
          - name: KEEN_WRITE_KEY
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: keen_write_key
          - name: KEEN_READ_KEY
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: keen_read_key
          - name: KEEN_PROJECT_ID
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: keen_project_id
          - name: ERRORS_API_KEY
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: errors_api_key
          - name: APP_SESSION_SECRET
            valueFrom:
              secretKeyRef:
                name: test-app-secrets
                key: app_session_secret
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mysql-test
                key: mysql-password

Kubernetes version (use kubectl version):

kubectl version
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-beta.3", GitCommit:"0cd5ed469508e6dfc807ee6681561c845828917e", GitTreeState:"clean", BuildDate:"2017-03-11T04:20:22Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2+coreos.0", GitCommit:"79fee581ce4a35b7791fdd92e0fc97e02ef1d5c0", GitTreeState:"clean", BuildDate:"2017-04-19T23:13:34Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CoreOS
  • Install tools: tectonic-installer (terraform)

What happened:

kubectl apply -f templates/test.yaml 
deployment "test" configured
kubectl apply -f templates/test.yaml 
The Deployment "test" is invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty
kubectl apply -f templates/test.yaml 
The Deployment "test" is invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty
kubectl delete deploy test -n dc1
deployment "test" deleted
kubectl apply -f templates/test.yaml 
deployment "test" configured

What you expected to happen: The deployment is applied successfully without having to delete the existing Deployment first.

How to reproduce it: follow the steps above.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 13
  • Comments: 40 (10 by maintainers)

Most upvoted comments

This issue is also caused by changing the secret type. For example, if you have an env var in the deployment as a literal key/value, and then you change the same env var to read from a Kubernetes secret instead.

Deleting the deployment and running kubectl apply on your deployment.yaml resolves the issue.
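
For reference, a sketch of that delete-and-reapply workaround using the names from this report (it does cause downtime while the new pods come up):

kubectl delete deployment test -n dc1
kubectl apply -f templates/test.yaml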

This issue still occurs, and it seems to be due to a bug in how kubectl apply generates the patch request body.

What’s interesting is that changing between value <-> valueFrom works if you do a kubectl edit, but not if you do a kubectl apply. Because of this, I dug a bit deeper:

Take the following deployment as an example:

# nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

Apply this yaml to your cluster (kubectl apply -f nginx.yaml). Now run kubectl edit deployment nginx, and modify the env var, removing the valueFrom object, and replacing it with value: someString.
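
For clarity, after that edit the container's env entry should look roughly like this (a literal value, no valueFrom):

env:
- name: NODE_NAME
  value: someString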

The PATCH body that is generated by kubectl edit looks something like this:

{"spec": {"template": {"spec": {"$setElementOrder/containers": [{"name": "nginx"}],"containers": [{
  "name": "nginx",
  "$setElementOrder/env": [{"name": "NODE_NAME"}],
  "env": [
    {
      "name": "NODE_NAME",
      "value": "someString",
      "valueFrom": null
    }
  ]
}]}}}}

And this request to edit goes through fine. Note the "valueFrom": null in the request body, explicitly telling k8s to delete that field.

Now try to re-apply the original deployment (kubectl apply -f nginx.yaml). The following is the request body of the PATCH call by this apply operation:

{"spec": {"template": {"spec": {"$setElementOrder/containers": [{"name": "nginx"}],"containers": [{
  "name": "nginx",
  "$setElementOrder/env": [{"name": "NODE_NAME"}],
  "env": [
    {
      "name": "NODE_NAME",
      "valueFrom": {
        "fieldRef": {
          "fieldPath": "spec.nodeName"
        }
      }
    }
  ]
}]}}}}

And you receive the following error: The Deployment "nginx" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value: "": may not be specified when `value` is not empty

Note how in the PATCH request body for this apply command, the differing valueFrom object is provided, but "value": null is not in the request body, so k8s does not try to clear the value field from the object when applying the patch, hence the above error message.

Now we can fix this for the apply command if we edit the original nginx.yaml to include value: null alongside the valueFrom object:

...
        image: nginx:1.14.2
        env:
        - name: NODE_NAME
          value: null
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

Now if we do another apply with this modified nginx.yaml, it uses the following PATCH request body:

{"spec": {"template": {"spec": {"$setElementOrder/containers": [{"name": "nginx"}],"containers": [{
  "name": "nginx",
  "$setElementOrder/env": [{"name": "NODE_NAME"}],
  "env": [
    {
      "name": "NODE_NAME",
      "value": null,
      "valueFrom": {
        "fieldRef": {
          "fieldPath": "spec.nodeName"
        }
      }
    }
  ]
}]}}}}

And the apply works as expected with no errors (since now the apply request body has the "value": null in the PATCH request body).

It seems like kubectl has the logic when generating the PATCH request body for an edit command to include the someField: null when a field is being removed, but it does not perform the same logic when generating the PATCH request body for an apply command.

This seems like it’s probably a bug in kubectl’s apply behavior, leading to this unexpected behavior.

Edit: It appears this is not a bug but intended behavior, because kubectl edit does not update metadata.annotations.kubectl.kubernetes.io/last-applied-configuration, which is what kubectl uses with the apply command to generate the patch. With this in mind, perhaps kubectl edit's behavior should be changed to --save-config=true by default to prevent this generally unexpected behavior. It can be confusing because you can change a field's value with kubectl edit and a kubectl apply with the original yaml will re-apply the original value, but if you use kubectl edit for a modification like the case above, then kubectl apply no longer works properly because it doesn't delete the conflicting field added by kubectl edit. The behavior difference between those two cases is not very intuitive, which leads to the confusion.
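
If that is the case, a possible way to keep apply working after an interactive change is to let edit update the annotation as well, assuming your kubectl version supports the --save-config flag on edit:

# Update the last-applied-configuration annotation as part of the interactive edit
kubectl edit deployment nginx --save-config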

Recreating the deployment works, but it should not be necessary.

Here’s a way I handle this without creating an outage on a deployment (see the sketch after this list):

  1. Pause rollouts: kubectl rollout pause <resource>
  2. Edit your config – remove all env vars on the "offending" container (e.g. set env: [] on the container’s configuration). You do this either via kubectl edit, or, if you’re using templates/configs and applying those configs, by editing the file followed by kubectl apply -f <file>.
  3. Now revert back to the configuration that has the env vars you want, and apply that config.
  4. Resume rollouts: kubectl rollout resume <resource>
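
A sketch of that sequence using the deployment from this report (the file names are assumptions):

kubectl rollout pause deployment/test -n dc1
kubectl apply -f templates/test-no-env.yaml   # same manifest but with `env: []` on the container
kubectl apply -f templates/test.yaml          # original manifest with the desired env vars
kubectl rollout resume deployment/test -n dc1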

This issue is also caused by changing the secret type

This error message can also be caused by accidentally specifying both valueFrom (from the secret) and value (from a literal value) as keys. Once value is removed, the error goes away and kubectl apply works correctly.
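
A hypothetical example of that mistake, reusing the secret names from the manifest above; the fix is to delete the literal value line so that only valueFrom remains:

env:
- name: S3_SECRET_ACCESS_KEY
  value: "some-literal"        # remove this line
  valueFrom:
    secretKeyRef:
      name: test-app-secrets
      key: s3_secret_access_key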

I had the same problem. Reason: in my old deployment I defined some environment vars literally, but in the new deployment I was defining them from the secret.

Solution: I edited the current deployment using kubectl edit <DEPLOYMENT> and removed the literally defined vars, then ran kubectl apply -f <NEW DEPLOYMENT YML FILE>.

Have fun.

@Thematrixme I think the root cause of this is that resources were initially created with kubectl create and are later updated with kubectl apply. This is also mentioned in some of the related issues linked in this issue’s description.

I’m not able to reproduce the issue when following these steps:

  • Write this data to file.yaml:
apiVersion: v1
kind: List
items:
- apiVersion: v1
  metadata:
    name: my-secret
  data:
    mySecretKey: "Ym9ndXMK"
  kind: Secret
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: my-deployment
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 2
    selector:
      matchLabels:
        run: my-deployment
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          run: my-deployment
      spec:
        containers:
        - command:
          - /bin/bash
          env:
          - name: SOME_ENV_PARAM
            valueFrom:
              secretKeyRef:
                key: mySecretKey
                name: my-secret
          image: python:latest
          imagePullPolicy: Always
          name: my-deployment
          resources: {}
          stdin: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          tty: true
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  • Then kubectl apply -f file.yaml
  • Edit file.yaml, adjust env config to use a standard key/value:
          env:
          - name: SOME_ENV_PARAM
            value: "something"
  • Then kubectl apply -f file.yaml

If you use a kubectl create initially, and then try to apply the change later, you will run into:

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The Deployment "my-deployment" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value: "": may not be specified when `value` is not empty

and as far as I know, this is not a bug, since it’s known that applying config to a resource that did not initially use apply can cause issues (hence the printed warning). If there’s any issue to be opened here, it is perhaps that the error message could be made clearer so the end user knows the likely root cause… but given the warning that is already printed, I’m sure it won’t be too much of a priority 😆
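
In that situation, a sketch of how to get back to an apply-managed resource, assuming recreating it is acceptable: recreate it with --save-config so the last-applied-configuration annotation is written and later applies can compute the patch correctly.

kubectl delete -f file.yaml
kubectl create -f file.yaml --save-config
kubectl apply -f file.yaml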

The work-around I posted above can at least get you past this patch error and back on the right track to be able to run kubectl apply on the resource in the future, though.

I’m getting this error currently because my deployment YAML contains the value key (from a literal value), and I’m trying to overwrite it so that it’s set from a secret. Perhaps the way to go here is to show a more informative error message, but I would say it makes more sense to overwrite it directly in most cases.

I’m having this issue on 1.14. I can’t apply my deployment, changing env from a key-value pair to key-secret pair. Is there no other way to do this? I’d really hope to avoid a downtime on this!

Just adding to @gabrie30 - recreating the deployment works flawlessly (kubectl delete & kubectl apply)

For me, the reason was the same env var being declared twice.
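
An illustration of that situation, mirroring the duplicated KEEN_ENABLED / RACK_ENV entries in the manifest at the top of this issue:

env:
- name: KEEN_ENABLED
  value: 'false'
# ...other vars...
- name: KEEN_ENABLED   # the same name declared a second time further down
  value: 'false'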

Helm version 3.3.4 doesn’t handle the valueFrom -> value change during helm upgrade. I needed to manually modify the Deployment (remove the valueFrom) and then rerun the helm upgrade with the value.
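
A sketch of that manual Helm workaround, with hypothetical deployment, release, and chart names:

kubectl edit deployment my-test-service   # remove the valueFrom block by hand
helm upgrade my-release ./chart           # then rerun the upgrade that sets the literal value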

Numbers should be quoted so they are decoded as strings.

Just to mention my specific error:

Error: Deployment.apps "my-test-service" is invalid: spec.template.spec.containers[0].env[15].valueFrom.configMapKeyRef.name: Invalid value: "": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

This can also happen if you change the type of value.

For example, first apply:

        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              key: logLevel
              name: configs

Edit manually (with kubectl edit):

        - name: LOG_LEVEL
          value: DEBUG

Try another apply (update):

spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty

@ferrandinand I just changed the following and it works fine:

- name: KEEN_PROJECT_ID
  value: '3232333433243432131'

excuse me but why is this closed? /open /reopen

Has an issue been opened for this problem, i.e. the one explained after this specific issue was closed:

This issue is also caused by changing the secret type. For example, if you have an env var in the deployment as a key/value, then you change the same env var but try to read from a kubernetes secret.

Two workarounds have been given in the comments that follow, but neither of them seems to be the way this should actually work.

Hitting this issue. I don’t understand why this is breaking