kubernetes: ENV invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty
When applying a deployment with a number of env vars, Kubernetes accepts it, but applying a subsequent change to a var fails with an error claiming a var value is invalid (even though it is not empty). Removing some seemingly random vars sometimes makes it work and sometimes not, but deleting the deployment and applying it again always deploys successfully.
Similar issues: https://github.com/kubernetes/kubernetes/issues/35134 https://github.com/kubernetes/kubernetes/issues/46826
BUG REPORT:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    name: test
  name: test
  namespace: dc1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: test
    spec:
      imagePullSecrets:
        - name: docker-artifactory-credential-secret
      containers:
        - image: alpine:test
          name: test
          ports:
            - containerPort: 9292
              name: http
          livenessProbe:
            httpGet:
              path: /
              port: 9292
            initialDelaySeconds: 60
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 60
            failureThreshold: 6
          resources:
            requests:
              memory: "128Mi"
              cpu: "25m"
            limits:
              memory: "512Mi"
              cpu: "100m"
          env:
            - name: OKTA_ASSERTION_CONSUMER_URL
              value: 'http://test/auth/saml/callback'
            - name: OKTA_ISSUER
              value: test
            - name: OKTA_TARGET_URL
              value: 'value'
            - name: OKTA_FINGERPRINT
              value: 'value'
            - name: OKTA_NAME_IDENTIFIER_FORMAT
              value: 'value'
            - name: APP_CURRENT_PATH
              value: '/test'
            - name: APP_INTERCEPT_MAIL
              value: 'false'
            - name: DATABASE_ADAPTER
              value: mysql2
            - name: DATABASE_DATABASE
              value: reports
            - name: DATABASE_HOST
              value: mysql-test
            - name: DATABASE_PORT
              value: '3306'
            - name: DATABASE_RECONNECT
              value: 'true'
            - name: DATABASE_USERNAME
              value: test
            - name: DOMAIN_NAME
              value: localhost
            - name: KEEN_ENABLED
              value: 'false'
            - name: KEEN_READ_TIMEOUT
              value: '5'
            - name: ERRORS_HOST
              value: http://errors.peertransfer.in
            - name: KEEN_ENABLED
              value: 'false'
            - name: KEEN_PROJECT_ID
              value: 3232333433243432131
            - name: KEEN_READ_TIMEOUT
              value: '5'
            - name: MEMCACHE_SERVERS
              value: memcached-test:11211
            - name: MONGO_DATABASE
              value: test
            - name: MONGO_HOSTS
              value: mongodb-test
            - name: MONGO_LOGGER
              value: 'false'
            - name: RACK_ENV
              value: development
            - name: S3_BUCKET
              value: testing-transfer-dev-s3
            - name: RACK_ENV
              value: development
            - name: UNICORN_PORT
              value: '9292'
            - name: UNICORN_TIMEOUT
              value: '30'
            - name: UNICORN_WORKERS
              value: '6'
            - name: WIKI_GIT_REPO
              value: ''
            - name: WIKI_PATH
              value: tmp/test
            - name: WIKI_SYNC
              value: 'false'
            - name: S3_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: s3_secret_access_key
            - name: S3_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: s3_access_key_id
            - name: PUSHER_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: pusher_key
            - name: KEEN_WRITE_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: keen_write_key
            - name: KEEN_READ_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: keen_read_key
            - name: KEEN_PROJECT_ID
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: keen_project_id
            - name: ERRORS_API_KEY
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: errors_api_key
            - name: APP_SESSION_SECRET
              valueFrom:
                secretKeyRef:
                  name: test-app-secrets
                  key: app_session_secret
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-test
                  key: mysql-password
```
Kubernetes version (use `kubectl version`):
```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-beta.3", GitCommit:"0cd5ed469508e6dfc807ee6681561c845828917e", GitTreeState:"clean", BuildDate:"2017-03-11T04:20:22Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2+coreos.0", GitCommit:"79fee581ce4a35b7791fdd92e0fc97e02ef1d5c0", GitTreeState:"clean", BuildDate:"2017-04-19T23:13:34Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
Environment:
- Cloud provider or hardware configuration: AWS
- OS (e.g. from /etc/os-release): CoreOS
- Install tools: tectonic-installer (terraform)
What happened:
```console
$ kubectl apply -f templates/test.yaml
deployment "test" configured
$ kubectl apply -f templates/test.yaml
The Deployment "test" is invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty
$ kubectl apply -f templates/test.yaml
The Deployment "test" is invalid: spec.template.spec.containers[0].env[14].valueFrom: Invalid value: "": may not be specified when `value` is not empty
$ kubectl delete deploy test -n dc1
deployment "test" deleted
$ kubectl apply -f templates/test.yaml
deployment "test" configured
```
What you expected to happen: the deployment is applied successfully, without having to delete it first.
How to reproduce it: follow the steps above.
About this issue
- State: closed
- Created 7 years ago
- Reactions: 13
- Comments: 40 (10 by maintainers)
Commits related to this issue
- Based on the troubles with deploying and the errors incurred during failed rollouts, I believe we are hitting this issue - https://github.com/kubernetes/kubernetes/issues/46861. The TL;DR is that due... — committed to hashgraph/hedera-json-rpc-relay by rustyShacklefurd 2 years ago
- Configmap fix env from change (#767) * Based on the troubles with deploying and the errors incurred during failed rollouts, I believe we are hitting this issue - https://github.com/kubernetes/kuberne... — committed to hashgraph/hedera-json-rpc-relay by rustyShacklefurd 2 years ago
This issue is also caused by changing the source of a variable: for example, if you have an env var in the deployment as a literal key/value, and then change the same env var to read from a Kubernetes secret. Deleting the deployment and running `kubectl apply` on your deployment.yaml resolves the issue.
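For illustration, a minimal sketch of the kind of change that triggers this (the var, secret name, and key here are hypothetical):

```yaml
# First applied version: literal value
env:
  - name: DATABASE_PASSWORD
    value: changeme

# Second version: same var, now read from a secret. Applying this over
# the first version fails with the "valueFrom ... may not be specified
# when `value` is not empty" patch error.
env:
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: database_password
```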
This issue still occurs, and it seems to be due to a bug in how `kubectl apply` generates the patch request body. What's interesting is that changing between `value` <-> `valueFrom` works if you do a `kubectl edit`, but not if you do a `kubectl apply`. Because of this, I dug a bit deeper. Take the following deployment as an example:
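(The original manifest was lost in extraction; below is a minimal sketch consistent with the rest of the comment. The env var name and the `nginx-secrets` secret are assumptions.)

```yaml
# nginx.yaml -- minimal sketch; one env var set via valueFrom
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          env:
            - name: SOME_VAR
              valueFrom:
                secretKeyRef:
                  name: nginx-secrets   # hypothetical secret
                  key: some_var
```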
Apply this yaml to your cluster (`kubectl apply -f nginx.yaml`). Now run `kubectl edit deployment nginx`, and modify the env var, removing the `valueFrom` object and replacing it with `value: someString`. The PATCH body that is generated by kubectl edit looks something like this:
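(The original patch body wasn't preserved; this is a sketch following the deployment above.)

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "env": [
              {
                "name": "SOME_VAR",
                "value": "someString",
                "valueFrom": null
              }
            ]
          }
        ]
      }
    }
  }
}
```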
And this request to edit goes through fine. Note the `"valueFrom": null` in the request body, explicitly telling k8s to delete that field. Now try to re-apply the original deployment (`kubectl apply -f nginx.yaml`). The following is the request body of the PATCH call made by this apply operation:
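(Again a reconstructed sketch; note what is missing from it.)

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "env": [
              {
                "name": "SOME_VAR",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "nginx-secrets",
                    "key": "some_var"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
```

And you receive the following error: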
```
The Deployment "nginx" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value: "": may not be specified when `value` is not empty
```
Note how in the PATCH request body for this apply command, the differing `valueFrom` object is provided, but `"value": null` is not in the request body, so k8s does not try to clear the `value` field from the object when applying the patch, hence the above error message. Now we can fix this for the apply command if we edit the original `nginx.yaml` to include `value: null` alongside the `valueFrom` object:
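(A sketch of the edited env entry, continuing the assumptions above:)

```yaml
env:
  - name: SOME_VAR
    value: null          # explicitly tells apply to clear the literal value
    valueFrom:
      secretKeyRef:
        name: nginx-secrets
        key: some_var
```

Now if we do another apply with this modified nginx.yaml, it uses a PATCH request body like the following (again a sketch):

```json
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "env": [
              {
                "name": "SOME_VAR",
                "value": null,
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "nginx-secrets",
                    "key": "some_var"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
```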
And the apply works as expected with no errors (since now the apply request body has `"value": null` in the PATCH request body). It seems like kubectl has logic, when generating the PATCH request body for an `edit` command, to include `someField: null` when a field is being removed, but it does not perform the same logic when generating the PATCH request body for an `apply` command. This seems like it's probably a bug in kubectl's `apply` behavior, leading to this unexpected behavior.

Edit: It appears this is not a bug, but intended behavior, because `kubectl edit` does not modify `metadata.annotations.kubectl.kubernetes.io/last-applied-configuration` appropriately, which is what kubectl uses with the `apply` command to generate the patch. With this in mind, perhaps `kubectl edit`'s behavior should be changed to `--save-config=true` by default to prevent this generally unexpected behavior. This can be confusing: you can edit some field with `kubectl edit` to change its value, and a `kubectl apply` with the original yaml will re-apply the original value; but if you do a `kubectl edit` to modify something like the case above, then `kubectl apply` no longer works properly because it doesn't delete the conflicting field added via `kubectl edit`. The behavior difference between those two cases is not very intuitive, leading to the confusion.

Recreating the deployment works, but it should not be necessary.
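(Relating to the `last-applied-configuration` point above: kubectl has subcommands for inspecting and resetting that annotation, which can help when `edit` and `apply` get out of sync. A quick sketch, assuming the nginx deployment from earlier:)

```console
# Show the annotation that kubectl apply diffs against
$ kubectl apply view-last-applied deployment/nginx

# After a kubectl edit, bring the annotation back in sync with the file
$ kubectl apply set-last-applied -f nginx.yaml --create-annotation=true
```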
Here's a way I handle this without creating an outage on a deployment (a sketch follows these steps):

1. First, apply the deployment with the env vars removed (i.e. `env: []` on the container's configuration) – you do this either via `kubectl edit`, or, if you're using templates/configs and applying those configs, by editing the file followed by `kubectl apply -f <file>`.
2. Then apply the deployment config with the `env` vars set that you desire. Apply that config.
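(A sketch of the two-step workaround; the file names are hypothetical:)

```console
# Step 1: apply a version of the deployment with env: [] on the container
$ kubectl apply -f deployment-no-env.yaml

# Step 2: apply the desired version, with the env vars set as intended
$ kubectl apply -f deployment.yaml
```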
This error message can also be caused by accidentally specifying both `valueFrom` (from the secret) and `value` (from a literal value) as keys on the same env var. Once `value` is removed, the error goes away and `kubectl apply` works correctly.

I had the same problem. Reason: in my old deployment I defined some environment vars literally, but in the new deployment I defined them in a secret.
Solution: I edited the current deployment using `kubectl edit <DEPLOYMENT>` and removed the literally defined vars, then ran `kubectl apply -f <NEW DEPLOYMENT YML FILE>`. Have fun.
@Thematrixme I think the root cause of this is that resources were initially created with `kubectl create` and are later being updated with `kubectl apply`. This is also mentioned in some of the related issues linked in this issue's description. I'm not able to reproduce the issue when following these steps (a sketch of the `env` change follows this list):

1. Create `file.yaml`.
2. `kubectl apply -f file.yaml`
3. Edit `file.yaml`, adjusting the `env` config to use a standard key/value.
4. `kubectl apply -f file.yaml`
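(The manifests from these steps weren't preserved; a minimal sketch of the `env` change between steps 1 and 3, with hypothetical names:)

```yaml
# Step 1 version of file.yaml: env var read from a secret
env:
  - name: SOME_VAR
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: some_var

# Step 3 version: the same var as a standard key/value
env:
  - name: SOME_VAR
    value: someString
```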
If you use `kubectl create` initially, and then try to apply the change later, you will run into the same error; and as far as I know this is not a bug, since it's known that `apply`ing config to a resource that did not initially use `apply` can cause issues (hence the printed warning). If there's any issue to be opened here, perhaps the error message could be even more clear, to let the end user know the likely root cause… but given the warning message printed, I'm sure it won't be too much of a priority 😆 The work-around I posted above can at least get you past this patch error and back on the right track to be able to run `kubectl apply` on the resource in future, though.

I'm getting this error currently because my deployment YAML contains the `value` key (a literal value), and I'm trying to overwrite it so that it's set from a secret. Perhaps the way to go here is to show a more informative error message, but I would say it makes more sense to just overwrite it directly in most cases.

I'm having this issue on 1.14. I can't `apply` my deployment when changing an env var from a key/value pair to a key/secret pair. Is there no other way to do this? I'd really hope to avoid downtime on this!

Just adding to @gabrie30 – recreating the deployment works flawlessly (`kubectl delete` & `kubectl apply`).
For me, the reason was the same env var declared twice.
Helm version 3.3.4 doesn't handle the `valueFrom` -> `value` change with `helm upgrade`. I needed to manually modify the Deployment (remove the `valueFrom`) and then rerun the `helm upgrade` with the `value`.

Numbers should be quoted so that they are decoded as strings.
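For example (note that `KEEN_PROJECT_ID` in the manifest at the top of this issue is an unquoted number):

```yaml
# wrong: decoded as an integer; env var values must be strings
- name: DATABASE_PORT
  value: 3306

# right: quoted, decoded as a string
- name: DATABASE_PORT
  value: '3306'
```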
Just to mention my specific error: this can also happen if you change the type of a value. For example: first apply a manifest, edit it manually (with `kubectl edit`), then try another apply (update); a sketch follows.
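(The original snippets weren't preserved; a plausible sketch of the sequence, with hypothetical names:)

```yaml
# 1. First apply: env var as a literal value
- name: SOME_VAR
  value: someString

# 2. Edit manually (kubectl edit): switch it to a secret reference
- name: SOME_VAR
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: some_var

# 3. Another apply of the original manifest now fails with
#    "valueFrom: Invalid value: ... may not be specified when `value` is not empty"
```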
@ferrandinand I just changed the following and it works fine.
Excuse me, but why is this closed? /open /reopen
Has there been an issue opened for this problem, i.e. the one explained after this specific issue was closed? I ask because two workarounds have been given in the comments above, but neither of them seems to be the way this should work.
Hitting this issue. I don't understand why this is breaking.