kustomize: Error merging common config with multiple microservices' configMaps
I have a configMapGenerator for each microservice, like this:
configMapGenerator:
- name: env-vars
  literals:
  - MICRO_SERVER_NAME=accountsrv
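For context, each service's kustomization.yaml looks roughly like this. A sketch only: apart from the configMapGenerator above, the namePrefix and resource list are assumptions inferred from the repository layout and from the generated names (accountenv-vars, proxyenv-vars, ...) in the error output below:

# account-srv/kustomization.yaml (sketch; namePrefix and resources are inferred, not verbatim)
namePrefix: account
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: env-vars
  literals:
  - MICRO_SERVER_NAME=accountsrv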
I want to merge the common config below with each microservice's configMap:
configMapGenerator:
- name: env-vars
  behavior: merge
  literals:
  - MICRO_SERVER_ADDRESS=0.0.0.0:8080
  - MICRO_BROKER_ADDRESS=0.0.0.0:10001
  - APP_ENV=development
  - CONFIG_DIR=/config
  - CONFIG_FILE=config.yaml
  - MICRO_LOG_LEVEL=debug
  - MICRO_CLIENT_RETRIES=3
  - MICRO_CLIENT_REQUEST_TIMEOUT=5s
I am getting this error: accumulating resources: recursed accumulation of ... found multiple objects
# kustomize build deploy/overlays/e2e/ | sed -e "s|\$(NS)|default|g" -e "s|\$(IMAGE_VERSION)|v0.2.5|g" | kubectl apply -f -
Error: accumulating resources: recursed accumulation of path '../../bases/micros': merging from generator &{0xc0004aa2a0 0xc00034e580 { } {map[] map[] false} {{ env-vars merge {[MICRO_SERVER_ADDRESS=0.0.0.0:8080 MICRO_BROKER_ADDRESS=0.0.0.0:10001 APP_ENV=development CONFIG_DIR=/config CONFIG_FILE=config.yaml MICRO_LOG_LEVEL=debug MICRO_CLIENT_RETRIES=3 MICRO_CLIENT_REQUEST_TIMEOUT=5s] [] [] }}}}: found multiple objects [{"apiVersion":"v1","data":{"CORS_ALLOWED_HEADERS":"Authorization,Content-Type","CORS_ALLOWED_METHODS":"POST,GET","CORS_ALLOWED_ORIGINS":"*","MICRO_API_ENABLE_RPC":"true","MICRO_API_HANDLER":"rpc","MICRO_API_NAMESPACE":"","MICRO_LOG_LEVEL":"debug","MICRO_SERVER_NAME":"gatewaysrv"},"kind":"ConfigMap","metadata":{"annotations":{"org":"acmeCorporation"},"labels":{"app.kubernetes.io/component":"microservice","app.kubernetes.io/instance":"gateway-abcxzy","app.kubernetes.io/managed-by":"kustomize","app.kubernetes.io/name":"gateway","app.kubernetes.io/part-of":"micro-starter-kit"},"name":"gatewayenv-vars"}}{nsfx:true,beh:unspecified} {"apiVersion":"v1","data":{"MICRO_PROXY_PROTOCOL":"grpc","MICRO_SERVER_NAME":"proxysrv"},"kind":"ConfigMap","metadata":{"annotations":{"org":"acmeCorporation"},"labels":{"app.kubernetes.io/component":"microservice","app.kubernetes.io/instance":"proxy-abcxzy","app.kubernetes.io/managed-by":"kustomize","app.kubernetes.io/name":"proxy","app.kubernetes.io/part-of":"micro-starter-kit"},"name":"proxyenv-vars"}}{nsfx:true,beh:unspecified} {"apiVersion":"v1","data":{"DATABASE_HOST":"$(DATABASE_ENDPOINT)","MICRO_SERVER_NAME":"accountsrv"},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/component":"microservice","app.kubernetes.io/instance":"account-srv-abcxzy","app.kubernetes.io/name":"account-srv"},"name":"accountenv-vars"}}{nsfx:true,beh:unspecified} {"apiVersion":"v1","data":{"MICRO_SERVER_NAME":"emailersrv"},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/component":"microservice","app.kubernetes.io/instance":"emailer-srv-abcxzy","app.kubernetes.io/name":"emailer-srv"},"name":"emailerenv-vars"}}{nsfx:true,beh:unspecified} {"apiVersion":"v1","data":{"MICRO_SERVER_NAME":"greetersrv"},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/component":"microservice","app.kubernetes.io/instance":"greeter-srv-abcxzy","app.kubernetes.io/name":"greeter-srv"},"name":"greeterenv-vars"}}{nsfx:true,beh:unspecified}] that could accept merge of ~G_v1_ConfigMap|~X|env-vars
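For reference, this is roughly the parent kustomization the merge generator lives in, reconstructed as a sketch from the error path ../../bases/micros and the directory tree below (the resource list is inferred, not copied from the repo):

# deploy/bases/micros/kustomization.yaml (sketch)
resources:
- account-srv
- emailer-srv
- gateway
- greeter-srv
- proxy
configMapGenerator:
- name: env-vars
  behavior: merge
  literals:
  - MICRO_SERVER_ADDRESS=0.0.0.0:8080
  # ...remaining common literals as above

behavior: merge needs kustomize to find exactly one previously generated ConfigMap named env-vars among the accumulated resources. Here each of the five services has already generated its own (differently prefixed) env-vars, so the merge target is ambiguous and kustomize reports "found multiple objects ... that could accept merge".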
Source: https://github.com/xmlking/micro-starter-kit/tree/develop/deploy/bases/micros
Is there any alternative way to accomplish this?
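One alternative that avoids merge entirely is to generate the common settings as a separate ConfigMap and let every deployment reference both. A minimal sketch, where common-env-vars is a hypothetical name, not one from the repo (kustomize's name-reference transformer rewrites the generated, hashed name into the pod spec):

# deploy/bases/micros/kustomization.yaml (sketch)
configMapGenerator:
- name: common-env-vars   # hypothetical name, not from the repo
  literals:
  - MICRO_SERVER_ADDRESS=0.0.0.0:8080
  - MICRO_BROKER_ADDRESS=0.0.0.0:10001
  # ...remaining common literals

and in each deployment's container spec:

envFrom:
- configMapRef:
    name: common-env-vars   # resolved to the generated hashed name by kustomize
- configMapRef:
    name: env-vars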
Structure
micros
├── account-srv
│ ├── config
│ │ └── config.yaml
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
├── emailer-srv
│ ├── config
│ │ └── config.yaml
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
├── gateway
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
├── greeter-srv
│ ├── config
│ │ └── config.yaml
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ └── service.yaml
├── kconfig.yaml
├── kustomization.yaml
└── proxy
├── deployment.yaml
├── kustomization.yaml
└── service.yaml
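Given this layout, another option in kustomize v3.7.0 and newer is to move the common generator into a component and include it from each service, so the merge always sees exactly one env-vars. A sketch, assuming a hypothetical micros/common directory:

# micros/common/kustomization.yaml (sketch; path is an assumption)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

configMapGenerator:
- name: env-vars
  behavior: merge
  literals:
  - MICRO_SERVER_ADDRESS=0.0.0.0:8080
  # ...remaining common literals

Each service's kustomization.yaml would then add:

components:
- ../common

Because the component is applied inside each service, behavior: merge targets only that service's env-vars, and the prefixed per-service ConfigMaps no longer collide.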
About this issue
- State: closed
- Created 5 years ago
- Reactions: 2
- Comments: 17 (5 by maintainers)
This is not stale. It is a really useful feature request.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
When using your example with the latest version of kustomize, it seems to work OK; check here. We must be missing a piece of information.