envoy: Invalid path for TLS certificates, even though the certificates are accessible to the envoy user within the container
Title: LetsEncrypt TLS certificates referenced in envoy.yaml cause an "Invalid path" error at startup
Description: TLS certificates that are readable by the envoy user are not found when starting Envoy with Podman on Fedora 32. The container is started with the command below.
$ sudo podman run -p 80:8080 -p 443:8443 -p 3306:3306 \
-v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z \
-v /etc/letsencrypt/:/etc/letsencrypt/:Z \
--network dev --name dev-envoy --security-opt label=type:envoy.process \
-d envoyproxy/envoy:v1.15.0
The container cannot find the TLS certificates and returns the following error:
[2020-09-22 18:15:55.556][3][critical][main] [source/server/server.cc:101] error initializing configuration '/etc/envoy/envoy.yaml': Invalid path: /etc/letsencrypt/live/rebhu.com/cert.pem
[2020-09-22 18:15:55.557][3][info][main] [source/server/server.cc:704] exiting
Invalid path: /etc/letsencrypt/live/rebhu.com/cert.pem
These files are present in the container and readable by the container's envoy user. Here is the IRC-Matrix.org conversation I had with the developers of Podman. The envoy.yaml config and the TLS certificates have the same permissions, yet Envoy can read the configuration file but not the certificates. The following steps show how to get a shell as the envoy user inside the container and at least read the TLS certificate files.
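To compare what Envoy can and cannot read, it helps to look at owner, mode, and SELinux context side by side. A minimal check using the paths from this report (adjust for your host); note that certbot typically creates /etc/letsencrypt/live and /etc/letsencrypt/archive as root-only directories, so the parent directories' modes matter as much as the files' own:

```shell
# Owner, mode, and SELinux context of the file Envoy can read (envoy.yaml)
# versus the one it reports as an invalid path (cert.pem).
ls -lZ /etc/envoy/envoy.yaml
ls -lZ /etc/letsencrypt/live/rebhu.com/cert.pem
# A parent directory that is not traversable also makes the open() fail.
ls -ldZ /etc/letsencrypt /etc/letsencrypt/live
```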
Before running the following commands, comment out the listener_mysql and listener_https sections in the configuration attached below; otherwise the container will exit with the same error.
$ sudo podman container rm dev-envoy
$ sudo podman run -p 80:8080 \
-v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z \
-v /etc/letsencrypt/:/etc/letsencrypt/:Z \
--network dev --name dev-envoy --security-opt label=type:envoy.process \
-d envoyproxy/envoy:v1.15.0
$ sudo podman exec -it --user envoy dev-envoy bash
envoy@<container-id>$ cat /etc/letsencrypt/live/rebhu.com/cert.pem
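The interactive check above can also be done in one shot; this prints `readable` only if the envoy user can actually open the file (container name `dev-envoy` as above):

```shell
# Run a read test as the container's envoy user without an interactive shell.
sudo podman exec --user envoy dev-envoy \
  sh -c 'test -r /etc/letsencrypt/live/rebhu.com/cert.pem && echo readable'
```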
Any extra documentation required to understand the issue.
Here is a short introduction to running Envoy with Podman on Fedora 32.
$ sudo dnf install podman dnsmasq udica
$ sudo podman pull envoyproxy/envoy:v1.15.0
$ sudo podman create -p 80:8080 -p 443:8443 -p 3306:3306 \
-v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z -v /etc/letsencrypt/:/etc/letsencrypt/:Z \
--network dev --name my-envoy envoyproxy/envoy:v1.15.0
Using run in place of create returns the following errors:
# chown: changing ownership of '/dev/stdout': Permission denied
# chown: changing ownership of '/dev/stderr': Permission denied
We can avoid this by following these steps.
$ sudo podman inspect my-envoy > my-envoy.udica.json
$ sudo udica -j my-envoy.udica.json my-envoy
This generates an SELinux *.cil policy file. Use this file to activate an SELinux policy specific to the Envoy container. Before activating the policy you will have to resolve some Access Vector Cache (AVC) denials, which can be done by editing the newly generated *.cil file; a reference file is included below. You can monitor AVC denials with the command below.
$ sudo cat /var/log/audit/audit.log | grep AVC
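Grepping audit.log works, but on Fedora ausearch decodes the records and can filter by time, and audit2allow can draft allow rules from the denials to merge into the .cil file by hand. Both are standard Fedora tools (ausearch from audit, audit2allow from policycoreutils-python-utils), though it is worth double-checking the package names on your system:

```shell
# Decoded AVC denials from the last few minutes.
sudo ausearch -m AVC -ts recent
# Draft allow rules from the denials; review before adding them to the .cil file.
sudo ausearch -m AVC -ts recent | audit2allow
```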
After editing the *.cil file, install the SELinux policy using the following command.
$ sudo semodule -i my-envoy.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
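A quick way to confirm the module actually loaded (the module name my-envoy comes from the udica invocation above):

```shell
# List installed SELinux modules and look for ours.
sudo semodule -l | grep my-envoy
```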
Our SELinux policy is now loaded and we can use it to start the container. Notice the difference in the executed command: -p 443:8443 and -p 3306:3306 have been removed, as these ports are the reason for raising this issue. Before running the following command, comment out the listener_mysql and listener_https sections in the configuration attached below; otherwise the container will exit with the same error.
$ sudo podman run -p 80:8080 \
-v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z \
-v /etc/letsencrypt/:/etc/letsencrypt/:Z \
--network dev --name dev-envoy --security-opt label=type:my-envoy.process -d \
envoyproxy/envoy:v1.15.0
`*.cil` file for starting Envoy
This is the *.cil file generated by the udica tool. However, udica can only generate rules from what it sees in the podman inspect output; it knows nothing about the application-specific error. This policy allows the HTTP(S) and MySQL port bindings and access to the mounted volumes.
(block envoy
(blockinherit container)
(blockinherit restricted_net_container)
(allow process process ( capability ( audit_write chown dac_override fowner fsetid kill mknod net_bind_service net_raw setfcap setgid setpcap setuid sys_chroot )))
(allow process mysqld_port_t ( tcp_socket ( name_bind )))
(allow process http_cache_port_t ( tcp_socket ( name_bind )))
(allow process http_port_t ( tcp_socket ( name_bind )))
(allow process etc_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write )))
(allow process etc_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write )))
(allow process etc_t ( sock_file ( append getattr open read write )))
)
Before installing the SELinux policy (sudo semodule -i ...), append the following lines to the file above.
(allow process container_runtime_t ( fifo_file ( setattr )))
(allow process unreserved_port_t (tcp_socket (name_bind)))
Here is the Envoy configuration I am using:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      protocol: TCP
      address: 127.0.0.1
      port_value: 9901
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: http_route
            virtual_hosts:
            - name: http_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/.well-known"
                route:
                  cluster: service_letsencrypt
              - match:
                  prefix: "/.well-known/"
                route:
                  cluster: service_letsencrypt
          http_filters:
          - name: envoy.filters.http.router
  - name: listener_https
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 8443
    filter_chains:
    - filter_chain_match:
        server_names: ["rebhu.com"]
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/etc/letsencrypt/live/rebhu.com/fullchain.pem"
              private_key:
                filename: "/etc/letsencrypt/live/rebhu.com/privkey.pem"
      filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_https
          route_config:
            virtual_hosts:
            - name: wp_https
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_letsencrypt
  - name: listener_mysql
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 3306
    filter_chains:
    - filter_chain_match:
        server_names: ["db.rebhu.com"]
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/etc/letsencrypt/live/rebhu.com/cert.pem"
              private_key:
                filename: "/etc/letsencrypt/live/rebhu.com/privkey.pem"
      filters:
      - name: envoy.filters.network.mysql_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.mysql_proxy.v3.MySQLProxy
          stat_prefix: ingress_mysql
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp_mysql
          cluster: service_mysql
  clusters:
  - name: service_letsencrypt
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_letsencrypt
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: dev-nginx
                port_value: 80
  - name: service_mysql
    connect_timeout: 0.25s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_mysql
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: dev-mysql
                port_value: 3306
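When testing permission fixes, Envoy's validate mode gives a quick feedback loop: it parses the configuration (which should reproduce the Invalid path error if the referenced certificate files are unreadable) and then exits without starting listeners or a long-running container. A sketch, using the same mounts as the run commands above:

```shell
# Parse the config and exit; no listeners are started and no ports are bound.
sudo podman run --rm \
  -v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z \
  -v /etc/letsencrypt/:/etc/letsencrypt/:Z \
  envoyproxy/envoy:v1.15.0 \
  envoy --mode validate -c /etc/envoy/envoy.yaml
```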
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 20 (8 by maintainers)
@lamualfa-client you might find this helpful.
@phlax suggested three possible fixes for this on #14877, among them making the certificate files possible for the envoy user to work with, and passing -e ENVOY_UID=0 while starting Envoy. I have tested 1 and 3, and these two work for me.
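For reference, the ENVOY_UID fix needs only one extra flag: the official Envoy image's entrypoint honors the ENVOY_UID environment variable, and setting it to 0 runs Envoy as root inside the container so it can traverse the LetsEncrypt directories. This is the run command from the start of this report with `-e ENVOY_UID=0` added:

```shell
# Same run command as the original, with Envoy running as root in the container.
sudo podman run -p 80:8080 -p 443:8443 -p 3306:3306 \
  -e ENVOY_UID=0 \
  -v /etc/envoy/envoy.yaml:/etc/envoy/envoy.yaml:Z \
  -v /etc/letsencrypt/:/etc/letsencrypt/:Z \
  --network dev --name dev-envoy --security-opt label=type:envoy.process \
  -d envoyproxy/envoy:v1.15.0
```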