ingress-nginx: An error occurred between opentelemetry modules
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
bash-5.1$ ./nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.1.3
Build: 9d3a285f19a704524439c75b947e2189406565ab
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
[root@trace1 ingress-nginx]# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.8", GitCommit:"7061dbbf75f9f82e8ab21f9be7e8ffcaae8e0d44", GitTreeState:"clean", BuildDate:"2022-03-16T14:10:06Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.8", GitCommit:"7061dbbf75f9f82e8ab21f9be7e8ffcaae8e0d44", GitTreeState:"clean", BuildDate:"2022-03-16T14:04:34Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration:
- baremetal, kubernetes on centos7
[root@trace1 ingress-nginx]# cat /etc/*release
CentOS Linux release 7.9.2009 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
CentOS Linux release 7.9.2009 (Core)
CentOS Linux release 7.9.2009 (Core)
- Kernel (e.g. uname -a):
[root@trace1 ingress-nginx]# uname -a
Linux trace1 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 23 16:47:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
- Install tools (how/where the cluster was created, e.g. kubeadm/kops/minikube/kind):
  - kubespray 2.18
  - helm chart
- Basic cluster related info:
kubectl get nodes -o wide
[root@trace1 ingress-nginx]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
trace1   Ready    control-plane,master   3d    v1.22.8
trace2   Ready    <none>                 3d    v1.22.8
- How was the ingress-nginx-controller installed:
- If helm was used then please show output of
helm ls -A | grep -i ingress
- The controller was installed with helm, using the following commands:
git clone https://github.com/kubernetes/ingress-nginx.git
cd ingress-nginx/charts/ingress-nginx
cat << EOF >> extra.yml
extraModules:
- name: opentelemetry
image: gcr.io/k8s-staging-ingress-nginx/opentelemetry:v20220331-controller-v1.1.2-36-g7517b7ecf
EOF
helm install ingress-nginx . -f extra.yml
[root@trace1 ingress-nginx]# helm ls -A
NAME            NAMESPACE   REVISION   UPDATED                                    STATUS     CHART                  APP VERSION
ingress-nginx   default     1          2022-04-06 05:03:26.400791535 +0000 UTC    deployed   ingress-nginx-4.0.19   1.1.3
- If helm was used then please show output of
helm -n <ingresscontrollernamespace> get values <helmreleasename>
[root@trace1 ingress-nginx]# helm get values ingress-nginx
USER-SUPPLIED VALUES:
null
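Since the user-supplied values show as null above even though extra.yml was passed with -f, it is worth cross-checking what helm actually recorded and whether the rendered Deployment contains the opentelemetry init container. A minimal sketch, using the release name and namespace shown in this report:
# values helm recorded for the release (should show the extraModules entry)
helm get values ingress-nginx
# does the rendered manifest contain the opentelemetry init container?
helm get manifest ingress-nginx | grep -i -A3 opentelemetry
# names of the init containers on the deployed controller
kubectl -n default get deploy ingress-nginx-controller -o jsonpath='{.spec.template.spec.initContainers[*].name}'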
- If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
- If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
- Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
[root@trace1 ingress-nginx]# kubectl get po -n default
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-5c5846d6f7-2b7gr   1/1     Running   0          20m
[root@trace1 ingress-nginx]# kubectl describe po ingress-nginx-controller-5c5846d6f7-2b7gr
Name: ingress-nginx-controller-5c5846d6f7-2b7gr
Namespace: default
Priority: 0
Node: trace2/192.168.16.69
Start Time: Wed, 06 Apr 2022 05:03:28 +0000
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=5c5846d6f7
Annotations: cni.projectcalico.org/containerID: 6446671f8ac137a1024148588551902e25e715c3a22f35cbed1544a9a096682a
cni.projectcalico.org/podIP: 10.233.94.74/32
cni.projectcalico.org/podIPs: 10.233.94.74/32
Status: Running
IP: 10.233.94.74
IPs:
IP: 10.233.94.74
Controlled By: ReplicaSet/ingress-nginx-controller-5c5846d6f7
Init Containers:
opentelemetry:
Container ID: containerd://24b7aa7186e49c44cfa5ba59f7a580ac4e61f9f527e2052176d927196559b752
Image: yjkim1ntels/ingress-nginx:opentelemetry
Image ID: docker.io/yjkim1ntels/ingress-nginx@sha256:d86b3679ad13c510d5991f5e2c6e34dc5d2e957e60b493487cff875a138ed806
Port: <none>
Host Port: <none>
Command:
sh
-c
/usr/local/bin/init_module.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 06 Apr 2022 05:03:34 +0000
Finished: Wed, 06 Apr 2022 05:03:34 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/modules_mount from modules (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xn5cv (ro)
Containers:
controller:
Container ID: containerd://96113bb6bdbd35d91534ef40c5794d2ac1c551b06b64bd0d91dbb57c4042453f
Image: k8s.gcr.io/ingress-nginx/controller:v1.1.3@sha256:31f47c1e202b39fadecf822a9b76370bd4baed199a005b3e7d4d1455f4fd3fe2
Image ID: k8s.gcr.io/ingress-nginx/controller@sha256:31f47c1e202b39fadecf822a9b76370bd4baed199a005b3e7d4d1455f4fd3fe2
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Wed, 06 Apr 2022 05:03:35 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-5c5846d6f7-2b7gr (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/modules_mount from modules (rw)
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xn5cv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
modules:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-xn5cv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned default/ingress-nginx-controller-5c5846d6f7-2b7gr to trace2
Normal Pulling 20m kubelet Pulling image "yjkim1ntels/ingress-nginx:opentelemetry"
Normal Pulled 20m kubelet Successfully pulled image "yjkim1ntels/ingress-nginx:opentelemetry" in 5.495976532s
Normal Created 20m kubelet Created container opentelemetry
Normal Started 20m kubelet Started container opentelemetry
Normal Pulled 20m kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v1.1.3@sha256:31f47c1e202b39fadecf822a9b76370bd4baed199a005b3e7d4d1455f4fd3fe2" already present on machine
Normal Created 20m kubelet Created container controller
Normal Started 20m kubelet Started container controller
Normal RELOAD 20m nginx-ingress-controller NGINX reload triggered due to a change in configuration
What happened:
- An error occurred while adding the opentelemetry module as a sidecar to ingress-nginx.
ls -al /modules_mount/etc/nginx/modules/modules/otel_ngx_module.so
-rwxr-xr-x 1 root root 8077416 Apr 6 04:20 /modules_mount/etc/nginx/modules/modules/otel_ngx_module.so
# load_module directive added to nginx.conf
bash-5.1$ cat nginx.conf | head -n 3
load_module /modules_mount/etc/nginx/modules/modules/otel_ngx_module.so;
# Configuration checksum: 6642706386070326205
nginx -s reload
# error
2022/04/06 04:45:48 [emerg] 619#619: dlopen() "/modules_mount/etc/nginx/modules/modules/otel_ngx_module.so" failed (Error relocating /modules_mount/etc/nginx/modules/modules/otel_ngx_module.so: _ZN13opentelemetry5proto5trace2v14Span8CopyFromERKS3_: symbol not found) in /etc/nginx/nginx.conf:1
nginx: [emerg] dlopen() "/modules_mount/etc/nginx/modules/modules/otel_ngx_module.so" failed (Error relocating /modules_mount/etc/nginx/modules/modules/otel_ngx_module.so: _ZN13opentelemetry5proto5trace2v14Span8CopyFromERKS3_: symbol not found) in /etc/nginx/nginx.conf:1
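The missing symbol demangles to opentelemetry::proto::trace::v1::Span::CopyFrom(opentelemetry::proto::trace::v1::Span const&), which suggests the module was linked against opentelemetry-proto/protobuf libraries that are not present (or not compatible) in the controller image, rather than a wrong file path. A minimal diagnostic sketch, assuming binutils (readelf, nm, c++filt) is available on the workstation; the pod name is the one from this report:
# copy the module out of the running controller pod
kubectl cp default/ingress-nginx-controller-5c5846d6f7-2b7gr:/modules_mount/etc/nginx/modules/modules/otel_ngx_module.so ./otel_ngx_module.so -c controller
# shared libraries the module declares it needs at runtime (NEEDED entries)
readelf -d ./otel_ngx_module.so | grep NEEDED
# demangle the symbol from the dlopen error to see which library should provide it
echo _ZN13opentelemetry5proto5trace2v14Span8CopyFromERKS3_ | c++filt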
What you expected to happen:
- I want the nginx-ingress-controller and opentelemetry module to work together.
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 48 (44 by maintainers)
@longwuyuan I can confirm that on the latest image
gcr.io/k8s-staging-ingress-nginx/opentelemetry:v20220906-controller-v1.3.1-3-g981ce38a7 the problem with libopentelemetry_exporter_otlp_grpc.so is reproducible.
@esigo we only use otel_ngx_module.so; we build our own nginx.
@kuzaxak This issue is closed. Please track all opentelemetry things in https://github.com/kubernetes/ingress-nginx/issues/9016
I finally got a local version of the nginx controller running with the opentelemetry sidecar. The changes needed to get it working can be checked at https://github.com/Tobrek/ingress-nginx/commit/555cb568026cbc25421b95e499a2df168105c0c8 Before anybody says something … I know that this is no final solution. There are some open points (e.g. nginx.tmpl must be adapted to load the module when the sidecar is configured, and I simply disabled the checksum check when building opentelemetry-cpp-contrib). But for me it was just important to get a running version with otel active. I will not work on a proper solution, because I have what I need for the moment (and that's a task for people who know what they are doing 😄 ). This solution should just be an idea of how to get it working. tobrek.md contains a description of the steps for how I built the different images, the otel config k8s ConfigMap, and the helm command / values.
Hope this helps, and I'm looking forward to a released nginx version with the opentelemetry sidecar 🙂
We can edit the Dockerfile for opentelemetry under /images and promote again. What would be a good way to get a list of the missing modules?
Thanks, Long
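On the question of getting a list of missing modules: one possible approach, sketched here under the assumption that otel_ngx_module.so has been copied out of the pod as shown earlier and that the controller image's shared libraries have been copied into a local ./libs directory (both paths are assumptions), is to diff the symbols the module needs against the symbols those libraries define:
# every dynamic symbol the module expects some other library to provide
nm -D -u ./otel_ngx_module.so | awk '{print $2}' | sort -u > needed.txt
# every dynamic symbol defined by the libraries shipped in the controller image
nm -D --defined-only ./libs/*.so* | awk 'NF==3 {print $3}' | sort -u > provided.txt
# symbols nothing provides -> candidates for the missing-libraries list
comm -23 needed.txt provided.txt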