kubernetes-ingress: memory leak

I’m currently running the ingress controller in a simple test setup. I’ve had the images set to auto-update (using the keel project). A month or so ago an update (unfortunately I don’t know which version exactly, as I’m simply tracking latest) started making pod memory go through the roof. The controller/pods are doing essentially nothing (although some prometheus endpoints may be getting hit automatically by the cluster itself), and memory usage climbs linearly. For example, I currently have 3 replicas that have been running for 14 hours, and each is sitting just under 6GB of memory usage.
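(For reference, a quick way to watch the per-pod usage, assuming metrics-server is available in the cluster, is something like:

kubectl top pods -n haproxy-controller -l run=haproxy-ingress

where run=haproxy-ingress is the label used by the deployment in the yaml below.)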

Here’s the diff of the changes I’ve made to the yaml:

--- haproxy-ingress.yaml.1	2019-10-13 16:39:17.459985188 -0600
+++ haproxy-ingress.yaml	2019-07-05 21:35:31.672175096 -0600
@@ -81,7 +81,7 @@
 kind: ConfigMap
 metadata:
   name: haproxy-configmap
-  namespace: default
+  namespace: haproxy-controller
 data:
 
 ---
@@ -133,8 +133,14 @@
     run: haproxy-ingress
   name: haproxy-ingress
   namespace: haproxy-controller
+  annotations:
+    keel.sh/pollSchedule: "@every 24h"
+    #keel.sh/pollSchedule: "@every 3m"
+    keel.sh/trigger: poll
+    keel.sh/policy: force
+    keel.sh/match-tag: "true"
 spec:
-  replicas: 1
+  replicas: 3
   selector:
     matchLabels:
       run: haproxy-ingress
@@ -148,7 +154,8 @@
       - name: haproxy-ingress
         image: haproxytech/kubernetes-ingress
         args:
-          - --configmap=default/haproxy-configmap
+          - --default-ssl-certificate=haproxy-controller/tls-secret
+          - --configmap=haproxy-controller/haproxy-configmap
           - --default-backend-service=haproxy-controller/ingress-default-backend
         resources:
           requests:

I have just noticed that the secret referenced by --default-ssl-certificate doesn’t exist, so maybe that’s connected.
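If that turns out to be connected, creating the missing secret should be straightforward; something along these lines, with placeholder cert/key paths:

kubectl create secret tls tls-secret \
  --namespace haproxy-controller \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key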

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 16 (8 by maintainers)

Most upvoted comments

@travisghansen you already helped us a lot with this report 😃

This small increase over time will probably be a bit harder to catch, but rest assured, we will find the true reason for it. I closed the issue because of the major leak we already fixed.

To be clear, I made the comment above about my current memory usage simply for historical/informational purposes, not because I’m dissatisfied or think this should be re-opened.