kubernetes-ingress: possible memory leak with latest versions

With the latest releases of the ingress controller I see a pattern that looks like a memory leak on OpenShift 3.11 (k8s v1.11.0+d4cacc0).

Tested with a build from the master branch, the 1.4.4, 1.4.3, and 1.4.2 tags, and lastly the official alpine latest image. All of them exhibit similar behavior: after startup the container uses roughly 450M of RAM, which drops a little and then rises up to the max RAM limit. Screenshot_20200518_101307 shows the restarts and OOM kills in the containers' memory usage graph. There is nothing suspicious in the container log:

2020/05/18 08:07:18 
 _   _    _    ____                                            
| | | |  / \  |  _ \ _ __ _____  ___   _                       
| |_| | / _ \ | |_) | '__/ _ \ \/ / | | |                      
|  _  |/ ___ \|  __/| | | (_) >  <| |_| |                      
|_| |_/_/   \_\_|   |_|  \___/_/\_\\__, |                      
 _  __     _                       |___/             ___ ____  
| |/ /   _| |__   ___ _ __ _ __   ___| |_ ___  ___  |_ _/ ___| 
| ' / | | | '_ \ / _ \ '__| '_ \ / _ \ __/ _ \/ __|  | | |      
| . \ |_| | |_) |  __/ |  | | | |  __/ ||  __/\__ \  | | |___ 
|_|\_\__,_|_.__/ \___|_|  |_| |_|\___|\__\___||___/ |___\____| 
                                                                                                                         

2020/05/18 08:07:18 HAProxy Ingress Controller v1.4.4 3c07bf3

2020/05/18 08:07:18 Build from: git@github.com:haproxytech/kubernetes-ingress.git
2020/05/18 08:07:18 Build date: 2020-05-07T15:30:32

2020/05/18 08:07:18 ConfigMap: tools-shared/ingress-haproxy
2020/05/18 08:07:18 Ingress class: 
2020/05/18 08:07:18 Publish service: 
2020/05/18 08:07:18 Default backend service: /
2020/05/18 08:07:18 Default ssl certificate: /
2020/05/18 08:07:18 Controller sync period: 5s
2020/05/18 08:07:18 controller.go:263 Running with HA-Proxy version 2.1.4 2020/04/02 - https://haproxy.org/
2020/05/18 08:07:18 INFO    controller.go:268 Starting HAProxy with /etc/haproxy/haproxy.cfg
2020/05/18 08:07:18 INFO    controller.go:273 Running on haproxy-ingress-7d8dfb84f9-fz4vt
[NOTICE] 138/080718 (26) : New worker #1 (27) forked
2020/05/18 08:07:18 INFO    controller.go:107 Running on Kubernetes version: v1.11.0+d4cacc0 linux/amd64
2020/05/18 08:07:18 INFO    monitor.go:35 executing syncPeriod every 5s
2020/05/18 08:07:24 INFO    controller.go:224 HAProxy reloaded
2020/05/18 08:07:30 INFO    controller.go:224 HAProxy reloaded

I’ve checked the RSS of the ingress-controller process inside the pod, and it is the one that starts growing.
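For anyone who wants to repeat this check, here is a minimal sketch of how per-process RSS can be watched from inside the container. It parses /proc directly, so it works even in a minimal alpine image without procps; the `pgrep` pattern in the commented-out loop is an assumption, adjust it to whatever your pod's process list shows:

```shell
#!/bin/sh
# Print the RSS (in kB) of a process, read straight from /proc.
# VmRSS is a standard field in /proc/<pid>/status, so no extra tools needed.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Hypothetical sampling loop (pgrep pattern is an assumption):
# pid=$(pgrep -f ingress-controller)
# while sleep 10; do echo "$(date -u +%T) rss=$(rss_kb "$pid") kB"; done

rss_kb self   # demo: RSS of this shell itself
```

Sampling this over time makes it easy to tell whether the controller process or the HAProxy workers are the ones growing.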

I have the following ConfigMap:

  check: "true"
  rate-limit: "true"
  rate-limit-expire: 30m
  rate-limit-interval: 10s
  rate-limit-size: 100k
  ssl-certificate: tools-shared/tls-secret
  ssl-redirect: "true"
  ssl-redirect-code: "302"
  timeout-client: 50s
  timeout-connect: 5s
  timeout-http-keep-alive: 1m
  timeout-http-request: 5s
  timeout-queue: 5s
  timeout-server: 50s
  timeout-tunnel: 1h

and a service with one endpoint. The ingress controller is started with these args:

    - --configmap=$(POD_NAMESPACE)/ingress-haproxy
    - --namespace-whitelist=$(POD_NAMESPACE)
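For completeness, the sketch below shows how the keys above sit inside the ConfigMap object. This is a reconstruction, not a verbatim dump: the name and namespace are taken from the controller log line `ConfigMap: tools-shared/ingress-haproxy`, and the `apiVersion`/`metadata` wrapper is assumed. One detail worth noting for a memory investigation: if `rate-limit-size: 100k` with `rate-limit-expire: 30m` maps to a HAProxy stick table (as the names suggest), some growth after startup is expected while the table fills, though it should plateau well below the container limit rather than climb until an OOM kill.

```yaml
# Reconstructed manifest (hedged): metadata inferred from the controller log,
# data keys as posted above. ConfigMap data values are always strings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-haproxy
  namespace: tools-shared
data:
  check: "true"
  rate-limit: "true"
  rate-limit-expire: "30m"
  rate-limit-interval: "10s"
  rate-limit-size: "100k"
  # ...remaining ssl-* and timeout-* keys exactly as listed above
```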

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 22 (12 by maintainers)

Most upvoted comments

@oktalz Hi, we were using version 1.3.3 (https://github.com/haproxytech/kubernetes-ingress/releases/tag/v1.3.3), and then I tried the latest, which was 1.4.4 on the master branch.