argo-events: Gateway pod is leaking memory.
Describe the bug
Gateway pod is leaking memory. Is this a bug? Is there a deployment method to prevent it?
Environment (please complete the following information):
- OS: macOS Mojave 10.14.4
- minikube: v1.0.1
- Helm: v2.14.0
- argo workflow: v2.2.1
- Argo-events: v0.9.2
To Reproduce
Set up minikube & helm.
minikube start
minikube addons enable ingress
helm init
Install argo & argo-events.
helm repo add argo https://argoproj.github.io/argo-helm
kubectl create namespace argo
helm install argo/argo --name argo --namespace argo
kubectl create namespace argo-events
helm install argo/argo-events --name argo-events --namespace argo-events
Apply the argo-events webhook samples.
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/master/examples/event-sources/webhook.yaml
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/master/examples/gateways/webhook.yaml
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/master/examples/sensors/webhook.yaml
Set up the monitoring environment. Check the minikube IP.
minikube ip
Add minikube IP to /etc/hosts.
sudo vim /etc/hosts
192.168.99.106 alertmanager.minikube prometheus.minikube grafana.minikube
Prepare prometheus.yaml.
alertmanager:
  ingress:
    enabled: true
    hosts:
      - alertmanager.minikube
  persistentVolume:
    size: 1Gi
    storageClass: "standard"
server:
  ingress:
    enabled: true
    hosts:
      - prometheus.minikube
  persistentVolume:
    size: 1Gi
    storageClass: "standard"
  retention: "12h"
pushgateway:
  enabled: false
Install Prometheus.
kubectl create namespace monitoring
helm install --name prometheus --namespace monitoring -f prometheus.yaml stable/prometheus
Access http://prometheus.minikube and run the following query:
container_memory_working_set_bytes{pod_name="webhook-gateway-http"}
Screenshots
About this issue
- State: closed
- Created 5 years ago
- Comments: 16 (3 by maintainers)
Here’s my update:
Migrating from 0.11 to 0.12 was terrible – no docs and a ton of model changes. Thankfully, the changes were fairly good and well structured, so 🙌 for that!
0.12 seems to have fixed the memory leaks 🎉 🎉 🥂
Here are the notes I took while migrating, and a summary of changes I noticed:
Webhook Gateway
- The gateway spec now just takes the type (`webhook`) and the further configs; the example for a webhook event source is easy enough to follow.
Webhook Sensor
- Dependencies are now `{ name, gatewayName, eventName }` objects. EX: the old format of `my-gateway:my-event-source` is now `{ name: 'myEventName', gatewayName: 'my-gateway', eventName: 'my-event-source' }`, and you can access this in parameters using `myEventName`.
- `eventProtocol` changed to a `subscription` object to attach to the gateway.
- `parameters` as a child of `template` see the entire `template` object.
- `parameters` as a child of the `k8s` object in the template see `k8s.source` as the object – so, one extra layer deep. Example below.
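Roughly, a v0.12-style webhook sensor with the new dependency and subscription shape looks something like the sketch below. I'm going from memory off the upstream webhook example, so treat the exact field layout and the names (webhook-gateway, example, etc.) as approximate placeholders:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook-sensor
spec:
  # new-style dependency: an object instead of "my-gateway:my-event-source"
  dependencies:
    - name: myEventName
      gatewayName: webhook-gateway
      eventName: example
  # replaces the old eventProtocol block
  subscription:
    http:
      port: 9300
  triggers:
    - template:
        name: webhook-workflow-trigger
        # resource trigger config now sits one level deeper, under k8s
        k8s:
          group: argoproj.io
          version: v1alpha1
          resource: workflows
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              # ...rest of the workflow manifest...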
Other notes:
- A `contextKey` or a `dataKey` selects from the `{ context, data }` shape, where `context` is meta about the event itself and `data` is the base64 encoded raw of whatever the payload is.
- `data` can take different forms/values; here `data` is the JSON of the full http request, IE: `{ header, body }`.
- So to get at the request body you need `dataKey: body.[...]`, like `dataKey: body.some.json.path`.
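For example, a trigger parameter that pulls a value out of the webhook body would look roughly like this (the dest path and `body.some.json.path` are just placeholders of mine, not from the upstream sample):
parameters:
  - src:
      dependencyName: myEventName
      dataKey: body.some.json.path
    dest: spec.arguments.parameters.0.value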
- Where you put `parameters` in a trigger definition is important:
  - `template.parameters` applies to the `template` document – e.g. `k8s.source.hit` as the dest – before `k8s.source` gets resolved/built.
  - `template.k8s.parameters` gets applied to the `template.k8s.source` object – e.g. `spec.arguments.parameters.0` as the dest, inside the `k8s` document.
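To make the difference concrete, the two placements sit roughly like this (the dest and dataKey values are only placeholders, and I'm going from memory on the exact layout):
triggers:
  - template:
      name: webhook-workflow-trigger
      # applied to the whole template document, before k8s.source is resolved,
      # so dest paths are relative to the template itself
      parameters:
        - src:
            dependencyName: myEventName
            dataKey: body.some.json.path
          dest: k8s.source.hit
      k8s:
        # ...group/version/resource/operation/source as above...
        # applied to the k8s.source object, so dest paths are relative to it
        parameters:
          - src:
              dependencyName: myEventName
              dataKey: body.some.json.path
            dest: spec.arguments.parameters.0.value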
@emilssolmanis It’s been 12 days since your comment. Did the upgrade fix the leak? Did you upgrade from 0.10? Any tips when upgrading?
This is definitely here on `v0.11`, had it die after ~15 days or so.