keda: STAN scaler does not scale from 0
Expected Behavior
When there are messages in the queue and the HPA is scaled down to 0, the scaler should scale the deployment up to at least one pod so that the subscription with STAN gets created.
Actual Behavior
When the scaler loop runs, the logic only checks whether the pending message count is greater than 0 and whether the queue names match. It never accounts for the case where the subscription does not exist at all.
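For illustration, here is a minimal Go sketch of the kind of check described above (hypothetical types and field names, not KEDA's actual code): because the activity test only inspects pending counts on existing subscriptions, a channel with queued messages but an empty subscriptions list is treated as inactive.

```go
package main

import "fmt"

// channelInfo loosely mirrors the channelsz response shown in the reproduction
// steps below (hypothetical struct, for illustration only).
type channelInfo struct {
	Name          string         `json:"name"`
	Msgs          int64          `json:"msgs"`
	Subscriptions []subscription `json:"subscriptions"`
}

type subscription struct {
	QueueName    string `json:"queue_name"`
	PendingCount int64  `json:"pending_count"`
}

// isActive reproduces the reported behavior: it only returns true when an
// existing subscription for the queue has pending messages, so a channel with
// msgs > 0 but no subscriptions at all is treated as inactive and the
// deployment is never scaled up from 0.
func isActive(ch channelInfo, queue string) bool {
	for _, s := range ch.Subscriptions {
		if s.QueueName == queue && s.PendingCount > 0 {
			return true
		}
	}
	return false // the "no subscription yet, but msgs > 0" case falls through here
}

func main() {
	ch := channelInfo{Name: "myQueue", Msgs: 500, Subscriptions: nil}
	fmt.Println(isActive(ch, "place_indexer")) // false, even though 500 messages are waiting
}
```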
Steps to Reproduce the Problem
- Create a new ScaledObject that can scale to 0:
```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: test-scaledobject
  labels:
    deploymentName: test
spec:
  pollingInterval: 10  # Optional. Default: 30 seconds
  cooldownPeriod: 30   # Optional. Default: 300 seconds
  minReplicaCount: 0   # Optional. Default: 0
  maxReplicaCount: 5   # Optional. Default: 100
  scaleTargetRef:
    deploymentName: test
  triggers:
    - type: stan
      metadata:
        natsServerMonitoringEndpoint: "stan.default:8222"
        queueGroup: "place_indexer"
        durableName: "ImDurable"
        subject: myQueue
        lagThreshold: "100"
```
- cURL the STAN channel monitoring endpoint (see the sketch after this list). It will return a response like:
```json
{
  "name": "myQueue",
  "msgs": 500,
  "bytes": 0,
  "first_seq": 0,
  "last_seq": 0,
  "subscriptions": []
}
```
- KEDA will ignore the msgs field because there are no active subscriptions.
- If minReplicaCount is set to 1, scaling works normally.
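To observe this condition directly, the sketch below (assuming the standard NATS Streaming /streaming/channelsz monitoring path on the endpoint from the ScaledObject above; adjust host and port for your deployment) fetches the channel info and reports when messages are waiting with no subscribers:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// channel keeps only the fields of the channelsz response we care about here.
type channel struct {
	Name          string            `json:"name"`
	Msgs          int64             `json:"msgs"`
	Subscriptions []json.RawMessage `json:"subscriptions"`
}

func main() {
	// Monitoring endpoint from the ScaledObject above; subs=1 includes subscription details.
	url := "http://stan.default:8222/streaming/channelsz?channel=myQueue&subs=1"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var ch channel
	if err := json.NewDecoder(resp.Body).Decode(&ch); err != nil {
		log.Fatal(err)
	}

	if ch.Msgs > 0 && len(ch.Subscriptions) == 0 {
		// This is the case the scaler currently ignores: messages are waiting
		// but there is no subscription, so it never scales up from 0.
		fmt.Printf("channel %q has %d messages and no subscriptions\n", ch.Name, ch.Msgs)
	}
}
```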
Specifications
- Version: 1.0.0
- Platform: Linux
- Scaler(s): STAN
Yes, I just did a full deploy using the master image tag and scale up/down is working for me.
When this error shows up, does it stop scaling? Care to show what kubectl get hpa looks like? And also your ScaledObject YAML. Thanks!

After the scale down occurred, the number of subscribers dropped to 1. I'm not sure how long it took for that to happen, though.
I opened an issue to ask about best practices for this:
https://github.com/nats-io/nats-streaming-server/issues/1001
I do use MaxInFlight, but I set it to 50 since each of my pods can handle that number of concurrent requests.

I saw that one of the newer versions of STAN/NATS has a keepalive that it sends periodically to make sure a subscriber is still listening, but I'm not sure of the relationship between a connection being open and a subscription existing for it. I'm going to open an issue in the STAN repo and see whether the team can provide some best practices on it.
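For context, a rough sketch of such a subscription with the nats-io/stan.go client (placeholder cluster and client IDs; the DurableName and MaxInflight options come from that package):

```go
package main

import (
	"log"

	stan "github.com/nats-io/stan.go"
)

func main() {
	// Placeholder cluster and client IDs; adjust for your deployment.
	sc, err := stan.Connect("test-cluster", "test-client",
		stan.NatsURL("nats://stan.default:4222"))
	if err != nil {
		log.Fatal(err)
	}
	defer sc.Close()

	// Durable queue subscription matching the ScaledObject trigger above,
	// with MaxInflight capped at 50 as described in the comment.
	sub, err := sc.QueueSubscribe("myQueue", "place_indexer",
		func(m *stan.Msg) {
			// handle the message, then ack it
			m.Ack()
		},
		stan.DurableName("ImDurable"),
		stan.MaxInflight(50),
		stan.SetManualAckMode(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Close()

	select {} // block forever while processing messages
}
```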