yet-another-cloudwatch-exporter: [BUG] Large number of SQS queues causes missing metrics
Is there an existing issue for this?
- I have searched the existing issues
Current Behavior
We have around 9.5k SQS queues in the eu-west-1 region of one of our prod accounts, but the YACE exporter only provides metrics for around 5k of them.


I have already tried running several YACE instances in parallel:
- split by SQS metrics
- split by searchTags (a rough sketch follows below)
Neither approach improved the situation. I also requested AWS quota increases for GetMetricData (1,000 requests per second) and ListMetrics (100 requests per second), and according to AWS monitoring we are far from reaching those limits.
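For reference, the searchTags split looked roughly like the following minimal sketch (one job, and one YACE instance, per tag value); the dh_squad value squad-a is a placeholder, not one of our real tag values:

discovery:
  jobs:
    # placeholder: one instance scoped to a single dh_squad value
    - type: sqs
      regions:
        - eu-west-1
      searchTags:
        - key: dh_squad
          value: ^squad-a$
      period: 120
      length: 120
      metrics:
        - name: ApproximateAgeOfOldestMessage
          statistics:
            - Average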
In the YACE debug log I couldn't find any entries that would explain the missing metrics.
Expected Behavior
The exporter should provide metrics for all SQS queues (this worked with the official CloudWatch exporter).
Steps To Reproduce
config:
extraArgs:
  scraping-interval: 120
  debug: true
config: |-
  discovery:
    exportedTagsOnMetrics:
      sqs:
        - dh_app
        - dh_country
        - dh_env
        - dh_platform
        - dh_region
        - dh_squad
        - dh_tribe
    jobs:
      - type: sqs
        regions:
          - eu-west-1
        delay: 600
        period: 120
        length: 120
        awsDimensions:
          - QueueName
        metrics:
          - name: ApproximateAgeOfOldestMessage
            statistics:
              - Average
Anything else?
No response
About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 5
- Comments: 16 (6 by maintainers)
Hi, we are having the same issue; happy to see that it was already reported 😃
Hi! We're facing the same issue: we have around 350 queues and some of them are entirely ignored. Reverting to the old CloudWatch exporter fixes the issue. We're using version v0.28.0-alpha.
Hi Thomas, thanks for your quick response.
I created a role with the desired permissions and a trust policy for your user in our stg account:
arn:aws:iam::487596255802:role/yace_debug
This account actually has over 12k SQS queues in the eu-west-1 region 😃
Please ping me if you need anything from my side.