uptime-kuma: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
⚠️ Please verify that this bug has NOT been raised before.
- I checked and didn’t find a similar issue
🛡️ Security Policy
- I agree to have read this project Security Policy
Description
Logged in this evening to find no monitors and the following error displayed:
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Full startup log below. Is this a known issue?
Matt
👟 Reproduction steps
- Login to Kuma
- No monitors or status pages displayed
- Error message appears on screen
- Error logged
👀 Expected behavior
Login is normal and I can view monitors/status pages, etc.
😓 Actual Behavior
- No monitors or status pages displayed
- Error message appears on screen
- Error logged
🐻 Uptime-Kuma Version
1.18.5
💻 Operating System and Arch
louislam/uptime-kuma Container Image
🌐 Browser
107.0.5304.110
🐋 Docker Version
Amazon Fargate LATEST (1.4.0)
🟩 NodeJS Version
No response
📝 Relevant log output
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
    at runNextTicks (node:internal/process/task_queues:61:5)
    at listOnTimeout (node:internal/timers:528:9)
    at processTimers (node:internal/timers:502:7)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:588:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:574:22)
    at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:609:19)
2022-11-23T21:04:59.523Z [MONITOR] ERROR: Caught error
2022-11-23T21:04:59.523Z [MONITOR] ERROR: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
    at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
    at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
    at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
    at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:588:22)
    at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:574:22)
    at async Function.calcUptime (/app/server/model/monitor.js:826:22)
    at async Function.sendUptime (/app/server/model/monitor.js:889:24)
    at async Function.sendStats (/app/server/model/monitor.js:768:13) {
  sql: '\n' +
    ' SELECT\n' +
    ' -- SUM all duration, also trim off the beat out of time window\n' +
    ' SUM(\n' +
    ' CASE\n' +
    ' WHEN (JULIANDAY(time) - JULIANDAY(?)) * 86400 < duration\n' +
    ' THEN (JULIANDAY(time) - JULIANDAY(?)) * 86400\n' +
    ' ELSE duration\n' +
    ' END\n' +
    ' ) AS total_duration,\n' +
    '\n' +
    ' -- SUM all uptime duration, also trim off the beat out of time window\n' +
    ' SUM(\n' +
    ' CASE\n' +
    ' WHEN (status = 1)\n' +
    ' THEN\n' +
    ' CASE\n' +
    ' WHEN (JULIANDAY(time) - JULIANDAY(?)) * 86400 < duration\n' +
    ' THEN (JULIANDAY(time) - JULIANDAY(?)) * 86400\n' +
    ' ELSE duration\n' +
    ' END\n' +
    ' END\n' +
    ' ) AS uptime_duration\n' +
    ' FROM heartbeat\n' +
    ' WHERE time > ?\n' +
    ' AND monitor_id = ?\n' +
    ' ',
  bindings: [
    '2022-10-24 21:03:59',
    '2022-10-24 21:03:59',
    '2022-10-24 21:03:59',
    '2022-10-24 21:03:59',
    '2022-10-24 21:03:59',
    27
  ]
}
    at process. (/app/server/server.js:1728:13)
    at process.emit (node:events:513:28)
    at emit (node:internal/process/promises:140:20)
    at processPromiseRejections (node:internal/process/promises:274:27)
    at processTicksAndRejections (node:internal/process/task_queues:97:32)
2022-11-23T21:04:59.514Z [MONITOR] ERROR: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
2022-11-23T21:04:59.514Z [MONITOR] ERROR: Caught error
2022-11-23T21:04:59.482Z [MONITOR] ERROR: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
2022-11-23T21:04:59.482Z [MONITOR] ERROR: Caught error
2022-11-23T21:03:59.649Z [MONITOR] WARN: Monitor https://github.com/louislam/uptime-kuma/issues/49 '': Failing: Request failed with status code 401 | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
2022-11-23T21:03:41.599Z [AUTH] INFO: Successfully logged in user . IP=
2022-11-23T21:03:41.431Z [AUTH] INFO: Username from JWT:
2022-11-23T21:03:41.428Z [AUTH] INFO: Login by token. IP=
2022-11-23T21:03:23.632Z [SERVER] INFO: Listening on 3001
2022-11-23T21:03:23.623Z [SERVER] INFO: Adding socket handler
2022-11-23T21:03:23.623Z [SERVER] INFO: Init the server
2022-11-23T21:03:23.588Z [SERVER] INFO: Adding route
2022-11-23T21:03:23.550Z [SERVER] INFO: Load JWT secret from database.
2022-11-23T21:03:23.398Z [DB] INFO: Your database version: 10
2022-11-23T21:03:23.398Z [DB] INFO: Latest database version: 10
2022-11-23T21:03:23.398Z [DB] INFO: Database patch not needed
2022-11-23T21:03:23.398Z [DB] INFO: Database Patch 2.0 Process
2022-11-23T21:03:23.384Z [DB] INFO: SQLite Version: 3.38.3
2022-11-23T21:03:23.385Z [SERVER] INFO: Connected
[ { cache_size: -12000 } ]
[ { journal_mode: 'wal' } ]
2022-11-23T21:03:23.377Z [DB] INFO: SQLite config:
2022-11-23T21:03:23.046Z [SERVER] INFO: Connecting to the Database
2022-11-23T21:03:23.044Z [DB] INFO: Data Dir: ./data/
2022-11-23T21:03:22.966Z [SERVER] INFO: Version: 1.18.5
2022-11-23T21:03:22.900Z [NOTIFICATION] INFO: Prepare Notification Providers
2022-11-23T21:03:22.816Z [SERVER] INFO: Importing this project modules
2022-11-23T21:03:22.813Z [SERVER] INFO: Server Type: HTTP
2022-11-23T21:03:22.812Z [SERVER] INFO: Creating express and socket.io instance
2022-11-23T21:03:22.065Z [SERVER] INFO: Importing 3rd-party libraries
2022-11-23T21:03:22.064Z [SERVER] INFO: Welcome to Uptime Kuma
2022-11-23T21:03:22.064Z [SERVER] INFO: Node Env: production
2022-11-23T21:03:22.064Z [SERVER] INFO: Importing Node libraries
Your Node.js version: 16
Welcome to Uptime Kuma
==> Starting application with user 0 group 0
==> Performing startup jobs and maintenance tasks
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 30 (3 by maintainers)
Same error, and it happens every night at a specific time (3:17 AM).
I am getting the same error running on Kubernetes. Found this while looking in the knex repo: https://github.com/knex/knex/issues/2820
Users are strongly encouraged to update to 1.23 before reporting related issues. You can either try the beta now or wait for an official release soon.

The server runs the task that clears monitor history data beyond the defined period at 03:14 AM each day (server time). 1.23 includes PR #2800 and #3380, which improve database write-to-disk behavior and how deletes are handled. Database operations are still blocking, but they should now take less time to process.

If you are still having issues, pressing the “Settings” -> “Monitor History” -> “Shrink Database” button should also help in the short term (the description previously written is not entirely accurate). Finally, disk performance is important: if your server has poor IO performance and/or you are running a large number of monitors, the chance of this error occurring increases.
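The nightly history-clearing job described above amounts to a retention delete, and “Shrink Database” then reclaims the freed pages via SQLite's VACUUM. A minimal sketch of both steps using Python's stdlib sqlite3 — the `heartbeat` table name matches the stack trace, but the columns, sample rows, and cutoff date are simplified assumptions, not Uptime Kuma's actual code:

```python
import sqlite3

# In-memory stand-in for Kuma's SQLite database (columns are simplified assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE heartbeat (monitor_id INTEGER, time TEXT, status INTEGER, duration REAL)")
conn.execute("INSERT INTO heartbeat VALUES (27, '2022-10-01 00:00:00', 1, 60)")  # outside the window
conn.execute("INSERT INTO heartbeat VALUES (27, '2022-11-20 00:00:00', 1, 60)")  # inside the window

# The nightly job conceptually drops beats older than the
# "Keep monitor history" retention window.
cutoff = "2022-10-24 21:03:59"
conn.execute("DELETE FROM heartbeat WHERE time <= ?", (cutoff,))
conn.commit()

# "Shrink Database" then rebuilds the file to reclaim the freed space.
conn.execute("VACUUM")

remaining = conn.execute("SELECT COUNT(*) FROM heartbeat").fetchone()[0]
print(remaining)  # -> 1
```

Note that both DELETE and VACUUM hold the write lock, which is consistent with monitors timing out while the job runs.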
Same… not sure what Uptime Kuma does at that time, but multiple monitors go offline with this error at 3:14 for me, then come back online about 4 minutes later. Maybe it’s some DB cleanup process that hammers the DB and causes it, I suppose.
It’s in my 2.0 roadmap. https://github.com/users/louislam/projects/4
What fixed this for me was Settings -> Monitor History -> Clear all Statistics, then changing “Keep monitor history” to 7 days.
This is likely not a CPU power issue but an issue of having too much data in SQLite, which makes queries take longer (and ultimately time out). I believe the old default for “Keep monitor history” was 0 (keep forever); that default should be changed to something like 7 or 14 days. I probably had a year’s worth of data, which is also pretty useless, but since I cleared everything I haven’t had any issues.
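The uptime query shown in the trace above (from `Monitor.calcUptime`) can be reproduced standalone to see what each monitor's stats broadcast runs against the `heartbeat` table; with months of beats per monitor, this scan is plausibly what starves the connection pool. A minimal reproduction with Python's stdlib sqlite3 — the query and bindings mirror the logged SQL, while the schema and sample rows are simplified assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE heartbeat (monitor_id INTEGER, time TEXT, status INTEGER, duration REAL)")
# Two beats inside the 30-day window: one up (status=1), one down (status=0).
conn.execute("INSERT INTO heartbeat VALUES (27, '2022-11-01 00:00:00', 1, 60)")
conn.execute("INSERT INTO heartbeat VALUES (27, '2022-11-02 00:00:00', 0, 60)")

start = "2022-10-24 21:03:59"  # window start, as in the logged bindings
row = conn.execute("""
    SELECT
        -- SUM all duration, trimming beats that fall outside the window
        SUM(CASE
            WHEN (JULIANDAY(time) - JULIANDAY(?)) * 86400 < duration
            THEN (JULIANDAY(time) - JULIANDAY(?)) * 86400
            ELSE duration END) AS total_duration,
        -- SUM only the up-time duration (status = 1)
        SUM(CASE WHEN (status = 1) THEN
            CASE
                WHEN (JULIANDAY(time) - JULIANDAY(?)) * 86400 < duration
                THEN (JULIANDAY(time) - JULIANDAY(?)) * 86400
                ELSE duration END
            END) AS uptime_duration
    FROM heartbeat
    WHERE time > ? AND monitor_id = ?
""", (start, start, start, start, start, 27)).fetchone()
print(row)  # -> (120.0, 60.0)
```

Every row in the window is scanned and run through JULIANDAY twice, so the cost grows linearly with retained history, which supports the "too much data" theory above.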
I have this error from time to time, causing a burst of downtime notifications that are quickly resolved. It would be nice to get rid of those false positives.
I had the same issue. As soon as I try to delete a specific monitor (which has a lot of events associated with it), I get:

Then the DB is corrupted and I have to (force) stop the container and restore an old DB to get Uptime Kuma working. Deleting other monitors worked.

I think it is a timeout somewhere related to a big SQL query. I have no performance issues otherwise (it is a big VM).
docker version:
Uptime Kuma is on the latest version.
Thanks,
EDIT: after stopping the container and waiting for it to stop (…a very long time), then removing it and starting it again (a long wait before it becomes healthy and available again), it worked.
I got it one or two times. At least not daily. On 10 Sep 2023 at 10:14, Uthpal P @.***> wrote: @toineenzo Did upgrading to 1.23 work? I’m still facing this error after the upgrade.
I have discovered the problem, for me at least. I am running Uptime Kuma inside Docker on a NAS. When disk activity was high, I would get this error message.
Once I addressed the continuous high disk read/write activity, the messages stayed away.
Hopefully someone else can benefit from this response.
Happening to me as well; editing a monitor kills Uptime Kuma.
For reference, I currently seem to have ameliorated this issue by changing the connection pool settings to:
I can’t remember which issue it was, but there was a suggestion about splitting the config and results into two separate databases, which would make sense. For the results database, a time-series one would be an appropriate choice; then we could stick with SQLite for config.
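SQLite can approximate the split suggested above without a second engine: keep config and results in separate database files and bridge them with ATTACH, so heavy heartbeat writes lock a different file than config reads. A sketch with Python's stdlib sqlite3 — the file layout, table names, and columns here are illustrative assumptions, not anything Uptime Kuma actually does:

```python
import sqlite3

# Two in-memory databases stand in for separate config and results files.
conn = sqlite3.connect(":memory:")                      # config database
conn.execute("ATTACH DATABASE ':memory:' AS results")   # results database

conn.execute("CREATE TABLE monitor (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE results.heartbeat (monitor_id INTEGER, time TEXT, status INTEGER)")

conn.execute("INSERT INTO monitor VALUES (27, 'example')")
conn.execute("INSERT INTO results.heartbeat VALUES (27, '2022-11-23 21:04:59', 1)")

# Cross-database joins still work through the attached schema name.
row = conn.execute("""
    SELECT m.name, h.status
    FROM monitor m JOIN results.heartbeat h ON h.monitor_id = m.id
""").fetchone()
print(row)  # -> ('example', 1)
```

With on-disk files, each attached database has its own write lock, which is the property the suggestion is after.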
I have the same issue and it’s not due to an underpowered machine.