questdb: About "could not get table writer"
Describe the bug
Hello, I have a project that receives Binance spot data from a websocket and writes it to a QuestDB database. But after a certain amount of time (around 90 million rows) I can no longer insert over the wire protocol. What could be the reason? Thank you in advance for your help.
This is the error in the error log:
2023-09-09T08:00:44.900294Z I i.q.c.p.WalWriterPool could not get, busy [table=binance_spot~25, thread=54, retries=5]
2023-09-09T08:00:44.900300Z I i.q.c.l.t.LineTcpMeasurementScheduler could not get table writer [tableName=binance_spot, ex=table busy [reason=unknown]]
To reproduce
- Create the table with CREATE TABLE IF NOT EXISTS binance_spot(pair SYMBOL CAPACITY 2000 NOCACHE INDEX CAPACITY 512, interval SYMBOL CAPACITY 100 NOCACHE INDEX CAPACITY 512, open double, close double, high double, low double, volume double, time timestamp) timestamp(time) PARTITION BY DAY WAL DEDUPLICATE UPSERT KEYS(time, pair, interval)
- Insert more than 90 million rows
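For context, the failing ingestion path is InfluxDB line protocol (ILP) over TCP. A minimal sketch of formatting and sending rows for the table above, assuming QuestDB's default ILP TCP port 9009; the host, port, and sample values are placeholders, and a real writer would use the official QuestDB client library instead:

```python
# Minimal ILP-over-TCP sketch for the binance_spot table (assumed setup:
# QuestDB listening on localhost:9009; values below are illustrative).
import socket


def ilp_line(pair: str, interval: str, o: float, c: float, h: float,
             l: float, vol: float, ts_ns: int) -> str:
    """Format one binance_spot row as an InfluxDB line protocol record.

    SYMBOL columns (pair, interval) go in the tag set, DOUBLE columns in
    the field set, and the designated timestamp is in nanoseconds.
    """
    return (f"binance_spot,pair={pair},interval={interval} "
            f"open={o},close={c},high={h},low={l},volume={vol} {ts_ns}\n")


def send_lines(lines, host="localhost", port=9009):
    # One connection per writer; many concurrent ILP connections writing to
    # the same table can contend for its single WAL writer.
    with socket.create_connection((host, port)) as sock:
        sock.sendall("".join(lines).encode("utf-8"))


# Example record (formatted only, not sent here):
record = ilp_line("BTCUSDT", "1m", 25800.1, 25810.5, 25815.0, 25795.2,
                  12.34, 1694246444900294000)
```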
Expected Behavior
No response
Environment
- **QuestDB version**: 7.3.1
- **OS**: Ubuntu 20.04 (containerd / Docker)
Additional context
No response
About this issue
- Original URL
- State: open
- Created 10 months ago
- Comments: 31 (14 by maintainers)
Okay, I’ll let you know when I’ve done my tests; it might take a while. I’m actually using at most 8 InfluxDB line protocol connections, so I’ll observe that in detail.
Thank you for your help. I’ll try this and let you know how it goes. I estimate there will be over 3 billion rows. Do you think this scenario would be enough for me?
Thanks for sharing this.
AFAIR this is the default on most Linux distros. We recommend setting this kernel param to a significantly larger value, e.g. 524288. Please note that you should also set vm.max_map_count to the same value. Could you try increasing these kernel params and see if it helps? My hypothesis is that your issue may be caused by an “open file limit reached” error.
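A quick way to check whether vm.max_map_count is below the suggested value is to parse the output of `sysctl vm.max_map_count`. The sketch below assumes the 524288 figure from the comment above; the sample sysctl output in the comments is illustrative:

```python
# Check a sysctl output line against the value suggested above (524288).
RECOMMENDED = 524288


def parse_sysctl_value(line: str) -> int:
    """Parse a line such as 'vm.max_map_count = 65530' from `sysctl`."""
    return int(line.split("=", 1)[1].strip())


def needs_raising(line: str, recommended: int = RECOMMENDED) -> bool:
    # On a default Ubuntu install, `sysctl vm.max_map_count` typically
    # prints 'vm.max_map_count = 65530', well below the suggested value.
    return parse_sysctl_value(line) < recommended
```

To raise it at runtime you can use `sysctl -w vm.max_map_count=524288`, and persist it across reboots via an entry in `/etc/sysctl.conf`.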
Sure.
I solved the problem by focusing on the unavailability of WAL workers. I reduced the number of workers inserting into the database, and this time the writer-busy errors decreased. I achieved both better performance and a more stable recording method. Thank you for your help. Also, thanks to your building this beautiful database, our real-time data recording has become more performant. Have a good day!
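For readers hitting the same contention, the number of ILP worker threads is tunable in server.conf. To the best of my knowledge the relevant keys look like the fragment below, but verify the exact names against the configuration reference for your QuestDB version:

```
# server.conf sketch (assumed key names; check your version's docs)
line.tcp.io.worker.count=2
line.tcp.writer.worker.count=2
```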
Sorry for the delay. When I run the wal_tables() query, suspended returns false. If the situation recurs, I will check again in detail and let you know.
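The suspension check mentioned here can be run from the SQL console; wal_tables() is a built-in QuestDB function listing WAL-enabled tables and whether each one is suspended:

```sql
SELECT name, suspended FROM wal_tables();
```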